
What the Rest of the Industry Isn’t Telling You About AI-Powered Bot Attacks

April 17, 2026 | 9 MIN READ

by Gus Siefker

I sat in on the session on AI-powered bot attacks at RH-ISAC this week; it drew one of the biggest crowds of the summit. The content was solid: side-by-side log examples, a clear framing of the detection challenge, a reasonable takeaway about smarter client-side controls. Two things have been sitting with me since.

First, that entire framing is about a threat surface that is now shrinking. The AI traffic that matters most in agentic commerce does not run through a browser at all. No client to instrument, no challenge to present, nothing to fingerprint. If your bot defense still starts at the client, you are defending a layer the traffic has already left behind.

Second, the “block the AI bots” reflex is actively hurting you. GPTBot, ClaudeBot, PerplexityBot, and the rest of the AI crawler fleet are the new Googlebot. If they cannot index you, you do not show up in the answers those assistants give to your prospective customers. Crude AI-bot blocking is not security hygiene. It is opting out of the fastest-growing discovery channel in B2B and B2C.

I know this because we just spent 31 days proving both points at a Cequence customer. So let me lead with the outcome.

3.51 Million Blocked AI-Agent Requests in 31 Days

One of our enterprise customers, a leading global consumer technology brand, came under coordinated attack from a distributed ecosystem of AI agents and automation tools hitting their authentication APIs. The agents were embedded in home automation platforms, AI assistant connectors, and custom developer tools. Real end users, real AI assistants invoking the attack code indirectly through tool-call frameworks, machine-speed authentication from thousands of IPs worldwide. Ordinary traffic by every surface signal. Automated abuse by every behavioral signal.

Over a 31-day window, Cequence blocked 3.51 million unauthorized authentication attempts, peaking at 241,000 blocks in a single day. Zero client-side instrumentation, zero JavaScript SDKs, zero client-side challenges of any kind. The open-source ecosystem behind the attack was deprecated and abandoned by its own maintainer partway through our defense window, with no functional bypass ever published. And while all of that was happening, the legitimate AI crawler traffic our customer depends on for AEO visibility kept operating, unhindered by the active defense.

The Attack Arrived on Three Channels at Once

What makes this case study matter is not just the volume. It is the channel mix. The same campaign showed up across all three of the vectors that break a client-side defense stack, and our platform handled all three the same way.

Channel 1: Direct API interaction

Some of the traffic hit the APIs directly. Native HTTP requests, valid credentials, plausible headers, no browser anywhere in the loop. The classic “there is no client to instrument” problem.

Channel 2: Agentic AI

Some traffic came through AI assistants and agent frameworks calling the attack library as a tool on behalf of end users. This is the MCP-adjacent pattern every enterprise is about to see more of. The end user asks an AI assistant to perform a task, the assistant invokes a tool, the tool authenticates to the API at machine speed. From the defender’s perspective, this traffic is agent-driven even though no human ever touched a bot tool directly.

Channel 3: Browser automation

When the direct paths were blocked, the attackers pivoted to browser automation. Headless Chromium sessions, scripted to simulate human login flows, hoping the real browser shell would restore plausibility. It did not. Real browsers driven by agents still behave like agents when they interact with APIs.

All three channels showed up in the same 31-day window against one customer. A client-side defense stack would have missed the first two entirely and been fooled by the third.

One more outcome worth naming. The attacker community ran a real-time adversarial research effort against us, publicly, on GitHub and Reddit. They rotated user agents. They switched authentication flows. They migrated to self-hosted proxies. They pivoted to headless browser automation. They never figured out that a bot management platform was in the path. They blamed generic infrastructure and burned their ecosystem chasing the wrong theories. A defense that telegraphs itself is a defense the attacker can calibrate against. Every client-side challenge — a fingerprint probe, a puzzle, a behavioral biometric check, an invisible proof-of-work — telegraphs. What we deployed does not.

The AI Crawler Problem the Industry Is Not Talking About

Here is the other thing the industry framing misses, and it is going to show up on someone’s revenue number this year. GPTBot. ClaudeBot. PerplexityBot. Bingbot for Copilot. Google-Extended. These are not the adversaries. They are the new distribution. When your buyer asks their AI assistant “which security vendor has the best agent protection,” the answer comes from whatever those crawlers have been allowed to read. If you blocked them six months ago when the “AI bot” conversation first heated up, you are no longer showing up in that answer. Your competitor is.

This is Answer Engine Optimization, and it is rapidly becoming a peer of SEO in the marketing stack. SEO rewarded patience and keywords. AEO rewards access. If the model cannot crawl your docs, your blog, your case studies, and your product pages, you do not exist in the answer. Traditional bot management tools have no native concept of “this crawler is revenue-positive, that agent is revenue-negative.” They block on signatures and fingerprints, which means the easiest thing for a security team to do is block them all and call it hygiene. That decision costs you pipeline every week it stays in place.
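Those crawlers identify themselves with published user-agent tokens, so admitting them does not mean admitting everything. A minimal robots.txt sketch along these lines (the policy is illustrative, not a recommendation for any specific site):

```text
# Admit the AI crawlers that feed answer engines. Abuse is handled
# behaviorally at the API layer, not here.
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: PerplexityBot
User-agent: Google-Extended
Allow: /
```

Keep in mind that robots.txt is advisory: it governs well-behaved crawlers and does nothing against the malicious agent traffic described above. That separation is exactly why crawler access and agent security need to be independent decisions.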

The customer I described did not pick between security and AEO. They got both at the same time, from the same platform, because the detection layer is behavioral. Legitimate AI crawlers behave like crawlers. Malicious AI agents hitting an authentication endpoint behave like attackers. The two look nothing alike once you stop looking at headers.

What This Means for Agentic Commerce

This is not an edge case. It is a preview of the default. Agentic commerce is here — agents booking travel, comparing prices, placing orders, moving money, querying inventory on behalf of real customers. It is legitimate, revenue-generating, and scaling fast. These agents operate in the same channels we just saw used to attack one of our customers: native API calls with valid tokens, indirect invocation through AI assistants and tool-call frameworks including MCP, and real browsers driven by agents where the fingerprint is genuine because the browser is genuine. For all three, client-side defenses tell you nothing.

Your next customer might discover you through an AI assistant’s answer, which only happens if you let the right crawlers in. That same customer might then instruct their AI assistant to transact with you, which only happens if you let the right agents in. Meanwhile, malicious agents are hitting the same APIs with valid-looking traffic you need to block. Three flows, all automated, all arriving on channels client-side stacks cannot see, all requiring different decisions. A client-side stack cannot make any of them.

What the Industry Sells You vs. What Actually Works

I consistently see three patterns across prospect conversations. First, vendors anchor their AI bot story to the easy demo: a spoofed Chrome User-Agent string caught by a headless-browser fingerprint check. Clean signal, clean catch. But the most sophisticated AI-agent traffic in production today does not look obviously bot-like in its headers or user agents. It looks plausible. It uses valid credentials. It comes from clean residential IPs. It arrives through AI assistants your users actually trust. The easy demo is marketing, not defense.

Second, vendors push client-side defenses as the answer. The category is broader than it used to be — JavaScript SDKs, device fingerprinting, behavioral biometrics, puzzle solvers, invisible proof-of-work, the whole family of “make the client prove something.” None of it works when there is no client to probe, and most of it is already being solved or sidestepped by the agents themselves. Modern AI has no trouble with a puzzle a human would find annoying. If your bot management strategy depends on the client doing something, you’re missing the traffic that matters most.

Third, vendors offer you a “block all AI bots” switch and let you take the AEO damage quietly. This is the one that will show up in next year’s board meeting, in the pipeline review, as a drop in inbound that nobody can quite explain. Crude blocking is easy to configure and expensive to own.

The Questions to Ask Your Current Vendor

Coming out of RH-ISAC, I am asking prospects to put their current bot defense through four tests before their next renewal. What does the product detect when the request comes from a real authenticated agent with a valid token and no browser? How does it handle traffic invoked indirectly through AI assistants and tool-call frameworks, where the end user is real but the authentication is automated? How does it distinguish legitimate AI crawlers that drive AEO visibility from malicious scrapers, without forcing you to pick between security and discoverability? And what does the vendor have to show for defending a real enterprise against a real AI-agent attack across all three channels, not in a demo, in production?

These are not gotcha questions. They are what every enterprise with real agent traffic and real AEO exposure will need answered in 2026.

The Defense Has to Move Where the Traffic Went

The industry conversation about AI-powered bot attacks is stuck on a threat model that is already too small, and on a crawler strategy that is already costing you revenue. Client-side defenses were built for a browser-centric web. Agentic commerce is not going to be browser-centric. The traffic, legitimate and malicious, is arriving through native APIs, through MCP and AI-assistant tool calls, and through agent-driven browsers. Defense has to move with it, and it has to do so without client-side code, without telegraphing itself, without blocking the AI crawlers that now drive discovery, and without blocking the legitimate agent traffic that is about to drive a meaningful share of revenue.

That is what our customer did. 3.51 million blocks, 31 days, zero client-side code, zero legitimate customers disrupted, zero AI crawlers disrupted, zero attacker awareness that a defense platform was in their path — across all three attack channels simultaneously.

If you want to understand how, let’s talk. Cequence Bot Management and our secure agentic AI enablement approach were built for this exact traffic pattern. Wouldn’t you rather have this conversation now than after your next board meeting?

Gus Siefker

Vice President of Sales, Americas & APAC

Gus Siefker leads Cequence's Americas and Asia Pacific regions and brings more than 25 years of enterprise sales experience. His track record includes building strong go-to-market (GTM) teams for SaaS startups. Known for his energy and enthusiasm, Gus is focused on delivering results for customers and his teams alike.
