The Rise & Fall of Single Request Bots

December 19, 2019

The cat-and-mouse game between bot operators executing automated attacks and the vendors working to prevent them has become increasingly sophisticated, with each side applying more intelligence to its efforts. Prevention vendors collect ever richer telemetry, while bot operators reverse engineer those detection techniques and create single request bots that blend in among a sea of valid values.

A perfect bot, commonly referred to as a single request bot, is a tool that generates a unique, anonymized request that is different from every other request, yet close enough to the universe of legitimate traffic that it is indistinguishable to prevention vendors. Bot operators know that prevention vendors can use browser fingerprints, IP addresses, credential/payload telemetry, and the elements of an HTTP request to detect malicious intent. The perfect single request bot randomizes and anonymizes its telemetry to blend in with legitimate human transactions.

The Rise of Single Request Bots – How are They Created?

To create a single request bot, bad actors analyze exactly how the site, application, or API functions to understand which application telemetry needs to be submitted, copied from legitimate users, or spoofed.

For example, when attacking a web application, this means understanding popular browser profiles and generating browser fingerprints that look exactly like them, while diversifying values such as User-Agent and language tokens and ensuring that any telemetry gathered by JavaScript running in the browser is blocked, spoofed, or randomized. When attacking a mobile endpoint, the bad actor may decompile the app or scour the published API documentation to learn all the required request parameters, then generate millions of unique requests. Finally, many single request bots also fetch valid device tokens, session identifiers, and other stateful identifiers from the app or web server itself. In this case, attackers aren't spoofing values at all; they are using the app as intended, at large scale, while appearing to be millions of legitimate users.
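The header diversification described above can be sketched in a few lines. The value pools here are hypothetical stand-ins; a real single request bot would draw from far larger, statistically weighted sets harvested from legitimate traffic:

```python
import random

# Hypothetical pools of plausible browser values (illustrative only).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0",
]
LANGUAGES = ["en-US,en;q=0.9", "en-GB,en;q=0.8", "de-DE,de;q=0.7"]

def build_request_headers() -> dict:
    """Produce one unique-looking header set per request."""
    return {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept-Language": random.choice(LANGUAGES),
        "Accept": "text/html,application/xhtml+xml",
    }
```

Each call yields a header combination drawn from values real browsers actually send, which is precisely why per-request signatures alone cannot catch these bots.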

Once a bad actor fully understands how the target application operates, the next step is to enable the attack. There is an ecosystem of services available that enable single request bots, particularly services revolving around tools and infrastructure.

  • Tools: These services make it incredibly easy to create single request bots. They include forums where off-the-shelf tools are sold, customized, and traded. These tools ship with the techniques described above out-of-the-box, along with CAPTCHA bypass capability and access to top-of-the-line CAPTCHA solving services. Another set of enablers are marketplaces like the Genesis Market, where browser fingerprints can be bought and sold, allowing attackers both to blend in with humans and to understand what legitimate users look like from the perspective of prevention vendors.
  • Infrastructure: Services such as Bulletproof Proxy providers are purpose-built for large-scale IP rotation, randomization, and blending in with a pool of legitimate humans. The Bulletproof Proxy endpoints that attackers route their traffic through take care of the IP rotation for them, increasing ease of use and lowering the barrier to entry for attackers.

These sophisticated techniques, sharpened over time by the ongoing cat-and-mouse game, combined with enabling services like marketplaces, tools, and Bulletproof Proxies, have led to an explosion in the use of single request bots to execute automated attacks.

The Fall of Single Request Bots – How They are Detected and Prevented

As single request bots continually adapt their behavior to blend in with legitimate traffic, common detection techniques based on rules and static heuristics become less and less effective. Of our Four Pillars of Detection Framework (Credentials, Tools, Infrastructure, Behavior), the most powerful single request bot detection technique is the Behavior Pillar: looking for the unique yet subtle characteristics that distinguish a single request bot from legitimate traffic. Detecting Tools, Infrastructure, and Credentials remains important, but the Behavior Pillar is where the most sophisticated detections take place.

Fundamentally, the behavior of a bot attack is difficult to change, because doing so distorts the unit economics that make the operation profitable or successful in the first place. For example, if an attacker is using real, freshly generated cookies fetched directly from the app that incorporate some element of a timestamp, they face a dilemma. To use values that are somewhat aged based on the timestamp, they would have to slow the bot down, consuming additional resources and cost to the point where the attack may no longer be profitable. This is one example of a single request bot technique that our Behavior Pillar can detect.
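A minimal sketch of this kind of behavioral check, assuming a hypothetical dwell-time threshold and that the defender can see when each cookie was issued:

```python
# Hypothetical behavioral check: a human typically dwells for a while
# between receiving a session cookie and submitting a request, while a
# single request bot often replays a freshly fetched cookie instantly.
MIN_HUMAN_DWELL_SECONDS = 2.0  # illustrative threshold, not a real tuning

def cookie_age_is_suspicious(cookie_issued_at: float, request_at: float) -> bool:
    """Flag requests whose cookie was minted implausibly recently."""
    age = request_at - cookie_issued_at
    return age < MIN_HUMAN_DWELL_SECONDS
```

To evade a check like this, the attacker must let cookies age before using them, which means slowing the bot down — exactly the economic pressure described above.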

In another example, if attackers want to use “sticky” IP sessions while routing through Bulletproof Proxy residential providers (detected in Infrastructure), they cannot outsource the IP rotation to the proxy provider, because that manner of rotation is easily detected by our Behavior Pillar. Instead, they must invest time customizing the bot to control exactly when and how IPs rotate, a behavior that is time-consuming to change and carries its own opportunity cost.
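One way to surface outsourced, per-request rotation is a simple session-to-IP cardinality check. The threshold here is a hypothetical illustration:

```python
from collections import defaultdict

MAX_IPS_PER_SESSION = 3  # illustrative threshold

def flag_rotating_sessions(events):
    """events: iterable of (session_id, source_ip) pairs.

    Returns the session IDs whose requests arrived from suspiciously
    many distinct source IPs, i.e. sessions that rotate IPs mid-flight
    instead of staying "sticky" the way legitimate sessions do.
    """
    ips_seen = defaultdict(set)
    for session_id, ip in events:
        ips_seen[session_id].add(ip)
    return {s for s, addrs in ips_seen.items()
            if len(addrs) > MAX_IPS_PER_SESSION}
```

Forcing attackers to keep sessions sticky pushes the rotation logic back into their own tooling, which is the customization cost the paragraph above describes.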

Finally, in automated attacks like credential stuffing, attackers rely on the law of large numbers to evade detection and earn a return on their investment. As prevention vendors, we can turn that paradigm on its head and use the same large data set of user credentials to aid detection. An attacker may learn how to blend in with a particular type of user, particularly through fingerprinting marketplaces like Genesis.

Prevention vendors have a unique advantage: we understand not just what each individual good user looks like, but also what a basket of good users looks like, knowledge gained over years of analyzing customer data, and we can use that information asymmetry to our advantage. As attackers rotate through a universe of valid values, the manner of rotation and randomization is itself a feature that machine learning models can use to tell whether a client is lying about who it is. CQAI analyzes requests in batches, comparing them both to known good traffic and to each other, with a focus on the Four Pillars of Detection, allowing us to use this behavioral information to mitigate the threat of single request bots.
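As a toy illustration of rotation itself being a signal, the Shannon entropy of a batch of header values separates organic, popularity-skewed traffic from uniformly rotated values. The batches below are hypothetical, and this is one candidate feature, not the actual CQAI model:

```python
import math
from collections import Counter

def shannon_entropy(values) -> float:
    """Shannon entropy (in bits) of a batch of categorical values."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Organic traffic skews toward a few popular browsers; a bot rotating
# uniformly through a pool of values produces suspiciously high entropy.
organic_batch = ["chrome"] * 80 + ["safari"] * 15 + ["firefox"] * 5
rotated_batch = ["ua-%d" % i for i in range(100)]  # every request unique
```

A batch where every request presents a different identity is, statistically, stranger than a batch where most requests look alike — randomization perfect enough to avoid repeats is itself anomalous.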

Tags

Automated Bots

About the Author

The CQ Prime Team
