Gmail Farming and Credential Validation

March 10, 2021

Even after 20 years in the security field, and nearly two years here at Cequence, I am continually surprised at how ever-evolving bots impact our customers. It definitely keeps us on our toes as we try to understand how each attack component (tools, infrastructure, credentials and behavior) evolves. Our previous infrastructure research on Bulletproof Proxy networks gave us some insight into how threat actors jump from place to place to appear as normal traffic or to avoid IP blocklists, and our recent blog on Bots-as-a-Service highlights the continued commercialization of the bot market.

In this blog, I want to dig into the creation and use of valid credentials. For a bot to be successful, threat actors need a “clean” IP address (supplied by Bulletproof Proxies) and valid credentials. Because more and more systems are tied to credentials, and because credentials are continually stolen and used in automated attacks, Google now rates email accounts, and that rating determines which CAPTCHA experience a user is presented with. The goal is to give a human-backed account a different, potentially easier CAPTCHA experience (a single click rather than a more difficult challenge).

Rather than being deterred by this new feature, threat actors are leveraging it to expedite their attacks. The validated accounts are used against the target application, for example a retailer with a high-demand item. The clean Gmail account is used to create a shopping cart, the threat actor adds their items and proceeds to checkout, and when the (simplified) CAPTCHA fires, the bot runner solves the challenge with a single click and moves on.

Validating Email Credentials – It’s Easy, Right?

The process to produce an email account that fires a single-click challenge requires an email that has “interacted” with the world by sending emails, viewing YouTube videos, and solving complex CAPTCHAs. To see how a threat actor might perform email validation, I used a spare Chromebook to set up a series of (fake) Gmail accounts. This meant coming up with names, Gmail addresses, passwords and birthdays. My first glitch was with my Chromebook: Gmail sign-up from a regular web browser requires a phone number, which would be another way to track the user, and I wanted accounts without one.

The Chromebook has a login/signup flow before you get into the machine, which turned out to be a great way around the browser’s phone number requirement. I created four different accounts, then sent emails and started watching YouTube from each one. On the fifth creation, something odd happened: I was presented with a series of secondary validation measures. CAPTCHAs and account warnings seemed to indicate that my IP address had been flagged for creating too many accounts.

Thinking like a threat actor running a bot business who needs a way to create more accounts, I acquired a few Android pay-as-you-go phones that have WiFi interfaces and don’t require SIM activation. Using the free WiFi outside my local hardware store, I started my devices, signed up a few more accounts and started watching YouTube. Lesson learned: I was only able to create one account per device per IP. I needed to find a bank of IPs on free WiFi.

The solution was easy – I stopped by Starbucks, the local police station, Target, Walmart, and other retailers – the list of available WiFi is pretty large at this point. I was able to create four new accounts per IP per device, which then needed to be validated. This process is much longer and ongoing: log in, send emails, watch YouTube videos and share videos via email. I haven’t automated this process in any way; it is just random searches, emails, and so on. The validation process has produced those “one-click CAPTCHA” accounts the bots are hungry for.
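For anyone curious what tracking this seasoning routine might look like, here is a minimal sketch. The `FarmedAccount` class and its activity thresholds are entirely my own invention for illustration; Google’s actual trust criteria are not public.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class FarmedAccount:
    """Tracks the manual 'seasoning' activity for one Gmail account.

    The thresholds in looks_seasoned() are illustrative guesses,
    not Google's real (undisclosed) trust criteria.
    """
    address: str
    created: date
    emails_sent: int = 0
    videos_watched: int = 0
    videos_shared: int = 0

    def log_session(self, emails: int = 0, videos: int = 0, shares: int = 0) -> None:
        # Record one sitting of manual activity on the account.
        self.emails_sent += emails
        self.videos_watched += videos
        self.videos_shared += shares

    def looks_seasoned(self) -> bool:
        # Hypothetical heuristic: enough varied, human-looking activity.
        return (self.emails_sent >= 10
                and self.videos_watched >= 20
                and self.videos_shared >= 3)


acct = FarmedAccount("john.doe@gmail.com", date(2021, 2, 1))
acct.log_session(emails=4, videos=8, shares=1)   # not enough yet
acct.log_session(emails=10, videos=20, shares=3)  # a few more sittings
```

In practice a bot runner would keep one such record per account and only release accounts into attacks once they pass whatever heuristic they believe mirrors Google’s rating.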

Most of the “farming” can be automated, but performing it all by hand gave me a good sense of where the trust comes from: Google looks at its Gmail base, determines whether you are human, and provides that data to others. My exercise establishes human behavior. Next I need to test whether the accounts are flagged as human, and finally how to get them flagged as bots.

Farmed Gmails in Use

As you read this you might be thinking, “How can this process scale so a threat actor has enough accounts to be effective?” Here’s how: each Gmail account actually supports multiple email addresses, because Gmail ignores periods in the local part of the address. In a recent attack against one of our customers, our CQ Prime Threat Research Team noticed this practice in action – the attackers used john.doe@gmail.com, then j.ohn.doe@gmail.com, then jo.hn.doe@gmail.com, all of which deliver to the same inbox and carry the same single-click CAPTCHA.
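The dot trick is easy to see in code. Since Gmail ignores periods in the local part, a short script (the function name and dot limit below are my own illustration) can enumerate deliverable variants of a single account:

```python
from itertools import combinations


def dot_permutations(local_part: str, max_dots: int = 2) -> list[str]:
    """Generate Gmail-equivalent addresses by inserting periods.

    Gmail ignores periods in the local part, so every variant
    returned here delivers to the same inbox.
    """
    base = local_part.replace(".", "")
    gaps = range(1, len(base))  # positions between characters
    variants = {f"{base}@gmail.com"}
    for n in range(1, max_dots + 1):
        for spots in combinations(gaps, n):
            chars = list(base)
            # Insert dots right-to-left so earlier indices stay valid.
            for i in sorted(spots, reverse=True):
                chars.insert(i, ".")
            variants.add("".join(chars) + "@gmail.com")
    return sorted(variants)


addresses = dot_permutations("johndoe", max_dots=1)
# Single-dot placements alone turn one 7-letter account into 7 addresses.
```

Even restricted to one or two dots, a handful of farmed accounts multiplies into a large pool of “distinct” email addresses that all share the same trusted rating.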

Now that I have farmed 25 accounts, I can use them and their permutations to sign up for accounts in various places. If I were hunting the latest consumer product, I could set up accounts on an online retailer’s website, scrape the site for availability of the hot item, add it to a shopping cart and go through checkout. The idea is to have as many accounts on the retailer as possible to maximize the chance of getting the item into a cart. So now I have to take my 25 emails and turn them into 200 or more accounts, all of which requires spreadsheets and databases to keep the usernames and passwords available for use.
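That bookkeeping step, expanding a small pool of emails into hundreds of retailer sign-ups, is simple to sketch. The field names, retailer list, and credential format below are hypothetical placeholders, not anything observed in a real attack:

```python
import secrets


def build_retailer_accounts(email_variants: list[str],
                            retailers: list[str]) -> list[dict]:
    """Pair each deliverable email variant with each target retailer,
    generating a unique random password per sign-up.

    The output is the kind of credential spreadsheet a bot runner
    would keep; every field name here is invented for illustration.
    """
    rows = []
    for site in retailers:
        for addr in email_variants:
            rows.append({
                "site": site,
                "email": addr,
                "password": secrets.token_urlsafe(12),
            })
    return rows


# Two dot-variants of one account, three hypothetical target sites:
sheet = build_retailer_accounts(
    ["john.doe@gmail.com", "j.ohn.doe@gmail.com"],
    ["retailer-a.example", "retailer-b.example", "retailer-c.example"],
)
```

With 25 farmed accounts, a few dot variants each, and a handful of target retailers, the cross-product easily clears the 200-account mark described above.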

To automate this effort, threat actors will use tools like Essentials or AYCD to load a series of fresh Gmail accounts into a “farmer” that makes them look human, then use them to establish approved accounts on various retailers. Being a human in a bot’s world means doing time-consuming legwork that can require creativity. Once the manual process is perfected, it can be automated to eliminate the tedium of purchasing that hot item everyone wants – before a real human does. There is a saying that farming isn’t for everyone; my Gmail account farm is doing great, and a fresh new crop shows up every few days.

To learn more about how Cequence Bot Defense can accurately identify malicious transactions, even when attackers retool, watch this video on our behavioral fingerprinting approach.

Tags

Automated Attacks, Bot Attacks, CAPTCHA, Credentials, Fake Account Creation, Gmail

About the Author

Jason Kent

Hacker in Residence
