A bot is a few lines of code that runs automated tasks over the internet. There are tens of millions of bots out there, and their roles are as varied as those of human users. However, a growing share of bots are malicious, directly affecting the income and reputation of eCommerce leaders, content creators, and online service providers. A recent Bad Bot Report details the increasing automation dominating the cybercrime landscape.
What Makes a Bot Bad?
Many bots provide beneficial services for businesses and online users alike. For example, good bots underpin search engines: online services and content are discovered and indexed by crawlers such as Googlebot and Bingbot, which use their indexes to match user queries with the most relevant sites. Thanks to these bots, prospective customers can find you far more easily, giving online creators a much stronger presence than they would otherwise have.
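As an illustration, site owners typically welcome these crawlers explicitly in a robots.txt file. A minimal sketch (the disallowed paths are hypothetical examples, not a recommendation for any specific site):

```text
# Allow the major search engine crawlers to index public pages
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /

# Keep all crawlers out of non-public areas (example paths)
User-agent: *
Disallow: /checkout/
Disallow: /account/
```

Note that robots.txt is advisory: good bots honor it, while bad bots routinely ignore it, which is part of why the distinction below matters.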
Whereas good bots enhance the user experience, bad bots aim to disrupt or profit at the expense of genuine users. One of the most notorious examples is the price scraper. These bots are used by competitors to scrape your prices, undercut them slightly, and price you out of the market. The lower prices also hand your competitor an advantage in search and comparison-shopping results, which often prioritize the cheapest offer. This directly cuts into customer lifetime value and leaves you with declining conversion rates. The aggressive nature of these bots can also cause website slowdowns and downtime, further damaging the brand.
Another example is the credential stuffer. These bots attempt account takeover attacks against your users, typically by testing already-stolen credential pairs against your login form; a rarer variant simply tries to brute-force logins. The fallout from these bots includes financial fraud and account lockouts. If they succeed, customer complaints start to pile in, damaging future revenue and customer loyalty. The telltale signs of credential stuffing are increased rates of failed logins, customer account tickets, and chargebacks.
Denial of Service bots, on the other hand, aim simply to disrupt your site’s availability. Here, large numbers of bots flood your site, causing a brownout or temporary downtime. This directly prevents customers from interacting with your site, and can cause abnormal behavior in particular site functions, such as login or product pages. The result is incredibly high bounce rates and a subsequent increase in customer service complaints.
As we’ve seen, some bots are incredibly useful, so it’s not as simple as banning all bots. Instead, your site needs to defend against the bad whilst still allowing the good. Telling the two apart is essential to any bot prevention solution, but it is becoming harder as bad bot behaviors grow increasingly sophisticated.
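One sub-problem here is at least well documented: Google describes a two-step DNS check (reverse-resolve the IP, then forward-resolve the hostname) for verifying that a visitor claiming to be Googlebot really is one. A minimal Python sketch; the injectable lookup functions are an assumption added for testability, and production code would cache results:

```python
import socket

def is_verified_googlebot(ip, reverse_lookup=None, forward_lookup=None):
    """Verify a claimed Googlebot via the documented two-step DNS check:
    1. reverse-resolve the IP to a hostname,
    2. confirm it belongs to googlebot.com or google.com,
    3. forward-resolve the hostname and confirm it maps back to the IP.
    Lookup functions are injectable for testing; defaults use the socket module.
    """
    reverse_lookup = reverse_lookup or (lambda addr: socket.gethostbyaddr(addr)[0])
    forward_lookup = forward_lookup or socket.gethostbyname
    try:
        host = reverse_lookup(ip)
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False  # impostor: hostname is not Google's
    try:
        return forward_lookup(host) == ip
    except OSError:
        return False
```

A scraper that merely spoofs the Googlebot user-agent string fails step 2, since its IP reverse-resolves to some other domain (or not at all).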
How Bots Are Changing the Threat Landscape
In 2021, bad bots made up 27.7% of all global web traffic. This is the highest share ever recorded, up from 25.6% in 2020. To give a sense of perspective, only around half of online traffic is human.
The quantity of bot attacks also rose throughout 2021, with increasing sophistication and intensity. By the end of the year, bot traffic had hit staggering highs: bad bots accounted for 30% of all traffic in December. The holiday season is always frantic with bots, and the emergence of the Log4j vulnerability contributed to this major spike. Activity peaked in January 2022, when a global job listing website came under fire from the largest bot attack on record. Over four days, a web scraping attack pummeled the provider with over 400 million requests, almost every one originating from a unique IP address. The wide range of IP addresses is an advanced evasion technique, as was the fact that each IP made just 10 requests per hour: with every address staying below the rate-limit threshold, traditional bot defenses were never triggered. For context, the resulting traffic spike was 30x the site’s regular traffic.
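To see why this pattern defeats per-IP rate limiting, consider a simplified model. All thresholds and traffic figures below are illustrative assumptions, not numbers from the report:

```python
# Simplified model of why distributed "low and slow" traffic evades
# per-IP rate limits. All thresholds below are illustrative assumptions.

PER_IP_HOURLY_LIMIT = 100       # hypothetical per-IP rate limit
BASELINE_HOURLY_TOTAL = 50_000  # hypothetical normal hourly volume

def per_ip_flags(requests_by_ip, limit=PER_IP_HOURLY_LIMIT):
    """Traditional defense: flag each IP that exceeds the per-IP limit."""
    return [ip for ip, n in requests_by_ip.items() if n > limit]

def global_volume_alarm(requests_by_ip, baseline=BASELINE_HOURLY_TOTAL, factor=5):
    """Complementary check: alarm on site-wide volume, not per-IP counts."""
    return sum(requests_by_ip.values()) > factor * baseline

# The attack pattern described above: a vast pool of unique IPs,
# each making only ~10 requests per hour.
attack_traffic = {f"bot-{i}": 10 for i in range(400_000)}

print(per_ip_flags(attack_traffic))         # [] -- no single IP stands out
print(global_volume_alarm(attack_traffic))  # True -- 4M requests vs 50k baseline
```

Each address sits comfortably under the per-IP limit, so the traditional check flags nothing; only a site-wide view of volume (or of behavior) reveals the attack.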
The threat of bad bots depends greatly on your sector. Sports websites are currently inundated: sites in this sector saw an astounding 57.1% of their incoming traffic come from bad bots throughout the year. This is likely due to the rescheduling of high-profile, intensely anticipated events such as Euro 2020 and the Tokyo Olympic Games. The bots involved largely centered on account takeover fraud, odds scraping, and comment spam.
Next Generation of Cybercrime: Managing the Threat of Bad Bots
The increasing numbers of automated, highly technical malicious bots stand in sharp contrast to cybersecurity staffing levels: understaffed and overworked security teams cannot manually block every bad bot.
Instead, a comprehensive anti-bot solution requires a healthy mix of risk identification and automatic bot mitigation. First, identify the potential risks to your website. Understand that high-profile marketing campaigns attract more bots, and that the risk scales directly with the exclusivity of the product. A limited, highly desirable product may seem like a win for the brand, but scalpers are notoriously agile, leading to greatly inflated resale prices and disgruntled customers. Regardless of the product itself, know that if you announce a date and time for a coveted product launch, bots will always be faster than even the most eager genuine customers.
Beyond product variables, realize that there are multiple ways a site can become a bot target. Login functionality is a haven for credential stuffing attacks; checkout forms are prime targets for card cracking bots, which brute-force their way toward financial fraud. In the same way, voucher and gift card options allow bots to drain balances from legitimate gift cards.
Once you’ve identified the key risks hidden within your site, firm up your bot management strategy. The choice is between a larger-capacity server that simply absorbs the increased traffic, and a bot management provider that cuts that traffic down to real users.
For login pages, define a limit for failed login attempts, then monitor it. Set global thresholds too, not simply time-based ones, as advanced bots use low-and-slow techniques around the clock. Beyond watching failure rates, a high-quality bot protection provider will offer rapid, evolving protection as bots inch ever closer to genuine human behavior.
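The combination of per-account limits and global thresholds can be sketched as follows. This is a minimal illustration; the window size and thresholds are placeholder assumptions that a real deployment would tune to its own traffic:

```python
import time
from collections import defaultdict, deque

class LoginMonitor:
    """Track failed logins both per account and site-wide.
    Thresholds are illustrative placeholders, not recommendations."""

    def __init__(self, per_account_limit=5, window_seconds=3600,
                 global_failure_ratio=0.5, min_events=50):
        self.per_account_limit = per_account_limit
        self.window = window_seconds
        self.global_failure_ratio = global_failure_ratio
        self.min_events = min_events
        self.failures_by_account = defaultdict(int)
        self.events = deque()  # (timestamp, success)

    def record(self, account, success, now=None):
        now = time.time() if now is None else now
        if success:
            self.failures_by_account[account] = 0  # reset on success
        else:
            self.failures_by_account[account] += 1
        self.events.append((now, success))
        # Drop events that have aged out of the sliding window
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()

    def account_locked(self, account):
        """Per-account check: classic failed-login limit."""
        return self.failures_by_account[account] >= self.per_account_limit

    def global_alarm(self):
        """Global threshold: catches low-and-slow campaigns that spread
        failures across many accounts without tripping any single limit."""
        if len(self.events) < self.min_events:
            return False
        failures = sum(1 for _, ok in self.events if not ok)
        return failures / len(self.events) >= self.global_failure_ratio
```

A brute-force run against one account trips `account_locked`, while a credential-stuffing run that tries each stolen pair only once trips `global_alarm`; monitoring either signal alone misses one of the two attack styles.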