Bots Now Outnumber Humans Online as Chrome Becomes Their Top Disguise
The internet is now more machine than human.
Bots accounted for 53% of observed web traffic in 2025, leaving humans at 47%, according to the 2026 Bad Bot Report from Thales, a global digital security company. Bad bots alone made up 40% of total traffic, up from 37% in 2024.
The AI numbers make the trend more urgent. AI-driven bot attacks surged 12.5x in a single year, while automated traffic increasingly resembles normal browser behavior. According to Thales, AI is “erasing the line between legitimate and malicious activity,” forcing companies to judge what traffic is doing, not just what it claims to be.
Trillions of blocked bot requests
The report counted 17.2 trillion blocked bot requests in 2025. At that scale, automation becomes part of the daily workload behind online services.
Much of it moves through routine pathways: sign-in screens, checkout pages, booking tools, support forms, and API calls. Companies can end up treating bot activity as customer activity, then making decisions based on a distorted picture of demand, fraud, or system performance.
A bot problem can become a slow site, a larger cloud bill, a messy fraud queue, or a customer experience issue before it looks like a security incident.
AI traffic is getting harder to classify
AI agents are forcing security teams to revisit one of web security’s basic questions: what traffic should be allowed through?
A company may want its public pages visible to AI assistants and search tools, but the same automated access becomes riskier when it reaches pricing data, account pages, booking systems, or APIs.
Blocking too much can cut off useful traffic. Trusting too much can give abusive automation room to operate. Early signs of overlap showed up in the data, with more than 10% of detected AI fetcher sessions and nearly 9% of AI crawler sessions triggering bad-bot detection rules.
The visible numbers may also be incomplete. Modified or self-hosted AI models can avoid declaring themselves as AI agents, while attackers can tune those systems for scraping, probing, or fraud. That means the measurable AI traffic may capture only part of the real exposure.
Chrome became the favorite disguise
Google Chrome was the most commonly claimed browser by bad bots in 2025. Forty-one percent of bad-bot traffic declared itself as Chrome, up from 39% in 2024, while Android Browser held second place at 17%.
Chrome is also the browser that companies expect to dominate normal traffic. That gives attackers a familiar wrapper for automated activity moving through login pages, checkout flows, account areas, or APIs.
Older defenses have less to work with in that environment. User-agent strings and basic browser checks can identify the browser a request presents. They cannot prove who is behind it or whether the activity is tied to a real user.
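The weakness is easy to demonstrate: the user-agent string is just an HTTP header, and any client can set it to whatever it likes. A minimal sketch (the Chrome user-agent value below is an illustrative example, not a specific bot signature):

```python
import urllib.request

# A user-agent string that claims to be desktop Chrome. Any HTTP
# client can send this header; the server has no way to verify the
# claim from the string alone.
CHROME_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/120.0.0.0 Safari/537.36"
)

# Build a request that presents itself as Chrome, regardless of
# what software is actually making it.
req = urllib.request.Request(
    "https://example.com/login",
    headers={"User-Agent": CHROME_UA},
)

print(req.get_header("User-agent"))  # the spoofed value, trivially set
```

This is why the report's 41% figure measures what bad bots *claim* to be, not what they are: passing a user-agent check costs an attacker one line of code.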
Traffic identity is not enough
The next phase of bot defense depends on reading behavior inside real workflows. A request may pass a basic browser check and still hit a login system thousands of times, scrape pricing data, hold inventory, trigger SMS codes, or pull from an API far beyond normal use.
Thales’ Tim Chang framed the issue around behavior, noting that “The challenge is no longer identifying bots. It’s understanding what the bot, agent, or automation is doing, whether it aligns with business intent, and how it interacts with critical systems.”
As AI-driven automation grows, more digital traffic will fall into a gray area between legitimate access and abuse. Companies that cannot make that distinction risk treating bot activity as real demand, real engagement, or real users until the cost shows up in security, performance, fraud, analytics, or trust.