Google’s Gemini blocked billions of bad ads. That’s good news — but not enough
Google has published its yearly Ads Safety Report for 2025, disclosing the numbers behind bad ads, with a focus on the key role of Gemini-powered tools in identifying and stopping them. And when you look at these numbers, it's hard not to feel impressed, at least at first glance: over 8.3 billion bad ads blocked or removed, 4.8 billion ads restricted, and just under 25 million advertiser accounts suspended. Google emphasizes that 99% of all policy-violating ads were blocked before they could ever be served to users, again crediting Gemini as instrumental in that. We are not here to deny credit where it's due: fighting bad ads is important. But this is also something Google is expected to do as the platform owner. The results are commendable, but they also highlight how predatory and hostile the advertising ecosystem can be.
How AI helps Google detect ‘bad ads’
Google's main argument for an AI-based approach to evaluating an ad's legitimacy is that it doesn't base enforcement decisions purely on keywords, but can understand and analyze more complex signals such as account age, behavioral cues, and campaign patterns. Bad actors often design their scam ads to mimic legitimate ones, and they take advantage of generative AI to quickly mass-produce different variants, so that some eventually slip past the old pattern-matching-based enforcement systems.
Before AI, these older systems looked more like a checklist, testing whether an ad contains certain words, symbols, URL mismatches, formatting tricks, or policy-triggering product categories. Does the ad use banned wording? Does the landing page match the display URL? Does it contain suspicious formatting like F₹€€!? These checks are useful, but they are also fragile and easy to circumvent through inventive word choices and other clever ploys. For example, something like 'Lose 20 pounds in a week!' would be rather easy to detect and flag even under the old system. But imagine a landing page full of false claims, fake testimonials, and hidden subscription terms: no single element indicates a scam, so the checklist approach has a high chance of approving the ad. An AI system that understands context, on the other hand, has a better chance of marking the ad as 'bad' with a higher degree of certainty. A good analogy would be airport security flagging a traveler as suspicious not just because of prohibited items in their baggage (the old system), but because of their behavior: using different names, buying only one-way tickets, or changing routes frequently.
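The checklist approach described above can be sketched as a handful of independent boolean tests, each one easy for a determined advertiser to sidestep. This is an illustrative sketch, not any real system's code: the keyword list, helper names, and heuristics are all invented for the example.

```python
import re
from urllib.parse import urlparse

# Toy keyword list; a real system would hold many thousands of entries
BANNED_PHRASES = ["lose 20 pounds", "guaranteed winnings", "free money"]

def uses_banned_wording(ad_text: str) -> bool:
    text = ad_text.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

def has_obfuscated_text(ad_text: str) -> bool:
    # Crude heuristic: non-ASCII symbols wedged into otherwise Latin
    # words, like "F₹€€!" standing in for "FREE!"
    return bool(re.search(r"[A-Za-z][^\x00-\x7F]+[A-Za-z]?", ad_text))

def url_mismatch(display_url: str, landing_url: str) -> bool:
    # Flags ads whose visible URL points to a different domain
    # than the page the click actually opens
    return urlparse(display_url).hostname != urlparse(landing_url).hostname

def looks_bad(ad_text: str, display_url: str, landing_url: str) -> bool:
    # Each check runs independently; if none fires, the ad passes,
    # which is exactly why context-free checklists are easy to game
    return (uses_banned_wording(ad_text)
            or has_obfuscated_text(ad_text)
            or url_mismatch(display_url, landing_url))
```

A scam landing page full of fake testimonials would clear every one of these checks, which is the gap the context-aware approach is meant to close.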
Gemini takes all this context into account to determine the intent behind the ad and is (at least according to Google itself) very good at identifying scams — over 600 million ads associated with scams were removed and 4 million accounts were suspended for scam-related activity in 2025. Another thing going for Gemini is the ability to automatically process user feedback. According to Google, its teams were able to take action on four times as many user reports as in 2024 as a result of AI integration.
AI shift is happening in ad blocking
The clash between the old and the new approaches to detecting bad ads in Google's ad ecosystem is not without similarities to ad blocking in general. Many years ago, blocking an ad was as simple as matching the server that delivered the ad against a set list of 'bad' domains. Anything coming from adserver.example.com would get blocked, and that's that. DNS filtering still works more or less the same way: it is less flexible, but efficient, lightweight, and system-wide. Today, ad blockers face entirely different, much harder challenges. Ads and other unwanted requests often blend in with useful content. Modern filtering rules are nothing like the short, simple rules from the early days of ad blocking. They are extremely complex, and filtering syntax resembles a small programming language more than anything else.
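That early domain-based blocking fits in a few lines: take the hostname of an outgoing request and check it, along with each of its parent domains, against a blocklist. The domains below are placeholders, and this is a simplified sketch of the idea rather than any particular blocker's implementation.

```python
# Toy blocklist; real lists contain hundreds of thousands of entries
BLOCKED_DOMAINS = {"adserver.example.com", "tracker.example.net"}

def is_blocked(hostname: str) -> bool:
    # Check the hostname and every parent domain, so that
    # cdn.adserver.example.com is caught by a rule for adserver.example.com
    labels = hostname.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKED_DOMAINS
               for i in range(len(labels)))
```

DNS filtering operates at essentially this level, which is why it is lightweight and system-wide, but also why it cannot tell a 'good' request to a host apart from a 'bad' one, let alone handle ads served from the same domain as the content itself.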
Ad blocking syntax has been constantly evolving to keep up with ever-steeper challenges, so far rather successfully. But the fact that the traditional, filtering-rules-based approach hasn't been replaced by AI so far doesn't mean that ad blocker developers have dismissed the thought of using AI in ad blocking. On the contrary, they have been exploring AI's potential in the context of blocking ads, and often in quite unexpected ways. Attempts to use various forms of machine learning (ML) for ad blocking date back to at least 2019, when Brave developed AdGraph, a tool that blocked ads and trackers in real time. It showed surprisingly high accuracy, but required deep browser integration and constant maintenance, so it never took off in popularity. There were a few other experiments and research projects that tried to take advantage of ML, but none managed to achieve widespread adoption.
In recent years, with the rapid advancement of AI technologies, the idea of using AI for ad filtering has come up increasingly often. For instance, it was one of the main points of discussion at last year's Ad Filtering Dev Summit. At AFDS 2025, several speakers touched on the role of AI in the ad-blocking landscape in their presentations: Ritik Roongta from NYU spoke about how AI can help evaluate ad content, especially for allow-listed ads that may be non-intrusive but still harmful, and Anton Lazarev from Brave explained why ad blockers will stay highly relevant even in the era of AI agents and agentic browsers.
AdGuard’s experiment: Can an LLM spot an ad?
AdGuard has been exploring the same direction. Maxim Topciu, Team Lead of AdGuard's Web Extensions division, conducted his own research to answer the question: can an ad blocker understand what appears on the page and decide whether it should be hidden? As we already mentioned, filter lists remain powerful but have limitations: they require manual maintenance, struggle with native advertising, and face additional constraints, like the ones introduced under Manifest V3. Wouldn't it be great if an ad blocker could determine what is an ad and what is not all by itself? The idea itself wasn't new, as is evident from the past attempts by Brave and others to achieve similar results, but Maxim went a bit further. One of the advantages of LLMs is that they make it relatively quick to turn an idea into a working prototype. So Maxim created not one, but three such prototypes, each analyzing and blocking ads in its own way.
Maxim tested the prototypes on X's feed. The first blurred all the posts, analyzed their content, and then unblurred the 'good' ones. The second did the same, but analyzed each post as an image rather than as a block of code. The third allowed the user to set certain criteria, and the LLM would check whether a post matched them before deciding to hide it or not. All three approaches worked, but all had their own drawbacks; after all, they were prototypes and very far from being end products.
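The idea behind the third prototype, letting the user state criteria in plain language and having an LLM judge each post against them, can be sketched roughly like this. To keep the sketch self-contained and runnable, the LLM call is replaced with a trivial keyword stub; a real implementation would send the prompt to an actual model, and the function names here are invented for the example.

```python
def ask_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; this toy stub only checks for
    # obvious promotional wording so the sketch runs on its own
    promo_markers = ("sponsored", "buy now", "limited offer")
    post_part = prompt.rsplit("POST:", 1)[-1].lower()
    return "HIDE" if any(m in post_part for m in promo_markers) else "SHOW"

def should_hide(post_text: str, user_criteria: str) -> bool:
    # Build a yes/no prompt from the user's own criteria, so the
    # user, not the platform, decides what counts as unwanted
    prompt = (
        f"Hide any post matching these criteria: {user_criteria}\n"
        "Answer HIDE or SHOW.\n"
        f"POST: {post_text}"
    )
    return ask_llm(prompt) == "HIDE"
```

Even in this reduced form, the design choice is visible: the filtering criterion is free-form text supplied by the user, not a hardcoded rule, which is precisely what distinguishes this approach from platform-side enforcement.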
The experiment showed that AI-based ad blocking is technically possible, but at the same time it became apparent that AI is not yet ready to replace the traditional filter-based approach.
Google's use of Gemini to identify 'bad' ads and AdGuard's own experiment, despite all their differences and despite serving different purposes, point in the same direction: ad filtering is becoming more semantic. AdGuard's experiment showed that LLMs can classify content by meaning, not only by selectors or URLs. A vision-based approach can analyze what users actually see, which helps when text is minimal or HTML is obfuscated. The crux of the blocking decision gradually shifts from "Does this web element match a rule?" to "What is it trying to do? What was the intent behind it?" If you could reliably detect every ad, sponsored post, tracker, and scam by determining their intent, there would be no need for filtering rules. But, evidently, we are not there yet. LLM-based approaches are still largely limited by cost, speed, and practicality. It appears that, while the role of AI in ad blocking is going to grow, it will not realistically replace traditional ad blockers in the near future, but rather complement them where filtering rules alone struggle.
Platform safety is not the same as user control
But this is also where the comparison between Google and independent ad blockers ends. The fundamental difference between Google's use of Gemini and ad blockers' use of AI lies in their goals. Google uses AI to enforce its own ad policies, while ad blockers exist to enforce the user's preferences. Currently, users set these preferences by selecting the desired filter lists or by adding custom filtering rules. But AdGuard's experiment showed that it is entirely within the realm of possibility to introduce user-controlled criteria to a future AI-based ad blocker, too. Google's algorithms do, indeed, block or restrict malicious and dangerous ads, which deserves praise, but doing so also lines up with Google's own interests. Users don't have any say in what exactly gets blocked and what comes through. An ad doesn't have to violate Google's guidelines to be unwanted. There are plenty of reasons why someone wouldn't want to see an ad: it may be distracting, privacy-invasive, heavy, or simply irrelevant to the viewer. This is where the roots of the conflict lie: Google's only concern is whether an ad is allowed inside its ecosystem and follows its rules. From the user's point of view, the question is broader: do I want this ad on my device?
The anti-scam work that Google does is necessary, but also expected: it is its direct responsibility. The Ads Safety Report should not be read as a final answer to the problem of bad ads. Blocking billions of ads is cool, but even more ads remain. These numbers really put into perspective just how much harmful or questionable material flows through the online ad ecosystem. And this is where the true reason behind Google’s efforts lies. Google is an ad company first and foremost. Its business model is not based on selling Android phones or anything like that — it is centered around the ad ecosystem it has built, and most of its other, numerous branches support it in one way or another. Google has shown time and again that protecting its advertising business weighs heavily in its product decisions. Its safety work is no exception: it is also a necessary concession to keep users within the Google ad ecosystem.
We are not trying to say that Google’s anti-scam efforts are meaningless — of course, it’s better to have no, or close to no, fraudulent and dangerous ads on your phone. It’s even better when you, the user, are the one who controls what else you want or don’t want there. Google’s Ads Safety Report demonstrated how efficient AI can be at identifying unwanted content. Now it’s ad blockers’ turn to find an even better use for this powerful weapon and make it serve a good cause.