Google’s AI ad cops now shoot the message, not the messenger

April 16, 2026

Google has quietly changed how it polices the world’s biggest advertising machine – and the shift says a lot about where AI power really sits on the modern internet. Instead of aggressively banning advertisers, Google is increasingly letting accounts live while its AI filters strip out individual “bad” ads at scale.

This may sound like hygiene work deep inside the ad stack, but it touches almost everyone: brands, small businesses, publishers, regulators and any user who has ever clicked a sketchy ad. In this piece we’ll unpack what Google actually changed, why it’s happening now, how it fits into a wider AI‑driven ad arms race, and what it means in particular for European markets and regulators.


The news in brief

According to reporting by TechCrunch, Google’s latest Ads Safety Report shows that the company blocked 8.3 billion ads worldwide in 2025, up sharply from 5.1 billion a year earlier. At the same time, the number of advertiser accounts it suspended fell, even though scam‑related activity remains massive.

Google credits the jump in blocked ads – and the drop in account bans – to deeper use of its Gemini AI models inside enforcement systems. These models now screen ads before they appear and, Google says, automatically catch the vast majority of policy‑breaking creatives before any user sees them.

Scam activity is still a core problem: hundreds of millions of blocked ads and millions of advertiser accounts were tied to fraudulent campaigns. In key markets like the U.S. and India, Google removed or blocked hundreds of millions of ads, but again, did so with fewer outright account suspensions than in 2024. Company executives told reporters this reflects a move from “blunt” account bans toward more granular, AI‑driven enforcement at the level of each individual ad.


Why this matters

On paper, “more bad ads blocked, fewer advertisers banned” sounds like a win–win. In practice, it reveals where Google’s incentives – and AI strategy – really lie.

Who benefits?

  • Google’s revenue stream. Every banned advertiser is lost short‑ and medium‑term revenue. If AI can surgically remove only the offending creatives while letting the advertiser keep spending on compliant campaigns, Google preserves its cashflow while claiming stronger safety.
  • Legitimate advertisers, especially SMEs. Small businesses have long complained about being wrongly suspended with little recourse. If Google truly cut false suspensions by 80%, as it told TechCrunch, that could mean fewer shops suddenly losing their main acquisition channel because an automated system misread one ad.
  • Users – at least partly. Catching over 99% of policy‑violating ads before they go live (Google’s claim) should translate into fewer scam crypto schemes, fake tech support offers and questionable health cures in your feed.

Who loses?

  • Persistent bad actors. AI that can spot patterns across large campaigns makes it harder to run the same scam at scale, even from multiple accounts.
  • But also: edge‑case advertisers. Anyone operating in grey areas – political messaging, sensitive health categories, financial products – now lives under an opaque, AI‑driven filter. Ads may disappear without a human ever seeing them, and explaining why they were rejected becomes harder.

The deeper issue: enforcement has moved from clear‑cut account decisions into a probabilistic, model‑driven layer where most choices are invisible and unreviewed. That’s comfortable for a platform under pressure from regulators and brand‑safety scandals. It’s less comfortable for advertisers and civil society groups who want due process, consistency and transparency.

And it locks in a pattern we’re seeing across Big Tech: the same AI that helps advertisers generate more persuasive campaigns is also the judge, jury and executioner deciding which of those campaigns are allowed to exist.


The bigger picture: AI vs AI in the ad trenches

Google’s shift fits a broader pattern of the ad industry becoming an AI‑native environment.

First, scammers themselves are now using generative AI to create highly localised, grammatically correct and visually convincing ads in seconds. That lowers the cost of experimentation: you can spin up thousands of variants targeting dozens of languages and micro‑audiences with almost no human labour. The result is exactly what Google’s report hints at – an explosion in the number of bad creatives, even if the number of distinct bad actors doesn’t grow at the same rate.

Second, platforms are answering with their own AI escalation. Meta already talks openly about using large‑scale models to scan images, video and text across Facebook and Instagram ads. TikTok leans on AI for content and ad moderation under mounting EU scrutiny. Google now embeds Gemini not only in enforcement, but in campaign creation tools like Performance Max.

That creates a strange symmetry:

  • On one side, AI systems churning out more persuasive – and potentially deceptive – ads.
  • On the other, AI systems acting as automated gatekeepers, blocking most of those ads before humans ever notice.

Historically, Google used more “binary” measures: accounts were shut down when trust was lost, and whole categories or keywords could be swept up in broad crackdowns. That era produced plenty of collateral damage and angry advertisers, but it was at least legible.

The new model is closer to continuous risk management. Instead of asking “Is this advertiser allowed?”, the system constantly asks “What’s the risk of this creative in this context for this user right now?” That’s powerful – and almost impossible for outsiders to audit.
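To make the contrast concrete, here is a minimal sketch of the two enforcement philosophies. Everything in it is hypothetical: Google does not publish its enforcement logic, and the risk scores, threshold, and field names are invented for illustration only.

```python
# Illustrative sketch of account-level vs per-ad enforcement.
# All names, scores and thresholds are hypothetical, not Google's.
from dataclasses import dataclass

@dataclass
class Creative:
    ad_id: str
    risk_score: float  # assumed model output in [0, 1]

def enforce(creatives: list[Creative], block_threshold: float = 0.9) -> dict:
    """Old model: one bad creative could sink the whole account.
    New model: block only the creatives above the risk threshold."""
    blocked = [c.ad_id for c in creatives if c.risk_score >= block_threshold]
    return {
        "blocked_ads": blocked,                       # granular, per-ad decision
        "account_suspended": False,                   # account keeps spending
        "would_have_been_banned_before": bool(blocked),  # the old blunt outcome
    }
```

The point of the sketch is the asymmetry it encodes: the platform keeps the advertiser (and the revenue) while removing only the flagged creatives, and the decision happens per ad, per context, rather than as a single visible verdict on the account.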

Strategically, this also strengthens Google’s moat. The more its ad platform depends on proprietary, ever‑learning AI models to stay compliant and scam‑free, the harder it becomes for smaller adtech rivals to offer comparable safety at scale. In other words: AI‑driven enforcement is not just a trust measure; it’s a competitive weapon.


The European angle: DSA, GDPR and asymmetric dependency

For European users and companies, this change is not happening in a vacuum. It intersects directly with the EU’s new regulatory architecture.

Under the Digital Services Act (DSA), Google counts as a “Very Large Online Platform” and must assess and mitigate systemic risks – including scams and illegal advertising – and provide more transparency around ad targeting and moderation. An AI‑heavy enforcement model is defensible under the DSA only if Google can show regulators and vetted researchers that decisions are explainable and that biases are managed.

At the same time, the GDPR places limits on purely automated decision‑making that has significant effects on individuals. A small European merchant whose main ad campaigns are silently blocked by AI could argue that there should be a clear explanation and an accessible human review path. National data protection authorities in privacy‑sensitive markets like Germany or France will be watching this closely.

For European advertisers – from Berlin fintechs to Ljubljana and Zagreb startups trying to sell across the EU – this shift cuts both ways:

  • More precise enforcement can mean fewer catastrophic, account‑level bans that wipe out months of optimisation.
  • But reliance on a black‑box AI filter increases operational risk. One model update could tank performance or block creatives in certain languages or cultural contexts, with little recourse beyond appealing into a support void.

Finally, there’s the question of European alternatives. Local adtech players, retail media networks and contextual ad platforms cannot match Google’s AI muscle or global data, but they can differentiate on transparency and human support. If Google’s AI enforcement feels arbitrary or opaque, that opens a small but real opportunity for European ecosystems that promise “slower, but explainable” ad moderation.


Looking ahead: from AI referee to AI policy‑maker

Several trajectories are worth watching over the next 12–24 months.

  1. From enforcement to prevention. Today, Gemini mostly judges ads after they’re created. The logical next step is AI that co‑creates ads while enforcing rules by design: “You can’t even generate a non‑compliant creative with our tools.” That would further lock advertisers into Google’s walled garden, but it could also reduce accidental violations.

  2. Regulatory stress‑tests. EU regulators, armed with the DSA and soon the AI Act, will likely push platforms to open up their enforcement systems to more scrutiny. Expect pressure for clearer documentation on how models are trained, which features they use, and how advertisers can contest decisions. If Google cannot adequately explain its AI policing, it risks fines or mandated changes in Europe.

  3. Scammers get smarter. As detection improves, serious fraudsters will shift to slower, higher‑value attacks: deepfake‑based investment scams, hyper‑targeted phishing that looks like legitimate banking or government messaging, cross‑platform funnels from social to messaging apps. AI can help detect some of this, but the economic incentives are strong enough that the cat‑and‑mouse cycle will intensify.

  4. Collateral damage to sensitive speech. Political, health and financial topics are high‑risk for Google, especially around elections. The temptation will be to over‑block anything remotely ambiguous. Civil society groups in Europe will rightly demand transparency on how many such ads are rejected and on what grounds.

For advertisers and agencies, the pragmatic response is clear: assume that AI‑driven compliance is now part of the media‑buying equation. Budget for creative variants, set up monitoring to spot sudden performance drops that may indicate invisible enforcement changes, and ensure you have internal documentation of all campaigns in case you need to argue with a regulator or a platform.
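That monitoring need not be elaborate. A rough sketch of the idea, using made-up thresholds and assuming you export daily impression counts from your own reporting: compare each day against a trailing baseline and flag statistically abnormal drops that might indicate invisible enforcement rather than ordinary fluctuation.

```python
# Hypothetical monitoring sketch: flag a sudden drop in served impressions
# that could indicate silent enforcement changes. Window and cutoff are
# example values, not recommendations.
from statistics import mean, stdev

def flag_enforcement_drop(daily_impressions: list[int],
                          window: int = 7,
                          z_cutoff: float = -2.0) -> bool:
    """Compare the latest day's impressions against a trailing baseline."""
    if len(daily_impressions) <= window:
        return False  # not enough history to form a baseline
    baseline = daily_impressions[-window - 1:-1]  # the `window` days before today
    today = daily_impressions[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return today < mu  # flat baseline: any drop stands out
    return (today - mu) / sigma <= z_cutoff
```

A flag from something like this is only a prompt to investigate – the drop may be seasonality or a bid change – but it gives you a dated record to point to if you later need to argue with the platform or a regulator.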


The bottom line

Google’s pivot from punishing “bad actors” to surgically removing “bad ads” is rational, AI‑enabled and – to an extent – positive for both users and legitimate advertisers. But it also concentrates even more power in an opaque enforcement layer that few outside Google can understand or challenge.

The real question for Europe and beyond is simple: are we comfortable letting a handful of AI systems decide which commercial messages can exist online? If not, the next battle is not just about blocking more bad ads, but about making the AI referees themselves accountable.
