Google leans on AI to target bad ads, not just bad actors
TechCrunch reports that Google blocked 8.3 billion ads in 2025, a striking illustration of how automated systems are powering enforcement at internet scale. At the same time, the company suspended fewer advertisers than before, signaling a strategic shift toward precise, ad-level action rather than broad suspensions of entire accounts.
This change reflects the promise of modern machine learning: detecting and removing harmful or policy-violating ads quickly while minimizing collateral damage to legitimate businesses. By isolating problematic ads rather than suspending accounts wholesale, platforms can keep the ecosystem healthier for users and honest advertisers alike.
Benefits of the shift include faster removal of dangerous or deceptive content, fewer wrongful suspensions, and more efficient use of human reviewers, who can focus on edge cases. These improvements make ad platforms safer and fairer, and help protect the livelihoods of small and medium advertisers who might otherwise be harmed by blunt enforcement.
While the details and long-term outcomes remain to be seen, the 8.3 billion blocked-ads figure is an encouraging sign that AI can be applied to platform governance in ways that scale protection while reducing unintended harm. If sustained, this approach could set a positive precedent for other online services balancing user safety against commercial fairness.
- Mass-scale ad blocking demonstrates enforcement capability at internet scale.
- Targeted, ad-level removals reduce overreach against legitimate advertisers.
- AI-driven enforcement can free up human reviewers for complex cases.