Research · Tuesday, May 5, 2026 · 2 min read

Meta Deploys AI to Help Protect Minors by Estimating Age from Visual Cues

TL;DR

Meta has begun using an AI visual-analysis system that estimates whether users are underage by analyzing height and bone-structure cues. The system is live in select countries now, with Meta planning a broader rollout to help enforce age safeguards and create safer experiences for young people.

Key Takeaways

  • Meta's new visual-analysis AI estimates whether an account holder is underage by examining height and bone-structure indicators.
  • The system is already operating in select countries, with a wider rollout planned.
  • The technology aims to improve enforcement of age-related policies and reduce underage access to age-restricted features.
  • Real-world deployment marks a step forward in applying AI to online safety at scale.

Meta turns to AI to bolster protections for young users

Meta has begun deploying a visual-analysis system that uses artificial intelligence to estimate whether an account holder is underage by examining height and bone-structure cues. The company says the technology is active in select countries and that it is working toward a broader rollout, signaling a move to apply computer vision to age verification at scale.

Safer experiences for minors

By automatically flagging accounts likely to belong to underage users, Meta aims to strengthen enforcement of age-restricted features and policies. When combined with other verification and moderation tools, the AI could help reduce underage access to inappropriate content and services, supporting a safer online environment for children and teens.

Operational rollout and next steps

  • The system is currently live in select regions as part of an incremental deployment.
  • Meta has indicated plans for broader availability, which could bring the technology to many more users worldwide.
  • Real-world operation represents a notable application of vision-based AI for policy enforcement and user safety.

As this technology expands, platforms, regulators, and communities will need to track outcomes and ensure the tool is used responsibly, protecting young people while respecting user rights. Even so, the move highlights how AI can be applied to pressing safety challenges on major social platforms.
