Meta turns to AI to bolster protections for young users
Meta has begun deploying a visual-analysis system that uses artificial intelligence to estimate whether an account holder is underage, drawing on physical cues such as height and bone structure. The company says the technology is live in select countries and that a broader rollout is planned, signaling a move to apply computer vision to age verification at scale.
Safer experiences for minors
By automatically flagging accounts likely to belong to underage users, Meta aims to strengthen enforcement of age-restricted features and policies. When combined with other verification and moderation tools, the AI could help reduce underage access to inappropriate content and services, supporting a safer online environment for children and teens.
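Meta has not published how the system works, but the idea of combining a model's age estimate with an account's stated age to flag likely-underage users can be sketched in a few lines. Everything below is illustrative: the names (`Account`, `flag_for_review`), the 18-year cutoff, and the uncertainty margin are assumptions, not details from Meta.

```python
# Illustrative sketch only; Meta's actual system is not public.
# All names, thresholds, and margins here are hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    stated_age: int              # age the user claimed at signup
    model_estimated_age: float   # hypothetical output of a vision model

AGE_THRESHOLD = 18   # assumed cutoff for age-restricted features
MARGIN = 3           # assumed allowance for model uncertainty

def flag_for_review(account: Account) -> bool:
    """Flag an account when the visual estimate contradicts the stated age.

    The margin avoids flagging borderline cases where the model's
    estimate is close to the threshold.
    """
    claims_adult = account.stated_age >= AGE_THRESHOLD
    looks_underage = account.model_estimated_age + MARGIN < AGE_THRESHOLD
    return claims_adult and looks_underage

accounts = [
    Account("a1", stated_age=25, model_estimated_age=14.0),
    Account("a2", stated_age=21, model_estimated_age=24.5),
]
flagged = [a.user_id for a in accounts if flag_for_review(a)]
print(flagged)  # → ['a1']: stated adult, but estimated well under 18
```

In practice such a flag would feed into the other verification and moderation tools the article mentions (for example, prompting an additional age check) rather than acting on its own.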
Operational rollout and next steps
- The system is currently live in select regions as part of an incremental deployment.
- Meta has indicated plans for broader availability, which could bring the technology to many more users worldwide.
- Real-world operation represents a notable application of vision-based AI for policy enforcement and user safety.
As the technology expands, platforms, regulators, and communities will need to track its outcomes and ensure it is used responsibly, protecting young people while respecting user rights. Even so, the move highlights how AI can be applied to pressing safety challenges on major social platforms.