Healthcare · Friday, May 8, 2026 · 2 min read

OpenAI Launches ‘Trusted Contact’ Safeguard to Support Users at Risk of Self‑Harm

TL;DR

OpenAI has introduced a new "Trusted Contact" safeguard to expand protections for ChatGPT users when conversations indicate possible self-harm. The feature is designed to connect at-risk users with a pre-designated contact and complements existing crisis resources and safety tooling.

Key Takeaways

  • New "Trusted Contact" feature aims to provide an extra layer of support when conversations suggest possible self-harm.
  • The safeguard lets users designate someone who can be alerted or engaged to help in high-risk situations.
  • This builds on OpenAI's existing safety tools and crisis resource guidance for ChatGPT users.
  • If broadly adopted, the feature could help reach vulnerable users sooner and encourage timely, compassionate support.

OpenAI expands ChatGPT safety with a new Trusted Contact safeguard

OpenAI announced a new "Trusted Contact" safeguard designed to strengthen protections for ChatGPT users if conversations indicate possible self-harm. The measure expands the company’s safety toolkit by enabling a route for trusted, human help to be engaged alongside the model’s existing crisis guidance.

The feature is intended to let users identify a trusted person who can be contacted or brought into the loop when the system detects signs of acute risk. By combining AI detection with a human support channel, the safeguard aims to connect people to real-world help more quickly while preserving the conversational support that many users already get from ChatGPT.

Privacy and sensible use are central to the rollout: the feature is framed as an addition to, not a replacement for, professional crisis services and emergency responders. It complements in-chat resources and referrals by providing an option to escalate to a designated supporter when appropriate, helping ensure vulnerable users have multiple pathways to help.

Beyond immediate user benefit, the Trusted Contact safeguard signals a positive industry trend: AI products can embed responsible escalation paths that combine automated detection with human-centered support. If widely adopted, this approach could meaningfully improve outcomes for people in crisis by facilitating faster, compassionate intervention.
