Healthcare · Thursday, May 7, 2026 · 2 min read

ChatGPT’s 'Trusted Contact' Links Users to Loved Ones in Crisis

Source: The Verge AI

TL;DR

OpenAI is rolling out an optional "Trusted Contact" safety feature in ChatGPT that lets adult users designate a friend, family member, or caregiver to be notified if the system detects signs of self-harm or suicidal ideation. The feature aims to connect people in crisis with trusted supports alongside existing helplines, adding a practical, expert-backed safety layer.

Key Takeaways

  • Users can opt in and assign a Trusted Contact who will be alerted if ChatGPT detects safety concerns like self-harm or suicidal ideation.
  • OpenAI frames the feature around an expert-validated idea: connecting someone in crisis with a known support can make a meaningful difference.
  • Trusted Contact complements localized helplines and other safety resources rather than replacing them.
  • The feature is optional and limited to adult users, balancing safety benefits with user control and privacy.

ChatGPT adds an opt-in safety layer to connect users with loved ones

OpenAI has launched a new optional "Trusted Contact" feature for ChatGPT that allows adult users to designate a friend, family member, or caregiver to be alerted if the model detects conversations suggesting self-harm or suicidal thoughts. The tool is designed to offer a human connection when someone may be at risk, giving people an extra pathway to support beyond automated responses and crisis hotlines.

Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference. The notification system is optional and intended to complement — not replace — localized helplines and other emergency resources already available through the chatbot.

The rollout emphasizes user choice and privacy: only adult users can set a Trusted Contact, and alerts are triggered by the model's safety detection rather than sent indiscriminately. For many users, this creates a pragmatic bridge between AI-driven detection and real-world support from people who know them.

By adding a way to loop in trusted loved ones during moments of crisis, OpenAI is taking a constructive step toward making AI assistants safer and more socially helpful. If widely adopted, the feature could provide timely help for people who otherwise might not reach out, reinforcing human connection as a core part of digital safety workflows.
