ChatGPT adds an opt-in safety layer to connect users with loved ones
OpenAI has launched a new optional "Trusted Contact" feature for ChatGPT that allows adult users to designate a friend, family member, or caregiver to be alerted if the model detects conversations suggesting self-harm or suicidal thoughts. The tool is designed to offer a human connection when someone may be at risk, giving people an extra pathway to support beyond automated responses and crisis hotlines.
Trusted Contact rests on a simple, expert-validated premise: when someone may be in crisis, connecting with a person they know and trust can make a meaningful difference. The notification system is optional and intended to complement — not replace — localized helplines and other emergency resources already available through the chatbot.
The rollout emphasizes user choice and privacy: only adult users can set a Trusted Contact, and alerts are triggered by the model's safety detection rather than sent indiscriminately. The result is a pragmatic bridge between AI-driven detection and real-world support from people who know the user.
By adding a way to loop in trusted loved ones during moments of crisis, OpenAI is taking a constructive step toward making AI assistants safer and more socially helpful. If widely adopted, the feature could deliver timely help to people who might not otherwise reach out, reinforcing human connection as a core part of digital safety.