OpenAI expands ChatGPT safety with a new Trusted Contact safeguard
OpenAI announced a new "Trusted Contact" safeguard designed to strengthen protections for ChatGPT users whose conversations indicate possible self-harm. The measure expands the company's safety toolkit by adding a route to trusted human help alongside the model's existing crisis guidance.
The feature is intended to let users identify a trusted person who can be contacted or brought into the loop when the system detects signs of acute risk. By combining AI detection with a human support channel, the safeguard aims to connect people to real-world help more quickly while preserving the conversational support that many users already get from ChatGPT.
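OpenAI has not published implementation details, so the sketch below is purely illustrative: every name in it (TrustedContact, SafetyAssessment, respond_to_risk, the risk levels) is invented for this example and is not part of any OpenAI API. It shows, in general terms, how a detection signal might be routed into both in-chat crisis resources and an opt-in escalation to a designated contact.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical illustration only; names and structure are assumptions,
# not OpenAI's actual implementation.

@dataclass
class TrustedContact:
    name: str
    channel: str  # e.g. "sms" or "email"

@dataclass
class SafetyAssessment:
    risk_level: str  # assumed levels: "none" | "elevated" | "acute"

def respond_to_risk(assessment: SafetyAssessment,
                    contact: Optional[TrustedContact]) -> list[str]:
    """Route a detected-risk event: always surface in-chat crisis
    resources, and, only if the user has designated a trusted contact,
    offer to bring that person into the loop."""
    actions = ["show_crisis_resources"]
    if assessment.risk_level == "acute" and contact is not None:
        # Escalation is offered rather than automatic, so the user
        # stays in control of whether anyone is actually notified.
        actions.append(f"offer_to_notify:{contact.name} via {contact.channel}")
    return actions

# Example: an acute-risk assessment with a designated contact yields
# both in-chat resources and an offer to notify the contact.
print(respond_to_risk(SafetyAssessment("acute"),
                      TrustedContact("Alex", "sms")))
```

In this pattern the key design choice is that escalation is an offer, not an automatic action, which matches the announcement's emphasis on keeping the user in control while still opening a human support channel.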
Privacy and appropriate use are central to the rollout: the feature is framed as an addition to, not a replacement for, professional crisis services and emergency responders. It complements in-chat resources and referrals with an option to escalate to a designated supporter when appropriate, giving vulnerable users multiple pathways to help.
Beyond immediate user benefit, the Trusted Contact safeguard signals a positive industry trend: AI products can embed responsible escalation paths that combine automated detection with human-centered support. If widely adopted, this approach could meaningfully improve outcomes for people in crisis by facilitating faster, compassionate intervention.