ChatGPT introduces Trusted Contact to support users in crisis
OpenAI has launched Trusted Contact, an optional safety feature for ChatGPT designed to help users who may be at risk of serious self-harm. When the system detects high-concern signals, the feature can notify a person you trust so they can offer timely, real-world support.
The addition expands ChatGPT's safety toolkit by creating a clear pathway from AI assistance to human-led intervention. OpenAI emphasizes that the feature is optional and built with user control in mind: users choose who their trusted contact is, and notifications are sent only under specific, serious circumstances.
Trusted Contact complements ChatGPT's existing guidance and support resources. By pairing risk detection with a designated human contact, the feature can help connect people to loved ones or professionals who can provide immediate assistance, potentially reducing harm in urgent situations.
Looking ahead: This update reflects a broader trend of integrating AI systems with supportive human networks to improve outcomes in mental-health crises while respecting privacy and user autonomy.