product update

OpenAI adds Trusted Contact feature to alert emergency contacts when ChatGPT detects self-harm discussions

TL;DR

OpenAI launched an optional Trusted Contact feature for ChatGPT that notifies designated emergency contacts when the system detects discussions about self-harm or suicide. The feature requires manual review by trained personnel before sending notifications, and does not share chat transcripts with contacts.


ChatGPT Adds Emergency Contact Alerts for Self-Harm Detection

OpenAI launched an opt-in safety feature that allows ChatGPT users to designate a "Trusted Contact" who will be notified if automated systems detect discussions about self-harm or suicide with the chatbot.

The feature is available to all adult users globally (18 and older, or 19 and older in South Korea). Users can add contact information for one person through their ChatGPT account settings, and the designated contact must accept the invitation within one week.

How the System Works

When OpenAI's automated systems flag a conversation as indicating potential self-harm, ChatGPT first prompts the user to reach out to their Trusted Contact themselves. A small team of trained personnel then manually reviews the flagged conversation. If the review confirms a serious safety concern, the system sends the contact a notification via email, text message, or in-app ChatGPT alert.

According to OpenAI, notifications are "intentionally limited" and do not include chat details or transcripts. Either party can remove themselves from the arrangement at any time through account settings.
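The pipeline described in this section can be sketched as a short decision flow. This is purely illustrative of the reported design (automated flag, user prompt, human review, transcript-free notification); none of the names or types reflect OpenAI's actual code:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    NO_ACTION = auto()   # human reviewer did not confirm a serious concern
    NOTIFIED = auto()    # limited notification sent to the Trusted Contact

@dataclass
class Flag:
    conversation_id: str
    prompted_user: bool = False

def send_limited_notification(conversation_id: str) -> None:
    # Placeholder: email / SMS / in-app alert. Per OpenAI, the message is
    # "intentionally limited" and carries no chat details or transcripts.
    pass

def handle_flag(flag: Flag, reviewer_confirms: bool) -> Outcome:
    # Step 1: ChatGPT prompts the user to contact their Trusted Contact.
    flag.prompted_user = True
    # Step 2: trained personnel manually review the flagged conversation,
    # inserting a human check between detection and notification.
    if not reviewer_confirms:
        return Outcome.NO_ACTION
    # Step 3: only if the review confirms the concern is a notification sent.
    send_limited_notification(flag.conversation_id)
    return Outcome.NOTIFIED
```

The key design choice the sketch captures is that the automated detector alone can never trigger a notification; a human reviewer sits on the only path to `NOTIFIED`.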

Background and Context

The feature expands the parental controls introduced in September 2025, which followed the suicide of a 16-year-old who had spent months confiding in ChatGPT. OpenAI already provides localized crisis helpline information within ChatGPT responses.

Meta deployed a similar feature on Instagram that alerts parents when teenagers repeatedly search for self-harm content. OpenAI previously faced criticism after reports that ChatGPT responses may have reinforced delusional thinking in some users experiencing mental health crises.

What This Means

This represents a shift in how AI companies handle duty-of-care responsibilities for conversational AI. By inserting human review between automated detection and notification, OpenAI acknowledges that purely algorithmic approaches to crisis intervention carry significant false-positive risks. The feature's opt-in nature sidesteps consent issues while addressing concerns that ChatGPT functions as an unmonitored confidant for vulnerable users. However, its effectiveness depends on detection accuracy and on whether at-risk users will proactively enable the feature.
