product update

OpenAI launches Trusted Contact, a feature that lets ChatGPT alert a designated contact when it detects suicide risk

TL;DR

OpenAI has launched Trusted Contact for ChatGPT, allowing users 18 and older to designate one adult contact who can be notified if the company's trained human review team confirms serious self-harm risk. The launch follows OpenAI's disclosure that more than 1 million of ChatGPT's 800 million weekly users express suicidal thoughts in conversations, as well as a 2025 wrongful death lawsuit.

2 min read


OpenAI has launched Trusted Contact for ChatGPT, a new safety feature allowing users 18 and older to designate one adult contact who can be notified if the company detects serious self-harm risk. The feature includes human review of every alert before notification.

Scale of mental health usage

According to OpenAI, more than 1 million of its 800 million weekly users express suicidal thoughts in their conversations with ChatGPT. The company previously told the BBC that users increasingly rely on the chatbot for mental health support.

How Trusted Contact works

Users can nominate one adult contact through ChatGPT settings. The designated person must accept the invitation within one week, or the user can select a different contact. When ChatGPT's systems detect potential self-harm risk, the user first receives a warning that their contact may be notified, along with encouragement to reach out directly and suggested conversation starters.

A "small team of specially trained people" then reviews the situation. Only if they determine serious self-harm risk does the system send an email, text message, or in-app notification to the trusted contact stating: "[The user] may be going through a difficult time. As their Trusted Contact, we encourage you to check in with them."

Contacts can view additional details confirming that OpenAI detected a discussion of suicide, but to protect user privacy they receive no conversation transcripts. OpenAI says it aims to complete these human reviews within one hour.

Background and legal context

The feature follows a 2025 wrongful death lawsuit against OpenAI that alleged ChatGPT enabled a teenager's suicide. The lawsuit claimed the teenager discussed four previous suicide attempts with ChatGPT, which then allegedly helped plan the fatal attempt.

A BBC investigation published in November 2025 found at least one instance where ChatGPT advised a user on methods of suicide. OpenAI told the BBC it had improved the chatbot's responses to users in distress since those incidents.

Trusted Contact builds on ChatGPT's existing parental controls, extending safety features to adult users.

What this means

This marks the first AI chatbot feature designed to break the privacy boundary between user and platform by alerting third parties during mental health crises. The human-review requirement addresses concerns about automated false positives, but OpenAI acknowledges "no system is perfect" and notifications "may not always reflect exactly what someone is experiencing." The 1-million-user scale of suicidal ideation discussions on ChatGPT indicates AI chatbots have become de facto mental health support tools, despite not being designed or regulated for that purpose. The feature's effectiveness will depend on whether at-risk users opt in and maintain active trusted contacts.

If you or someone you know is experiencing suicidal thoughts, contact the 988 Suicide & Crisis Lifeline by calling or texting 988 (24/7, formerly reachable at 1-800-273-8255) or use its online chat.
