product update

Nvidia partners with Mira Murati's Thinking Machines Lab in long-term deal

TL;DR

Nvidia and Thinking Machines Lab, founded by former OpenAI executive Mira Murati, have announced a long-term partnership. Details on the scope and terms of the collaboration remain limited.


Nvidia has announced a long-term partnership with Thinking Machines Lab, the AI startup founded by former OpenAI executive Mira Murati.

Thinking Machines Lab, founded after Murati's departure from OpenAI in September 2024, is her first major public venture since serving as Chief Technology Officer at the San Francisco-based AI company.

Partnership Details

Specific terms of the partnership have not been disclosed. The collaboration marks a significant alignment between Nvidia—the dominant supplier of AI accelerators and GPUs—and an emerging AI research and development organization led by one of the industry's prominent figures.

At OpenAI, Murati held key technical leadership roles, overseeing product development and technical strategy. Her departure signaled an intent to pursue independent AI research and development.

Strategic Context

The partnership occurs as Nvidia deepens its ecosystem of collaborations with AI companies and research organizations. Nvidia's GPUs and AI infrastructure remain foundational to most large-scale AI development, giving the company leverage in partnership negotiations.

Thinking Machines Lab's exact research focus and commercial objectives have not been publicly detailed. The lab's name suggests emphasis on reasoning and inference capabilities in AI systems—areas of active development across the broader AI industry.

What This Means

This partnership signals continued confidence in Murati as a technical leader in AI, even as she operates independently from established organizations. For Nvidia, the deal extends its reach into emerging AI research efforts. The lack of disclosed financial terms or specific technical commitments leaves the partnership's actual scope unclear—it may range from infrastructure access to collaborative research to equity investment. Broader impact will depend on what Thinking Machines Lab ultimately builds and whether the partnership produces public research or commercial products.

Related Articles

product update

OpenAI launches Trusted Contact feature allowing ChatGPT to alert designated friends during suicide risk

OpenAI has launched Trusted Contact for ChatGPT, allowing users 18+ to designate one adult contact who can be notified if the company's trained human review team detects serious self-harm risk. The feature comes after over 1 million of ChatGPT's 800 million weekly users expressed suicidal thoughts in conversations, and follows a 2025 wrongful death lawsuit.

product update

OpenAI launches GPT-Realtime-2 with GPT-5-class reasoning, adds real-time translation across 70 languages

OpenAI has added three voice intelligence features to its Realtime API: GPT-Realtime-2 with GPT-5-class reasoning for complex conversational requests, GPT-Realtime-Translate supporting 70 input languages and 13 output languages, and GPT-Realtime-Whisper for live speech-to-text transcription. Translation and transcription are billed by the minute, while GPT-Realtime-2 uses token-based pricing.
