
Nvidia to spend $26B on open-weight AI models, filing reveals

Nvidia will invest $26 billion over the next five years to build open-weight AI models, according to a 2025 financial filing confirmed by executives. The move signals a strategic shift from chipmaker to AI frontier lab, with the company releasing Nemotron 3 Super (128B parameters) and claiming it outperforms GPT-OSS on multiple benchmarks.


Nvidia will invest $26 billion over the next five years to develop open-weight artificial intelligence models, according to a 2025 financial filing. The company's executives confirmed the investment in interviews with WIRED, marking a significant strategic pivot toward competing directly with frontier labs like OpenAI and DeepSeek.

What This Means for Nvidia's Strategy

The investment signals Nvidia's evolution from a chipmaker into a full AI research organization. By releasing open-weight models, whose parameters and architectural details are publicly available, Nvidia gains two advantages: the models are optimized for Nvidia hardware, creating a built-in market for its chips, and the releases counter the Chinese open models gaining adoption among startups and researchers globally.

This approach differs sharply from OpenAI and Anthropic, which restrict their best models to cloud access. Meta, which pioneered open models with Llama in 2023, has signaled it may not continue releasing future models openly. Meanwhile, Chinese companies including DeepSeek, Alibaba, Moonshot AI, and MiniMax have released powerful open models freely, attracting Western developers and researchers.

Nemotron 3 Super Details

Nvidia released Nemotron 3 Super on Wednesday, its most capable open-weight model to date. The model features 128 billion parameters, comparable in scale to OpenAI's GPT-OSS, and Nvidia claims it delivers superior performance across benchmarks.

Benchmark performance:

  • AI Index score: 37, versus 33 for GPT-OSS
  • PinchBench rank: #1 (a new benchmark measuring model control of OpenClaw)

Nvidia has also completed pretraining of a 550-billion-parameter model.

The company implemented several technical innovations in Nemotron 3 to improve reasoning abilities, long-context handling, and responsiveness to reinforcement learning.

Strategic Context

Nvidia's hardware remains the gold standard for training large language models, with customers spending billions on the company's chips. However, the rise of efficient open models from China—particularly DeepSeek's January 2025 release demonstrating cheaper training methods—threatens Nvidia's dominance. DeepSeek's rumored next model may have been trained exclusively on sanctioned Huawei chips, potentially shifting developer preference.

By providing a robust, well-maintained open alternative tuned to Nvidia hardware, the company aims to retain ecosystem loyalty while positioning itself as supporting "openness" in AI development.

Nvidia VP Bryan Catanzaro stated: "It's in our interest to help the ecosystem develop." Kari Briski, VP of generative AI software, added that open models help Nvidia test and refine its datacenter infrastructure and hardware roadmaps.

Industry Response

AI researchers including Nathan Lambert (Allen Institute for AI) and Andy Konwinski (Laude Institute) view Nvidia's investment as strategically significant. Konwinski noted Nvidia's unique position "at the nexus of AI research" makes this commitment to openness "an unprecedented signal."

Industry observers warn that if open innovation continues shifting to China, it could disadvantage the US long-term. Nvidia frames its investment as providing an American alternative to Chinese open models while maintaining global ecosystem diversity.

What This Means

Nvidia is making a calculated bet: by dominating open-weight model development, it locks in hardware dependence while defending against Chinese model adoption, converting its chip advantage into a research advantage. Success requires Nemotron models to match or exceed Chinese alternatives in capability and usability, a challenge given the momentum Alibaba's Qwen and DeepSeek have with Western developers. The $26 billion commitment signals that Nvidia sees this as essential infrastructure defense.
