nvidia

19 articles tagged with nvidia

March 24, 2026
model release · Stability AI

Stability AI and NVIDIA launch Stable Diffusion 3.5 NIM for faster image generation

Stability AI and NVIDIA have launched Stable Diffusion 3.5 NIM, a microservice designed to accelerate image generation performance and simplify enterprise deployment. The collaboration packages Stable Diffusion 3.5 as an NVIDIA NIM (NVIDIA Inference Microservice) for optimized inference.

changelog · Stability AI

Stable Diffusion 3.5 TensorRT optimization delivers 2x faster generation, 40% less VRAM on RTX GPUs

Stability AI has released TensorRT-optimized versions of the Stable Diffusion 3.5 model family in collaboration with NVIDIA. The optimization uses FP8 quantization to achieve 2x faster generation speed and 40% lower VRAM requirements on supported RTX GPUs.

March 23, 2026
model release · NVIDIA

Nvidia releases Nemotron 3 Super: 120B MoE model with 1M token context

Nvidia has released Nemotron 3 Super, a 120-billion parameter hybrid Mamba-Transformer Mixture-of-Experts model that activates only 12 billion parameters during inference. The open-weight model features a 1-million token context window, multi-token prediction capabilities, and pricing at $0.10 per million input tokens and $0.50 per million output tokens.

product update · NVIDIA

NVIDIA Nemotron 3 Super now available on Amazon Bedrock with 256K context window

NVIDIA Nemotron 3 Super, a hybrid Mixture of Experts model with 120B parameters and 12B active parameters, is now available as a fully managed model on Amazon Bedrock. The model supports up to 256K token context length and claims 5x higher throughput efficiency over the previous Nemotron Super and 2x higher accuracy on reasoning tasks.

model release · NVIDIA

NVIDIA releases Nemotron 3 Content Safety 4B for multimodal, multilingual moderation

NVIDIA released Nemotron 3 Content Safety 4B, an open-source multimodal safety model designed to moderate content across text, images, and multiple languages. Built on Gemma-3 4B-IT with a 128K context window, the model achieved 84% average accuracy on multimodal safety benchmarks and supports over 140 languages through culturally-aware training data.

March 14, 2026
funding · NVIDIA

Nvidia to spend $26B on open-weight AI models, filing reveals

Nvidia will invest $26 billion over the next five years to build open-weight AI models, according to a 2025 financial filing confirmed by executives. The move signals a strategic shift from chipmaker to AI frontier lab, with the company releasing Nemotron 3 Super (120B parameters) and claiming it outperforms GPT-OSS on multiple benchmarks.

March 12, 2026
product update · NVIDIA

Nvidia to spend $26B on open-weight AI models, targeting Chinese competition and developer lock-in

An SEC filing reveals Nvidia plans to spend $26 billion on open-weight AI models over the next five years. The investment targets the open-source gap left by OpenAI, Meta, and Anthropic while countering the rise of Chinese open-source models and deepening developer dependence on Nvidia hardware.

product update

Meta unveils four custom AI inference chips to cut costs and reduce Nvidia dependency

Meta has unveiled four generations of custom-designed AI chips focused on inference workloads, aiming to reduce inference costs across its platforms serving billions of users. The move represents a significant step toward reducing Meta's dependence on GPU manufacturers like Nvidia and AMD.

model release · NVIDIA

NVIDIA releases Nemotron-3-Super-120B, a 120B parameter model with latent MoE architecture

NVIDIA has released Nemotron-3-Super-120B-A12B-NVFP4, a 120-billion parameter text generation model featuring a latent Mixture-of-Experts (MoE) architecture. The model supports 8 languages including English, French, Spanish, Italian, German, Japanese, and Chinese, and is available on Hugging Face with 8-bit quantization support through NVIDIA's ModelOpt toolkit.

March 11, 2026
model release · NVIDIA

NVIDIA releases Nemotron-3-Super-120B, a 120B parameter model with latent MoE architecture

NVIDIA has released Nemotron-3-Super-120B-A12B-BF16, a 120 billion parameter model designed for text generation and conversational tasks. The model employs a latent mixture-of-experts (MoE) architecture and supports multiple languages including English, French, Spanish, Italian, German, Japanese, and Chinese.

March 10, 2026
product update · NVIDIA

Nvidia partners with Mira Murati's Thinking Machines Lab in long-term deal

Nvidia and Thinking Machines Lab, founded by former OpenAI executive Mira Murati, have announced a long-term partnership. Details on the scope and terms of the collaboration remain limited.

funding

Thinking Machines Lab secures Nvidia compute deal with 1+ gigawatt power allocation

Thinking Machines Lab has secured a multi-year compute deal with Nvidia involving at least 1 gigawatt of compute capacity, according to the company. The agreement also includes a strategic investment from Nvidia, marking a significant infrastructure commitment for the AI research organization.

March 9, 2026
product update · NVIDIA

Nvidia planning open-source AI agent platform ahead of developer conference

Nvidia is preparing to launch an open-source AI agent platform, according to reports ahead of the company's annual developer conference. The move mirrors approaches by competitors like OpenAI in building agent-based AI systems.

product update · NVIDIA

NVIDIA Nemotron 3 Nano now available on Amazon Bedrock as serverless model

Amazon Bedrock now offers NVIDIA's Nemotron 3 Nano as a fully managed serverless model, expanding its Nemotron portfolio alongside previously available Nemotron 2 Nano 9B and Nemotron 2 Nano VL 12B variants. The addition enables developers to deploy NVIDIA's smallest inference-optimized model without managing infrastructure.

funding

Nvidia-backed Nscale raises $2B, hits $14.6B valuation with Sandberg and Clegg joining board

Nvidia-backed British AI infrastructure startup Nscale has raised $2 billion in a new funding round, bringing its valuation to $14.6 billion. The round marks a significant milestone for the infrastructure-focused startup, with Meta's former COO Sheryl Sandberg and Meta's former VP of Global Affairs Nick Clegg joining the board.

March 2, 2026
product update · NVIDIA

Nvidia invests $4 billion in photonics companies Lumentum and Coherent

Nvidia announced Monday it is investing $2 billion each into photonics companies Lumentum and Coherent to develop optical transceivers, circuit switches, and lasers for next-generation AI data centers. The technology aims to improve energy efficiency, data transfer speeds, and bandwidth in data center infrastructure.

February 27, 2026
product update

Meta signs multi-billion dollar TPU rental deal with Google, challenging Nvidia's chip dominance

Meta has signed a multi-billion dollar deal to rent Google's TPU (Tensor Processing Unit) chips for training its AI models, marking a significant shift away from Nvidia's dominance in AI infrastructure. The arrangement provides Meta with alternative compute capacity while signaling growing competition in the specialized AI chip market.

funding · OpenAI

OpenAI closes $110B funding round from Amazon, Nvidia, SoftBank at $730B valuation

OpenAI has closed a $110 billion funding round with Amazon committing $50 billion, Nvidia $30 billion, and SoftBank $30 billion. The company is now valued at $730 billion, following a previous $40 billion round in 2025. The funding includes custom model development agreements between OpenAI and Amazon Web Services.

February 20, 2026
funding · NVIDIA

Nvidia reportedly planning $30 billion investment in OpenAI

Nvidia is reportedly planning a $30 billion investment in OpenAI, according to Reuters citing sources familiar with the matter. The deal would represent one of the largest funding commitments in the AI sector to date. Terms and timeline have not been officially confirmed by either company.