model-release

34 articles tagged with model-release

March 24, 2026

Google DeepMind's Gemini 3.1 Flash-Lite generates websites in real time, 2.5x faster than predecessor

Google DeepMind released Gemini 3.1 Flash-Lite, a model that generates functional websites in real time through a new pseudo-browser demo. The model delivers its first response token 2.5 times faster than Gemini 2.5 Flash and outputs over 360 tokens per second, though output pricing has risen from $0.40 to $1.50 per million tokens.

model release · Stability AI

Stability AI releases Stable Audio 2.5 for enterprise sound production

Stability AI released Stable Audio 2.5, positioned as the first audio generation model built specifically for enterprise sound production. The model introduces improvements in quality and control for creating dynamic compositions adaptable to custom brand needs.

model release · Stability AI

Stable Video 4D 2.0 generates 4D assets from single videos with improved quality

Stability AI has released Stable Video 4D 2.0 (SV4D 2.0), an upgraded version of its multi-view video diffusion model designed to generate 4D assets from single object-centric videos. The update claims to deliver higher-quality outputs on real-world video footage.

March 23, 2026
model release · NVIDIA

NVIDIA releases Nemotron-3-Nano-4B, a 4B parameter model for edge AI with 262K context window

NVIDIA released Nemotron-3-Nano-4B-GGUF on March 16, 2026, a 4-billion parameter small language model (SLM) designed for edge deployment on devices like Jetson Thor and GeForce RTX. The model features a hybrid Mamba-2 and Transformer architecture with a 262K token context window and supports both reasoning and non-reasoning modes via system prompts.
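For a rough sense of scale, a 262,144-token window holds roughly a megabyte of plain English text. A minimal sketch of a capacity check, assuming the common 4-characters-per-token rule of thumb (a generic heuristic, not a property of the Nemotron tokenizer):

```python
# Estimate whether a document fits in a 262,144-token context window.
# The 4-characters-per-token ratio is a rough English-text heuristic,
# not a measured property of the Nemotron tokenizer.
CONTEXT_TOKENS = 262_144
CHARS_PER_TOKEN = 4

def fits_in_context(text: str, reserve_for_output: int = 4_096) -> bool:
    """Return True if the estimated token count leaves room for output."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_TOKENS - reserve_for_output

# ~1 MB of text is ~250K estimated tokens, just under the limit.
print(fits_in_context("x" * 1_000_000))
```

For deployment decisions, an exact count from the model's own tokenizer should replace the heuristic.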

model release · NVIDIA

Nvidia releases Nemotron 3 Super: 120B MoE model with 1M token context

Nvidia has released Nemotron 3 Super, a 120-billion parameter hybrid Mamba-Transformer Mixture-of-Experts model that activates only 12 billion parameters during inference. The open-weight model features a 1-million token context window, multi-token prediction capabilities, and pricing at $0.10 per million input tokens and $0.50 per million output tokens.

model release · Xiaomi

Xiaomi releases MiMo-V2-Pro with 1M context window and 1T+ parameters

Xiaomi released MiMo-V2-Pro on March 18, 2026, a flagship foundation model with over 1 trillion total parameters and a 1,048,576 token context window. The model is priced at $1 per million input tokens and $3 per million output tokens, positioning it as an agent-focused system comparable to top-tier models.

product update · NVIDIA

NVIDIA Nemotron 3 Super now available on Amazon Bedrock with 256K context window

NVIDIA Nemotron 3 Super, a hybrid Mixture of Experts model with 120B parameters and 12B active parameters, is now available as a fully managed model on Amazon Bedrock. The model supports up to 256K token context length and claims 5x higher throughput than the previous Nemotron Super and 2x higher accuracy on reasoning tasks.

model release

MiniMax M2.7 used autonomous loops to optimize its own training process

MiniMax released M2.7, a model that autonomously participated in its own development through self-optimization loops. The model ran over 100 optimization rounds on internal coding tasks, achieving a 30% performance boost, and scored 66.6% on OpenAI's MLE Bench Lite—competitive with Gemini 3.1 Pro and GPT-5.4.

March 19, 2026
model release

Cursor releases Composer 2 at $0.50/$2.50 per 1M tokens, undercutting Claude Opus and GPT-5.4 on pricing

Cursor released Composer 2, a code-specialized model priced at $0.50 per million input tokens and $2.50 per million output tokens, roughly 90% cheaper than Claude Opus 4.6 ($5.00/$25.00) and roughly 80% cheaper than GPT-5.4 ($2.50/$15.00). The model scores 61.3 on Cursor's internal CursorBench, ahead of Claude Opus 4.6 (58.2) but below GPT-5.4 Thinking (63.9).
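The pricing gap is easy to quantify from the per-million-token rates quoted above. A minimal sketch (the dictionary keys and the 3M-input/1M-output workload are illustrative, not from the announcement):

```python
# Per-million-token prices (input, output) as quoted in the article.
PRICES = {
    "composer-2": (0.50, 2.50),
    "claude-opus-4.6": (5.00, 25.00),
    "gpt-5.4": (2.50, 15.00),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one job at the quoted per-million-token rates."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Example job: 3M input tokens, 1M output tokens.
for model in PRICES:
    print(model, cost_usd(model, 3_000_000, 1_000_000))
```

On that workload the sketch gives $4.00 for Composer 2 versus $40.00 for Claude Opus 4.6 and $22.50 for GPT-5.4, matching the rough percentages in the summary.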

March 18, 2026
model release · OpenAI

OpenAI releases GPT-4o mini with 128K context at $0.15/$0.60 per 1M tokens

OpenAI released GPT-4o mini on July 18, 2024, a compact multimodal model with 128,000 token context window priced at $0.15 per million input tokens and $0.60 per million output tokens. The model achieves 82% on MMLU and claims to rank higher than GPT-4 on chat preference leaderboards while costing 60% less than GPT-3.5 Turbo.

March 17, 2026
model release · OpenAI

OpenAI releases GPT-5.4 mini and nano with 3-4x price increases but major performance gains

OpenAI has released GPT-5.4 mini and GPT-5.4 nano, compact models optimized for coding and subagent tasks. The new models deliver significant performance improvements—GPT-5.4 mini reaches 54.4% on SWE-Bench Pro versus 45.7% for GPT-5 mini—but cost 3-4x more per input token than their predecessors.

model release · OpenAI

OpenAI's GPT-5.4 mini now available in GitHub Copilot

OpenAI has released GPT-5.4 mini, the lightweight variant of its agentic coding model GPT-5.4, in GitHub Copilot. The model represents OpenAI's highest-performing mini offering to date for code generation and completion tasks.

model release

Mistral AI releases Mistral Small 4, claims improved performance on reasoning tasks

Mistral AI has released Mistral Small 4, the latest iteration of its small-scale language model. The company claims improvements in reasoning and coding capabilities, though specific benchmark scores and pricing details have not been publicly disclosed.

March 14, 2026
funding · NVIDIA

Nvidia to spend $26B on open-weight AI models, filing reveals

Nvidia will invest $26 billion over the next five years to build open-weight AI models, according to a 2025 financial filing confirmed by executives. The move signals a strategic shift from chipmaker to AI frontier lab, with the company releasing Nemotron 3 Super (120B parameters) and claiming it outperforms GPT-OSS on multiple benchmarks.

model release

Hume AI open-sources TADA: speech model 5x faster than rivals with zero hallucination

Hume AI has open-sourced TADA, a speech generation model that maps exactly one audio signal to each text token, achieving 5x faster processing than comparable systems. The model produced zero transcription hallucinations across 1,000+ test samples and runs on smartphones, available in 1B and 3B parameter versions under MIT license.

March 12, 2026
model release · NVIDIA

NVIDIA releases Nemotron-3-Super-120B, a 120B parameter model with latent MoE architecture

NVIDIA has released Nemotron-3-Super-120B-A12B-NVFP4, a 120-billion parameter text generation model featuring a latent Mixture-of-Experts (MoE) architecture. The model supports 8 languages including English, French, Spanish, Italian, German, Japanese, and Chinese, and is available on Hugging Face with 8-bit quantization support through NVIDIA's ModelOpt toolkit.

March 11, 2026
model release · NVIDIA

NVIDIA releases BF16 variant of Nemotron-3-Super-120B with latent MoE architecture

NVIDIA has released Nemotron-3-Super-120B-A12B-BF16, a 120 billion parameter model designed for text generation and conversational tasks. The model employs a latent mixture-of-experts (MoE) architecture and supports multiple languages including English, French, Spanish, Italian, German, Japanese, and Chinese.

model release

Google's Gemini Embedding 2 unifies text, image, video, and audio in single vector space

Google has released Gemini Embedding 2, its first native multimodal embedding model that represents text, images, video, audio, and documents in a unified vector space. The model eliminates the need for separate embedding models across different modalities in AI pipelines.

March 9, 2026
model release

IBM releases Granite 4.0 1B Speech: multilingual model for edge devices

IBM has released Granite 4.0 1B Speech, a 1 billion parameter multilingual speech model designed for edge deployment. The model supports multiple languages and is optimized for devices with limited computational resources.

March 5, 2026
product update · OpenAI

OpenAI Python SDK v2.25.0 adds GPT-5.4 support with new tool search and computer control features

OpenAI has released version 2.25.0 of its Python SDK, adding support for GPT-5.4 and introducing a new tool search feature alongside a computer control tool for agent-based automation. The update, released March 5, 2026, also includes API schema refinements and parameter changes to the prompt cache and message handling.

model release · OpenAI

OpenAI launches GPT-5.4 with native computer use capabilities for autonomous agents

OpenAI has launched GPT-5.4, its latest model with native computer use capabilities that allow it to operate computers and complete tasks across applications. The release represents a step toward autonomous AI agents that can handle complex jobs independently. The model includes advancements in reasoning, coding, and professional work with spreadsheets, documents, and presentations.

model release · OpenAI

OpenAI releases GPT-5.4 with Pro and Thinking variants for professional use

OpenAI has launched GPT-5.4, which the company describes as its most capable and efficient frontier model for professional work. The release includes Pro and Thinking variants, though specific technical specifications and pricing remain unclear.

model release

Step-3.5-Flash-Base: StepFun releases lightweight text generation model

StepFun has released Step-3.5-Flash-Base, a text generation model available on Hugging Face under Apache 2.0 license. The model is part of the Step 3.5 series and focuses on efficient inference.

March 3, 2026
model release

Google releases Gemini 3.1 Flash-Lite, fastest model in the Gemini 3 series

Google has released Gemini 3.1 Flash-Lite, positioning it as the fastest and most cost-efficient model in its Gemini 3 series. The release targets deployment scenarios requiring high-speed inference at reduced computational cost.

March 1, 2026
model release

Alibaba releases Qwen3.5-35B-A3B-FP8, a quantized multimodal model for efficient deployment

Alibaba's Qwen team released Qwen3.5-35B-A3B-FP8 on Hugging Face, a quantized version of their 35-billion parameter multimodal model. The FP8 quantization reduces model size and memory requirements while maintaining the base model's image-text-to-text capabilities. The model is compatible with standard Transformers endpoints and Azure deployment.
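The memory saving from FP8 quantization is straightforward to estimate: weight storage drops from two bytes per parameter (BF16) to one. A back-of-envelope sketch, counting weights only (activations and KV cache excluded):

```python
# Weight-memory footprint for a 35B-parameter model at different precisions.
# Weights only; runtime activations and KV cache add further memory on top.
PARAMS = 35e9

def weights_gb(bytes_per_param: float) -> float:
    """Gigabytes of weight storage at the given bytes-per-parameter precision."""
    return PARAMS * bytes_per_param / 1e9

print(weights_gb(2.0))  # BF16: 70.0 GB
print(weights_gb(1.0))  # FP8:  35.0 GB
```

Halving the weight footprint is what makes single-node or smaller-GPU deployment of a 35B model practical without changing the architecture.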

February 28, 2026
model release

Perplexity open-sources embedding models matching Google and Alibaba with lower memory requirements

Perplexity has open-sourced two text embedding models designed to match or exceed the performance of Google's and Alibaba's embeddings while requiring significantly less memory. The move brings competitive embedding technology into the open-source ecosystem.

February 27, 2026
model release · DeepSeek

DeepSeek releases R1 reasoning model with chain-of-thought capabilities

DeepSeek has released DeepSeek-R1, a text generation model featuring reasoning capabilities through chain-of-thought processing. The model was published January 20, 2025 and has accumulated over 830,000 downloads on Hugging Face.

February 26, 2026
model release · Google DeepMind

Google DeepMind releases Nano Banana 2 image model with Pro-level capabilities at faster speeds

Google DeepMind has released Nano Banana 2, an image generation model that combines advanced world knowledge and subject consistency with faster inference speeds comparable to its Flash offering. The model is positioned as production-ready with capabilities previously associated with Pro-tier performance.

February 24, 2026
model release

LocoreMind releases LocoOperator-4B, a 4B parameter agent model based on Qwen3

LocoreMind has released LocoOperator-4B, a 4 billion parameter text generation model fine-tuned from Qwen/Qwen3-4B-Instruct-2507. The model is optimized for agent workflows and tool-calling capabilities and is available under an MIT license.

February 23, 2026
model release

Guide Labs open-sources Steerling-8B, an interpretable 8B parameter LLM

Guide Labs has open-sourced Steerling-8B, an 8 billion parameter language model built with a new architecture specifically designed to make the model's reasoning and actions easily interpretable. The release addresses a persistent challenge in AI development: understanding how large language models arrive at their outputs.

February 20, 2026
model release

Zyphra releases ZUNA model under Apache 2.0 license

Zyphra has released ZUNA, an open-source model available on Hugging Face under the Apache 2.0 license. The model has been downloaded 529 times since its February 4, 2026 release.

model release

Google announces Gemini 3.1 Pro for complex problem-solving tasks

Google announced Gemini 3.1 Pro, positioning the model for complex problem-solving tasks requiring deeper reasoning than previous versions. The release follows Gemini 3 Pro (November 2025) and Gemini 3 Flash (December 2025).

model release

Alibaba Qwen 3.5 closes performance gap with proprietary models at lower inference cost

Alibaba has released the Qwen 3.5 series, open-source models that claim performance comparable to frontier proprietary models while running on commodity hardware. The release signals a shift in AI model economics, offering enterprises lower inference costs and greater deployment flexibility than closed alternatives.