open-source-ai
4 articles tagged with open-source-ai
Google DeepMind releases Gemma 4 family with 256K context window and multimodal capabilities
Google DeepMind released the Gemma 4 family of open-weight models in four sizes (2.3B to 31B parameters) with multimodal support for text, images, video, and audio. The flagship 31B model scores 85.2% on MMLU Pro and 89.2% on AIME 2024 and supports context windows up to 256K tokens. All models feature configurable reasoning modes, are optimized for deployment from mobile devices to servers, and are released under the Apache 2.0 license.
Nvidia to spend $26B on open-weight AI models, filing reveals
Nvidia will invest $26 billion over the next five years to build open-weight AI models, according to a 2025 financial filing confirmed by executives. The move signals a strategic shift from chipmaker to AI frontier lab: the company has released Nemotron 3 Super (128B parameters) and claims it outperforms GPT-OSS on multiple benchmarks.
Nvidia to spend $26B on open-weight AI models, targeting Chinese competition and developer lock-in
An SEC filing reveals Nvidia plans to spend $26 billion on open-weight AI models over the next five years. The investment targets the open-source gap left by OpenAI, Meta, and Anthropic while countering the rise of Chinese open-source models and deepening developer dependence on Nvidia hardware.
Alibaba Qwen 3.5 closes performance gap with proprietary models at lower inference cost
Alibaba has released the Qwen 3.5 series, a family of open-source models that it claims match the performance of frontier proprietary models while running on commodity hardware. The release signals a shift in AI model economics, offering enterprises lower inference costs and greater deployment flexibility than closed alternatives.