Gemma 4 26B A4B (Instruction-Tuned)

Google DeepMind · United States
Status: active
Context window: 256K tokens

Version History

gemma-4 (major release)

Gemma 4 introduces multimodal capabilities (text, image, and video support), reasoning modes, 256K context windows, and a Mixture-of-Experts architecture. The 26B A4B variant uses sparse activation to approach the performance of the dense 31B model while matching the inference speed of a 4B model.
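The sparse-activation idea behind the A4B variant can be illustrated with a toy Mixture-of-Experts layer: a router scores every expert for each token, but only the top-k experts actually run, so the active parameter count stays far below the total. This is a minimal sketch with made-up sizes (8 experts, top-2 routing, 16-dim hidden state), not the actual Gemma 4 architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # hypothetical expert count for illustration
TOP_K = 2         # experts actually executed per token
D_MODEL = 16      # toy hidden size

# Each "expert" is a small feed-forward weight matrix; the router
# maps a token vector to one score per expert.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1
           for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.1

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts only."""
    logits = x @ router                    # shape (NUM_EXPERTS,)
    top = np.argsort(logits)[-TOP_K:]      # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts
    # Only TOP_K of NUM_EXPERTS weight matrices are touched per token,
    # which is why "active" parameters can be a small fraction of the total.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
```

In a real MoE transformer the same principle applies per layer, which is how a model with ~25B total parameters can activate only ~4B per token.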

Coverage

Model release · Google DeepMind

Google DeepMind releases Gemma 4: multimodal models up to 31B parameters with 256K context

Google DeepMind released the Gemma 4 family of open-weights multimodal models in four sizes: E2B (2.3B effective), E4B (4.5B effective), 26B A4B (25.2B total, 3.8B active), and 31B dense. All models support text and image input with 128K-256K context windows, reasoning modes, and native function calling for agentic workflows.