Sparse MoE

1 article tagged with Sparse MoE

April 27, 2026
model release

Alibaba Qwen Releases 35B Sparse MoE Model with 262K Context and Multimodal Support

Alibaba Cloud has released Qwen3.6-35B-A3B, an open-weight sparse mixture-of-experts model with 35 billion total parameters but only 3 billion active parameters per token. The model features a 262K-token native context window (expandable to 1M tokens), multimodal input support, and an integrated reasoning mode with preserved thinking traces.
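
The "35B total, 3B active" split is the defining property of a sparse MoE: a learned router sends each token to only a few experts, so most parameters sit idle on any given forward pass. The sketch below is a minimal, generic top-k MoE feed-forward layer in PyTorch to illustrate the idea; the class name, layer sizes, expert count, and top-k value are all illustrative assumptions and do not reflect Qwen3.6-35B-A3B's actual architecture or routing scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Illustrative sparse mixture-of-experts feed-forward layer.

    Only the top-k experts chosen by the router run for each token, so the
    active parameter count per token is a small fraction of the total.
    All sizes here are arbitrary, not Qwen's configuration.
    """

    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, seq, d_model)
        tokens = x.reshape(-1, x.shape[-1])                # flatten to (tokens, d_model)
        logits = self.router(tokens)                       # (tokens, num_experts)
        weights, chosen = logits.topk(self.top_k, dim=-1)  # select k experts per token
        weights = F.softmax(weights, dim=-1)               # normalize over chosen experts
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            token_idx, slot = (chosen == e).nonzero(as_tuple=True)  # tokens routed to expert e
            if token_idx.numel() == 0:
                continue                                   # expert unused for this batch
            out[token_idx] += weights[token_idx, slot].unsqueeze(-1) * expert(tokens[token_idx])
        return out.reshape_as(x)

layer = SparseMoELayer()
y = layer(torch.randn(2, 16, 512))  # each token activates only 2 of the 8 experts
```

With 8 experts and top-2 routing, roughly a quarter of the expert parameters are exercised per token, which is the same mechanism that lets a 35B-parameter model run with only about 3B active parameters per token.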