Alibaba releases Qwen3.5-35B-A3B, a 35B multimodal model with Apache 2.0 license
Alibaba has released Qwen3.5-35B-A3B, a 35-billion parameter multimodal model capable of processing images and text. The model is published under an Apache 2.0 license and is available on Hugging Face in SafeTensors format with Hugging Face Transformers support.
Alibaba's Qwen team has released Qwen3.5-35B-A3B, a 35-billion parameter multimodal model designed to process both images and text inputs. The model was published on February 24, 2026, and is available via Hugging Face.
Model Specifications
Qwen3.5-35B-A3B operates as an image-text-to-text model, meaning it accepts images and text as input and generates text responses. The model uses a mixture-of-experts (MoE) architecture, as indicated by the qwen3_5_moe tag in its Hugging Face metadata.
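Alibaba has not disclosed the model's expert configuration, but the arithmetic behind an MoE model's activated-parameter count can be sketched. The expert counts and parameter split below are illustrative placeholders, not values from the model card:

```python
# Illustrative MoE parameter accounting. The numbers below are
# hypothetical placeholders, NOT disclosed Qwen3.5 configuration values.

def active_params(total_expert_params: float, num_experts: int,
                  experts_per_token: int, shared_params: float) -> float:
    """Parameters actually used for one token in a top-k-routed MoE model."""
    per_expert = total_expert_params / num_experts
    return shared_params + experts_per_token * per_expert

# Hypothetical split: 34B of expert weights spread across 128 experts,
# 1B of shared (attention/embedding) weights, top-8 routing.
active = active_params(total_expert_params=34e9, num_experts=128,
                       experts_per_token=8, shared_params=1e9)
print(f"{active / 1e9:.3f}B parameters active per token")
```

With these placeholder numbers, only about 3B of the 35B total parameters participate in each forward pass, which is why MoE models can pair a large total parameter count with much lower per-token compute.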
The 35-billion parameter count positions this model in the mid-to-large range for open-weight deployments, offering a balance between computational requirements and capability for enterprises and researchers with moderate infrastructure.
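The "moderate infrastructure" claim can be grounded with a back-of-envelope weight-memory estimate. This counts model weights only and excludes activations, KV cache, and framework overhead:

```python
# Back-of-envelope weight memory for a 35B-parameter model at common
# precisions. Weights only: activations, KV cache, and runtime overhead
# are excluded, so real deployments need headroom beyond these figures.

PARAMS = 35e9  # total parameter count from the release

def weight_gb(params: float, bytes_per_param: float) -> float:
    """Model weight footprint in decimal gigabytes."""
    return params * bytes_per_param / 1e9

for name, width in [("fp32", 4), ("bf16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name}: {weight_gb(PARAMS, width):.1f} GB")
```

At bf16 the weights alone come to roughly 70 GB, which places the model within reach of multi-GPU workstation setups, especially with quantization, rather than requiring large cluster deployments.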
Licensing and Access
The model is released under the permissive Apache 2.0 license, allowing commercial and research use with minimal restrictions. This licensing choice contrasts with some recent model releases that employ more restrictive agreements.
The model is published in SafeTensors format and fully compatible with the Hugging Face Transformers library, enabling straightforward integration into existing ML pipelines. Hugging Face Inference Endpoints compatibility is confirmed, making deployment accessible for users without dedicated infrastructure.
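The model card does not document Qwen3.5's exact API, but Hugging Face image-text-to-text models conventionally take a chat-style message list mixing image and text content, as earlier Qwen vision-language releases did. The sketch below shows that payload structure; the repository path and the commented call pattern are assumptions based on that convention, not confirmed details:

```python
# Sketch of a multimodal chat payload in the structure Transformers
# processors conventionally accept for image-text-to-text models.
# The repository path and call pattern are assumptions modeled on
# earlier Qwen vision-language releases, not confirmed for Qwen3.5.

MODEL_ID = "Qwen/Qwen3.5-35B-A3B"  # assumed repository path

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chart.png"},
            {"type": "text", "text": "Summarize the trend in this chart."},
        ],
    }
]

# With the weights downloaded, the usual call pattern would be roughly:
#   from transformers import AutoProcessor, AutoModelForImageTextToText
#   processor = AutoProcessor.from_pretrained(MODEL_ID)
#   model = AutoModelForImageTextToText.from_pretrained(MODEL_ID)
#   inputs = processor.apply_chat_template(messages, tokenize=True,
#                                          return_dict=True,
#                                          return_tensors="pt")
#   output_ids = model.generate(**inputs, max_new_tokens=256)

print(messages[0]["content"][1]["text"])
```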
Technical Details
The model card indicates support for conversational use cases, suggesting the model has been post-trained for dialogue applications. Alibaba has not disclosed the context window length, training data cutoff date, or benchmark performance scores as of release.
The "A3B" suffix follows the naming convention established by earlier Qwen MoE releases such as Qwen3-30B-A3B, where "A" denotes activated parameters: roughly 3 billion of the 35 billion total parameters are active for any given token, keeping per-token inference compute well below what the headline parameter count suggests.
What This Means
Qwen3.5-35B-A3B expands the open-weight multimodal model landscape with an Apache 2.0-licensed option suitable for commercial applications. The mid-size 35B parameter count fills a practical deployment niche for organizations seeking multimodal capabilities without enterprise-grade infrastructure. However, without published benchmarks or detailed capability comparisons, the model's competitive positioning relative to other open multimodal models remains unclear. The lack of disclosed context window size and training specifics limits technical evaluation at launch.