qwen

22 articles tagged with qwen

April 29, 2026
research · Apple

Apple researchers combine diffusion and autoregressive techniques to improve LLM reasoning accuracy

Apple researchers, alongside UC San Diego, have published LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning, a framework that combines diffusion models with autoregressive generation. The system runs multiple reasoning paths in parallel during inference, each exploring different possibilities before generating a final answer.

April 27, 2026
changelog

Alibaba releases Qwen3.5 Plus with 1M token context window at $0.40 per million input tokens

Alibaba released an updated version of Qwen3.5 Plus on April 27, 2026, with a 1 million token context window. The multimodal model accepts text, image, and video input and is priced at $0.40 per million input tokens and $2.40 per million output tokens, with tiered pricing above 256K tokens.
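The quoted rates make per-request costs easy to estimate. A minimal sketch, using only the base rates stated above ($0.40 per million input tokens, $2.40 per million output tokens); the tiered surcharge above 256K tokens is not modeled because its rates aren't given here, and the function name is illustrative:

```python
def qwen35_plus_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the base (sub-256K-context)
    Qwen3.5 Plus rates: $0.40/M input tokens, $2.40/M output tokens."""
    return input_tokens / 1e6 * 0.40 + output_tokens / 1e6 * 2.40

# e.g. a 100K-token prompt with a 2K-token reply:
cost = qwen35_plus_cost(100_000, 2_000)  # 0.04 + 0.0048 = 0.0448 USD
```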

April 22, 2026
model release

Alibaba Qwen releases 27B parameter model that claims to match 397B performance on coding tasks

Alibaba Qwen released Qwen3.6-27B, a 27-billion-parameter dense model that Alibaba claims delivers flagship-level coding performance, surpassing its previous 397B MoE model across major coding benchmarks. The full model weighs 55.6GB, compared with 807GB for its predecessor.

model release

Alibaba Qwen releases 27B parameter model with 262K context window, claims 77.2% on SWE-bench Verified

Alibaba Qwen released Qwen3.6-27B, a 27-billion-parameter model with a 262,144-token context window extensible to 1,010,000 tokens. Alibaba reports 77.2% on SWE-bench Verified and 53.5% on SWE-bench Pro, with open weights available on Hugging Face.

April 16, 2026
benchmark

Qwen3.6-35B-A3B outperforms Claude Opus 4.7 on SVG generation test

In an informal SVG generation benchmark, Alibaba's Qwen3.6-35B-A3B model running locally via a 20.9GB quantized version outperformed Anthropic's newly released Claude Opus 4.7. The test, which asked models to generate SVG illustrations of pelicans and flamingos on bicycles, showed the smaller local model producing more accurate bicycle frames and more creative outputs.

April 8, 2026
model release

Alibaba's Qwen3.6 Plus reaches 78.8 on SWE-bench with 1M context window

Alibaba released Qwen3.6 Plus on April 2, 2026, featuring a 1 million token context window at $0.50 per million input tokens and $3 per million output tokens. The model combines linear attention with sparse mixture-of-experts routing to achieve a 78.8 score on SWE-bench Verified, with significant improvements in agentic coding, front-end development, and reasoning tasks.

April 3, 2026
analysis+1

Gemma 4 success hinges on tooling and fine-tuning ease, not benchmark scores

Google's Gemma 4 release marks a shift in open model strategy with Apache 2.0 licensing and competitive benchmarks, but real success depends on factors rarely measured: tooling stability, fine-tuning ease, and ecosystem adoption. The open model landscape is now crowded with alternatives like Qwen 3.5, Nemotron 3, and others—a maturation that changes what separates winners from the field.

April 2, 2026
model release

Alibaba releases Qwen3.6-Plus with 1M token context, claims performance near Claude 4.5 Opus

Alibaba has released Qwen3.6-Plus, its third proprietary AI model in a matter of days, featuring a 1 million token context window available via the Alibaba Cloud Model Studio API. Alibaba claims improved agentic coding capabilities, and its own benchmarks show the model outperforming Anthropic's Claude 4.5 Opus on some tasks, though it still trails Claude 4.6 Opus, released in December 2025.

model release

Alibaba releases Qwen 3.6 Plus with 1M context window, free tier now available

Alibaba's Qwen division released Qwen 3.6 Plus on April 2, 2026, offering free access to a model with a 1,000,000 token context window. The model combines linear attention with sparse mixture-of-experts routing and achieves a 78.8 score on SWE-bench Verified for software engineering tasks.

March 31, 2026
model release+1

Alibaba's Qwen3.5-Omni learns to write code from speech and video without explicit training

Alibaba has released Qwen3.5-Omni, an omnimodal model handling text, images, audio, and video with a 256,000-token context window. The model reportedly outperforms Google's Gemini 3.1 Pro on audio tasks and supports speech recognition in 74 languages, a 6x increase over its predecessor. An unexpected emergent capability: writing working code from spoken instructions and video input, something the team says it did not explicitly train for.

March 30, 2026
model release

Alibaba releases Qwen 3.6 Plus Preview with 1M token context, free via OpenRouter

Alibaba's Qwen division has released Qwen 3.6 Plus Preview, a free multimodal model available via OpenRouter with a 1,000,000 token context window. The model claims stronger reasoning and more reliable agentic behavior compared to the 3.5 series, with particular strength in coding and complex problem-solving tasks.

March 25, 2026
product update · Amazon Web Services

Amazon Bedrock adds reinforcement fine-tuning with OpenAI-compatible APIs

Amazon Bedrock now enables reinforcement fine-tuning (RFT) across multiple model families including Amazon Nova, open-weight models like OpenAI's GPT-OSS 20B, and Qwen 3 32B. The service automates the end-to-end customization workflow using GRPO optimization, allowing models to learn from feedback on multiple responses rather than static training datasets, with support for OpenAI-compatible APIs.
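The distinctive step in GRPO is normalizing each sampled response's reward against its own group rather than against a learned value baseline. Bedrock's internal implementation isn't public; this is a minimal sketch of that group-relative advantage computation, with `grpo_advantages` an illustrative name:

```python
from statistics import mean, pstdev

def grpo_advantages(group_rewards, eps=1e-8):
    """For several responses sampled from one prompt, convert each raw
    reward into a group-relative advantage: (reward - group mean) divided
    by the group standard deviation (eps guards against zero variance)."""
    mu = mean(group_rewards)
    sigma = pstdev(group_rewards)
    return [(r - mu) / (sigma + eps) for r in group_rewards]

# Four sampled responses to one prompt, scored by a grader:
advs = grpo_advantages([1.0, 0.0, 0.5, 0.5])
# Advantages sum to ~0: above-average responses are reinforced,
# below-average ones are penalized, with no separate critic model.
```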

March 17, 2026
product update

Alibaba consolidates AI under new "Token Hub" unit led by CEO Eddie Wu

Alibaba has consolidated its AI operations into a new business unit called "Alibaba Token Hub" (ATH), reporting directly to CEO Eddie Wu. The restructuring merges the Qwen research team, consumer apps, DingTalk communication platform, and Quark-branded devices to accelerate collaboration and monetization across the company.

March 2, 2026
model release

Alibaba releases Qwen3.5-2B, a 2B-parameter multimodal model for image and text tasks

Alibaba has released Qwen3.5-2B, a 2-billion-parameter multimodal model capable of processing both images and text. The model is available on Hugging Face under the Apache 2.0 license and supports image-text-to-text tasks.

model release

Alibaba releases Qwen3.5-0.8B, a compact multimodal model for edge deployment

Alibaba's Qwen team has released Qwen3.5-0.8B, an 800-million-parameter multimodal model designed for resource-constrained environments. The model handles image-text-to-text tasks and is distributed under Apache 2.0 licensing, making it freely usable for commercial applications.

model release

Alibaba releases Qwen3.5-4B, a 4B multimodal model for vision and text tasks

Alibaba's Qwen team has released Qwen3.5-4B, a 4 billion parameter multimodal model capable of processing both images and text. The model is available on Hugging Face under an Apache 2.0 license, making it freely available for commercial and research use.

model release

Alibaba releases Qwen3.5-9B, a multimodal 9B parameter model

Alibaba has released Qwen3.5-9B, a 9-billion parameter multimodal language model capable of processing both images and text. The model is available under Apache 2.0 license on Hugging Face with transformer-compatible architecture.

March 1, 2026
model release

Alibaba releases Qwen3.5-35B-A3B-FP8, a quantized multimodal model for efficient deployment

Alibaba's Qwen team released Qwen3.5-35B-A3B-FP8 on Hugging Face, a quantized version of their 35-billion parameter multimodal model. The FP8 quantization reduces model size and memory requirements while maintaining the base model's image-text-to-text capabilities. The model is compatible with standard Transformers endpoints and Azure deployment.

February 26, 2026
model release

Alibaba releases Qwen3.5-35B-A3B-Base, a 35B multimodal base model with Apache 2.0 license

Alibaba's Qwen team has released Qwen3.5-35B-A3B-Base, a 35-billion parameter multimodal model supporting image-text-to-text tasks. The model is available under the Apache 2.0 license and compatible with major inference endpoints including Azure deployment.

February 24, 2026
model release

Alibaba releases Qwen3.5-27B, a 27B multimodal model with Apache 2.0 license

Alibaba Qwen has released Qwen3.5-27B, a 27-billion parameter model capable of processing both images and text. The model is available under an Apache 2.0 open license and is compatible with standard transformer endpoints.

model release

Alibaba releases Qwen3.5-35B-A3B, a 35B multimodal model with Apache 2.0 license

Alibaba has released Qwen3.5-35B-A3B, a 35-billion parameter multimodal model capable of processing images and text. The model is published under an Apache 2.0 license and available on Hugging Face with Transformers and SafeTensors format support.

model release

LocoreMind releases LocoOperator-4B, a 4B parameter agent model based on Qwen3

LocoreMind has released LocoOperator-4B, a 4 billion parameter text generation model fine-tuned from Qwen/Qwen3-4B-Instruct-2507. The model is optimized for agent workflows and tool-calling capabilities and is available under an MIT license.