
Tencent Releases Hy3 Preview MoE Model with 262K Context and Three Reasoning Modes

TL;DR

Tencent has released Hy3 Preview, a Mixture-of-Experts model offering a 262,144-token context window and three configurable reasoning modes (disabled, low, and high) for production agentic workflows. The model is available for free through OpenRouter.


Tencent has released Hy3 Preview, a Mixture-of-Experts (MoE) model designed specifically for agentic workflows and production deployment, according to the company. The model features a 262,144-token context window and is available for free through OpenRouter as of April 22, 2026.

Key Specifications

  • Context Window: 262,144 tokens
  • Pricing: $0 per 1M input tokens, $0 per 1M output tokens
  • Architecture: Mixture-of-Experts (MoE)
  • Reasoning Modes: Three configurable levels (disabled, low, high)
  • Release Date: April 22, 2026
  • Availability: OpenRouter platform

Configurable Reasoning System

The defining feature of Hy3 Preview is its three-tier reasoning system. Users can select between disabled, low, and high reasoning modes depending on task requirements. According to Tencent, this allows the model to balance processing speed against analytical depth for different use cases.
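A minimal sketch of how the three modes might be selected through OpenRouter's `reasoning` request parameter. The model slug `tencent/hy3-preview` is a hypothetical placeholder (the article does not give the exact identifier), and the mapping of the "disabled/low/high" modes onto OpenRouter's `enabled`/`effort` fields is an assumption:

```python
import json

# Hypothetical slug -- the article does not state Hy3 Preview's
# exact OpenRouter model identifier.
MODEL = "tencent/hy3-preview"

def build_request(prompt: str, mode: str) -> dict:
    """Build an OpenRouter chat-completions payload for one of the
    three reasoning modes described in the article (assumed mapping)."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    if mode == "disabled":
        payload["reasoning"] = {"enabled": False}
    elif mode in ("low", "high"):
        payload["reasoning"] = {"effort": mode}
    else:
        raise ValueError(f"unknown reasoning mode: {mode}")
    return payload

# One payload per mode; in practice these would be POSTed to
# https://openrouter.ai/api/v1/chat/completions with an API key.
for mode in ("disabled", "low", "high"):
    print(json.dumps(build_request("Plan a three-step refactor.", mode)))
```

Keeping mode selection in one helper makes it easy to route latency-sensitive requests to `disabled` or `low` while reserving `high` for tasks that warrant deeper analysis.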

The model supports OpenRouter's reasoning parameter system, which exposes step-by-step thinking through a reasoning_details array in API responses. When continuing conversations, applications must preserve the complete reasoning details to maintain reasoning continuity across turns.
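The continuity requirement can be sketched as follows, assuming an OpenAI-style response shape in which the assistant message carries a `reasoning_details` array (the mock response below is illustrative, not real API output):

```python
def append_assistant_turn(messages: list, response: dict) -> list:
    """Copy the assistant message *including* its reasoning_details
    back into the conversation history, preserving reasoning continuity."""
    msg = response["choices"][0]["message"]
    turn = {"role": "assistant", "content": msg.get("content", "")}
    if "reasoning_details" in msg:
        # Dropping this field would break reasoning continuity
        # on the next turn.
        turn["reasoning_details"] = msg["reasoning_details"]
    messages.append(turn)
    return messages

# Example with a mocked response body:
history = [{"role": "user", "content": "Step 1: list the files."}]
mock_response = {
    "choices": [{
        "message": {
            "content": "Here are the files...",
            "reasoning_details": [{"type": "reasoning.text", "text": "..."}],
        }
    }]
}
history = append_assistant_turn(history, mock_response)
print(len(history))  # user turn plus assistant turn
```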

Production Focus

Tencent positions Hy3 Preview as optimized for multi-step, real-world workflows rather than benchmark performance. The company claims strong code generation capabilities and reliable performance in agentic scenarios where models must plan and execute sequences of actions.

Parameter count, training data cutoff date, and specific benchmark scores have not been disclosed.

Availability

The model is accessible through OpenRouter's API at no cost. OpenRouter routes requests across multiple infrastructure providers with automatic fallback to maximize uptime. Usage statistics are not yet available due to the recent release.

What This Means

Hy3 Preview represents Tencent's entry into the reasoning-capable model segment, joining competitors like OpenAI's o1 and DeepSeek-R1. The free pricing and production-focused design suggest Tencent is prioritizing adoption and real-world testing over immediate monetization. The 262K context window positions it competitively for document processing and long-form agentic tasks, though the absence of benchmark data makes direct performance comparisons difficult. The configurable reasoning modes offer a practical approach to the speed-vs-accuracy tradeoff that production applications face when deploying reasoning models.
