Trinity-Large-Thinking

Status: active
Context window: 512K tokens

Version History

1.0 (major)

Trinity-Large-Thinking released as agentic-optimized variant of Trinity-Large family. Post-trained with extended chain-of-thought reasoning and agentic RL for tool calling and multi-step agent workflows. Available on OpenRouter, vLLM, Hugging Face, and chat.arcee.ai.

Benchmark Scores

LiveCodeBench: 98.2%
MMLU-Pro: 83.4%

Full leaderboard →

Coverage

Model release · Arcee AI

Arcee AI releases Trinity-Large-Thinking: 398B sparse MoE model with chain-of-thought reasoning

Arcee AI released Trinity-Large-Thinking, a 398B-parameter sparse Mixture-of-Experts model with approximately 13B active parameters per token, post-trained with extended chain-of-thought reasoning for agentic workflows. The model achieves 94.7% on τ²-Bench, 91.9% on PinchBench, and 98.2% on LiveCodeBench, generating explicit reasoning traces in <think>...</think> blocks before producing responses.
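Because the model emits its reasoning inside explicit `<think>...</think>` blocks before the answer, downstream code typically needs to separate the trace from the final response. A minimal sketch of that parsing step, assuming the tag format described above (the helper name and logic are illustrative, not Arcee's own tooling):

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Split a raw completion into (reasoning trace, final answer).

    Illustrative helper: the <think>...</think> tag format comes from
    the release notes; this parsing logic is a sketch, not Arcee code.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if not match:
        # No reasoning block present; treat the whole output as the answer.
        return "", raw.strip()
    reasoning = match.group(1).strip()
    answer = raw[match.end():].strip()
    return reasoning, answer

raw = "<think>Need 2+2. That's 4.</think>The answer is 4."
reasoning, answer = split_reasoning(raw)
print(answer)  # → The answer is 4.
```

Stripping the trace this way keeps conversation history compact when the raw completion is fed back into a multi-turn agent loop.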
