Qwen: Qwen3 235B A22B Thinking 2507
QWEN Developer Architecture Profile
Intelligence (ELO): 1120 (Chatbot Arena Verified)
Max Context: 262,144 tokens
API Cost / 1M Tokens: $0.71 (blended prompt + completion)
Model Capabilities
Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports up to 262,144 tokens of context. This "thinking-only" variant strengthens structured logical reasoning, mathematics, science, and long-form generation, with strong benchmark results across AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux. Its chat template opens the reasoning block automatically, so completions typically contain only the closing </think> tag, and the model is designed for long outputs (up to 81,920 tokens) in challenging domains.
The model is instruction-tuned and excels at step-by-step reasoning, tool use, agentic workflows, and multilingual tasks. This release represents the most capable open-source variant in the Qwen3-235B series, surpassing many closed models in structured reasoning use cases.
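Because completions from this variant carry a reasoning trace terminated by `</think>` before the final answer, client code usually needs to separate the two. Below is a minimal sketch of that post-processing step; the helper name `split_reasoning` and the sample string are illustrative, not part of any official SDK.

```python
# Sketch: splitting the reasoning trace from the final answer.
# Assumption: the chat template opens the reasoning block for the model,
# so a raw completion contains only the closing </think> tag.

def split_reasoning(completion: str) -> tuple[str, str]:
    """Return (reasoning, answer) parsed from a raw completion string."""
    marker = "</think>"
    if marker in completion:
        reasoning, answer = completion.split(marker, 1)
        return reasoning.strip(), answer.strip()
    # No marker found: treat the entire completion as the answer.
    return "", completion.strip()

raw = "First, factor the expression... </think> The answer is 42."
reasoning, answer = split_reasoning(raw)
```

Splitting on the first occurrence of the marker keeps the parse robust even if the answer itself happens to mention the tag.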
Granular Pricing Matrix
Input Tokens (Prompt): $0.11 / 1M
Output Tokens (Completion): $0.60 / 1M
Pricing data via OpenRouter. Sync: 3/16/2026
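Given the asymmetric rates above and this model's long "thinking" outputs, per-request cost is dominated by completion tokens. A minimal sketch of a cost estimator at the listed OpenRouter rates (the function name and the example token counts are illustrative):

```python
# Sketch: estimating request cost from the per-token rates listed above.
INPUT_RATE = 0.11 / 1_000_000   # USD per prompt token
OUTPUT_RATE = 0.60 / 1_000_000  # USD per completion token

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """USD cost of one request at the listed rates."""
    return prompt_tokens * INPUT_RATE + completion_tokens * OUTPUT_RATE

# Example: a long-context call with 100k prompt tokens and
# 20k thinking-plus-answer tokens.
cost = request_cost(100_000, 20_000)  # 0.011 + 0.012 = 0.023 USD
```

Note that the $0.71 "blended" figure is simply the sum of the two per-million rates, so it only matches real spend when prompt and completion volumes are equal.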
Evaluate Competitors
Qwen: Qwen3 235B A22B Thinking 2507 vs Z.ai: GLM 5 Turbo
Qwen: Qwen3 235B A22B Thinking 2507 vs Inception: Mercury 2
Qwen: Qwen3 235B A22B Thinking 2507 vs Qwen: Qwen3.5-27B
Qwen: Qwen3 235B A22B Thinking 2507 vs Qwen: Qwen3.5-122B-A10B
Qwen: Qwen3 235B A22B Thinking 2507 vs AionLabs: Aion-2.0
Qwen: Qwen3 235B A22B Thinking 2507 vs Qwen: Qwen3.5 Plus 2026-02-15