Tongyi DeepResearch 30B A3B
Alibaba Developer Architecture Profile
Intelligence (ELO): 1150 (Chatbot Arena Verified)
Max Context: 131,072 tokens
API Cost / 1M: $0.54 (blended prompt + completion)
Model Capabilities
Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab, with 30 billion total parameters of which only 3 billion are activated per token. It is optimized for long-horizon, deep information-seeking tasks and delivers state-of-the-art performance on benchmarks such as Humanity's Last Exam, BrowseComp, BrowseComp-ZH, WebWalkerQA, GAIA, xbench-DeepSearch, and FRAMES, making it well suited for complex agentic search, reasoning, and multi-step problem-solving compared to prior models.
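The "30B total / 3B active" split comes from mixture-of-experts routing: for each token, a gating network selects a small top-k subset of experts, so only a fraction of the parameters run. The sketch below is a minimal, illustrative top-k router (the expert count, k, and logits are hypothetical and not taken from the model's actual configuration):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route_topk(gate_logits, k=2):
    """Pick the top-k experts for one token and renormalize their weights.

    Only these k experts' parameters would be activated for the token;
    the rest of the model stays idle, which is how a 30B-parameter model
    can cost roughly as much per token as a 3B dense one.
    """
    probs = softmax(gate_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return top, [probs[i] / norm for i in top]

# Hypothetical gate logits for 4 experts; experts 0 and 2 win.
experts, weights = route_topk([2.0, 0.1, 1.0, -1.0], k=2)
```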
The model includes a fully automated synthetic data pipeline for scalable pre-training, fine-tuning, and reinforcement learning. It uses large-scale continual pre-training on diverse agentic data to strengthen reasoning and keep its knowledge current. It also features end-to-end on-policy RL with a customized Group Relative Policy Optimization (GRPO), including token-level policy gradients and negative-sample filtering for stable training. The model supports a ReAct mode for evaluating core abilities and an IterResearch-based 'Heavy' mode for maximum performance through test-time scaling. It is well suited to advanced research agents, tool use, and heavy inference workflows.
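The ReAct mode mentioned above alternates model reasoning with tool calls: the model emits a Thought and an Action, the runtime executes the action and appends an Observation, and the loop repeats until a final answer appears. A minimal sketch of such a loop follows; the `fake_llm` stand-in, the `search` tool, and the Thought/Action/Observation text format are illustrative assumptions, not Tongyi DeepResearch's actual scaffolding:

```python
def fake_llm(history: str) -> str:
    """Stand-in for the model: scripted replies for demonstration only."""
    if "Observation: 42" in history:
        return "Final Answer: 42"
    return "Thought: I should look this up.\nAction: search[answer to everything]"

def search(query: str) -> str:
    """Toy tool; a real agent would call a web-search API here."""
    return "42"

TOOLS = {"search": search}

def react_loop(question: str, max_steps: int = 5):
    """Run Thought -> Action -> Observation turns until a final answer."""
    history = f"Question: {question}"
    for _ in range(max_steps):
        reply = fake_llm(history)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse the "Action: tool[argument]" line and execute the tool.
        action = next(l for l in reply.splitlines() if l.startswith("Action:"))
        tool, arg = action.removeprefix("Action:").strip().rstrip("]").split("[", 1)
        observation = TOOLS[tool](arg)
        history += f"\n{reply}\nObservation: {observation}"
    return None  # step budget exhausted without a final answer
```

The 'Heavy' mode can be thought of as scaling this pattern at test time: running deeper or parallel research rounds and synthesizing their results, rather than a single linear loop.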
Granular Pricing Matrix
Input Tokens (Prompt): $0.09 / 1M
Output Tokens (Completion): $0.45 / 1M
Pricing data via OpenRouter. Sync: 3/16/2026
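Given the per-1M-token rates above, the cost of a single request is a straightforward weighted sum. A small helper, using the listed rates as defaults (the function name and the example token counts are illustrative):

```python
def request_cost_usd(prompt_tokens: int, completion_tokens: int,
                     input_rate: float = 0.09,   # $/1M prompt tokens (listed rate)
                     output_rate: float = 0.45)  -> float:  # $/1M completion tokens
    """Cost in USD for one request at per-1M-token pricing."""
    return (prompt_tokens * input_rate + completion_tokens * output_rate) / 1_000_000

# Example: a long-context research query using 100k prompt tokens
# and 2k completion tokens.
cost = request_cost_usd(100_000, 2_000)
```

Note that output tokens cost 5x input tokens here, so agentic workflows that generate long reasoning traces are dominated by completion cost even when prompts are large.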
Evaluate Competitors
- Tongyi DeepResearch 30B A3B vs ByteDance Seed: Seed-2.0-Lite
- Tongyi DeepResearch 30B A3B vs Qwen: Qwen3.5-35B-A3B
- Tongyi DeepResearch 30B A3B vs Qwen: Qwen3.5-Flash
- Tongyi DeepResearch 30B A3B vs MiniMax: MiniMax M2.5 (free)
- Tongyi DeepResearch 30B A3B vs MiniMax: MiniMax M2.5
- Tongyi DeepResearch 30B A3B vs StepFun: Step 3.5 Flash (free)