Claude Opus 4
ANTHROPIC Developer Architecture Profile
Intelligence (ELO): 1248 (Chatbot Arena Verified)
Max Context: 200,000 tokens
API Cost / 1M: $90.00 (blended prompt + completion)
Model Capabilities
- Fictional
- Reasoning
- Agentic
At time of release, Claude Opus 4 was benchmarked as the world's best coding model, delivering sustained performance on complex, long-running tasks and agent workflows. It set new marks in software engineering, achieving leading results on SWE-bench (72.5%) and Terminal-bench (43.2%). Opus 4 supports extended agentic workflows, handling thousands of task steps continuously for hours without degradation.
Read more at the [blog post here](https://www.anthropic.com/news/claude-4)
Granular Pricing Matrix
Input Tokens (Prompt): $15.00 / 1M
Output Tokens (Completion): $75.00 / 1M
Pricing data via OpenRouter. Sync: 3/16/2026
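The $90.00 blended figure above is simply the sum of the two per-1M rates ($15.00 input + $75.00 output). As an illustration, a minimal sketch of per-request cost at the listed rates (the function name and example token counts are illustrative; verify rates against current Anthropic pricing):

```python
# Illustrative cost calculator using the per-token rates listed above:
# $15.00 per 1M input (prompt) tokens, $75.00 per 1M output (completion) tokens.

INPUT_RATE_PER_M = 15.00   # USD per 1M prompt tokens
OUTPUT_RATE_PER_M = 75.00  # USD per 1M completion tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the rates above."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion
print(f"{request_cost(10_000, 2_000):.2f}")  # 0.30
```

Note that output tokens cost 5x input tokens, so completion-heavy workloads (long generations, agentic loops) dominate spend even with large prompts.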
Evaluate Competitors
- Claude Opus 4 vs Meta: Llama 3.3 70B Instruct
- Claude Opus 4 vs OpenAI: GPT-4 Turbo
- Claude Opus 4 vs OpenAI: GPT-4 Turbo (older v1106)
- Claude Opus 4 vs Qwen: Qwen2.5 VL 72B Instruct
- Claude Opus 4 vs DeepSeek: DeepSeek V3.2 Speciale
- Claude Opus 4 vs DeepSeek: DeepSeek V3.2