gpt.buzz

Compare models

Pick up to 4 models. Specs render side-by-side. Share the URL; it's stateless.
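A minimal sketch of how such a stateless comparison URL could work, assuming (hypothetically) that the page lives at `/compare` and keys each model by a slug in a repeated `models` query parameter; none of these identifiers come from the page itself, only the "share the URL" behavior described above.

```typescript
// Sketch of a stateless, shareable selection: the entire state lives in the
// URL, so no server-side session is needed. Slugs and the /compare path are
// assumptions for illustration.

const MAX_MODELS = 4;

/** Serialize the current selection into a shareable URL. */
function selectionToUrl(base: string, slugs: string[]): string {
  const url = new URL(base);
  for (const slug of slugs.slice(0, MAX_MODELS)) {
    url.searchParams.append("models", slug);
  }
  return url.toString();
}

/** Recover the selection from a shared URL. */
function urlToSelection(href: string): string[] {
  return new URL(href).searchParams.getAll("models").slice(0, MAX_MODELS);
}

// Example: share a comparison of the four models selected below.
const shared = selectionToUrl("https://gpt.buzz/compare", [
  "deepseek-v4-pro",
  "qwen3-235b",
  "claude-4.6-sonnet",
  "mistral-large-2",
]);
console.log(shared);
// e.g. https://gpt.buzz/compare?models=deepseek-v4-pro&models=qwen3-235b&...
console.log(urlToSelection(shared)); // recovers the same four slugs
```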

Selected: DeepSeek-V4-Pro (DeepSeek) · Qwen3 235B (Alibaba) · Claude 4.6 Sonnet (Anthropic) · Mistral Large 2 (Mistral)

| Spec | DeepSeek-V4-Pro | Qwen3 235B | Claude 4.6 Sonnet | Mistral Large 2 |
|---|---|---|---|---|
| Vendor | DeepSeek | Alibaba | Anthropic | Mistral |
| Family | DeepSeek | Qwen | Claude | Mistral |
| Release date | 2026-04-22 | 2025-04-29 | | 2024-07-24 |
| Context window | 1,000,000 tokens | 128,000 tokens | 200,000 tokens | 128,000 tokens |
| Parameters | 1.6T (49B active) | 235B | | 123B |
| Modality | text | text | text, vision | text |
| License | MIT | Apache-2.0 | proprietary | Mistral Research License |
| Source | open weights | open weights | proprietary | open weights |

DeepSeek-V4-Pro: DeepSeek's flagship open-weight MoE. 1.6T parameters with 49B activated, a 1M-token context window, and a hybrid attention scheme (CSA + HCA) that delivers long-context inference at roughly 27% of V3.2's FLOPs.

Qwen3 235B: Predecessor to the Qwen3.6 family.
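As a back-of-envelope check on the MoE figures in the DeepSeek-V4-Pro description, the sketch below computes the activation ratio from the quoted parameter counts. Note the ~27% FLOPs figure is the page's separate claim about the hybrid attention scheme, not something derived from sparsity here.

```typescript
// With 49B of 1.6T parameters activated per token, roughly 3% of the model
// participates in each forward pass -- the usual reason an MoE of this size
// can serve inference at something like dense-~49B cost per token.

const totalParams = 1.6e12; // 1.6T total parameters
const activeParams = 49e9;  // 49B activated per token

const activationRatio = activeParams / totalParams;
console.log(`activation ratio: ${(activationRatio * 100).toFixed(1)}%`); // ~3.1%
```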
