
Compare models

Pick up to 4 models. Specs render side by side. Share the URL; it's stateless.
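Stateless here means the selection lives entirely in the URL, so a shared link reproduces the same comparison with no server-side session. A minimal sketch of that pattern, assuming selections are carried in a hypothetical `models` query parameter (the parameter name and the model slugs are illustrative, not confirmed from the page):

```ts
// Sketch: round-trip a model selection through the query string so the
// URL alone reproduces the page state. `models` and the slug format are
// assumptions for illustration.

const MAX_MODELS = 4;

function encodeSelection(baseUrl: string, slugs: string[]): string {
  const url = new URL(baseUrl);
  // Cap at 4 models, mirroring the page's limit.
  url.searchParams.set("models", slugs.slice(0, MAX_MODELS).join(","));
  return url.toString();
}

function decodeSelection(href: string): string[] {
  const raw = new URL(href).searchParams.get("models") ?? "";
  return raw.split(",").filter(Boolean).slice(0, MAX_MODELS);
}

// Example: a shareable, stateless compare link that decodes back
// to the same four slugs.
const link = encodeSelection("https://gpt.buzz/compare", [
  "mistral-large-2",
  "deepseek-v4-flash",
  "deepseek-v4-pro",
  "deepseek-r1",
]);
console.log(decodeSelection(link));
// [ 'mistral-large-2', 'deepseek-v4-flash', 'deepseek-v4-pro', 'deepseek-r1' ]
```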

| Spec | Mistral Large 2 | DeepSeek-V4-Flash | DeepSeek-V4-Pro | DeepSeek-R1 |
| --- | --- | --- | --- | --- |
| Vendor | Mistral | DeepSeek | DeepSeek | DeepSeek |
| Family | Mistral | DeepSeek | DeepSeek | DeepSeek |
| Release date | 2024-07-24 | 2026-04-22 | 2026-04-22 | 2025-01-20 |
| Context window | 128,000 tokens | 1,000,000 tokens | 1,000,000 tokens | 128,000 tokens |
| Parameters | 123B | 284B (13B active) | 1.6T (49B active) | 671B |
| Modality | text | text | text | text |
| License | Mistral Research License | MIT | MIT | MIT |
| Source | open weights | open weights | open weights | open weights |

Model notes:

- DeepSeek-V4-Flash: Smaller, faster sibling to DeepSeek-V4-Pro. Same 1M-token context window with a much lighter 284B / 13B-active MoE.
- DeepSeek-V4-Pro: DeepSeek's flagship open-weight MoE. 1.6T parameters with 49B activated, a 1M-token context, and a hybrid attention scheme (CSA + HCA) that delivers long-context inference at ~27% of V3.2's FLOPs.
- DeepSeek-R1: Reasoning-focused open-weight model.
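One way to read the MoE figures in the Parameters row: the "active" count is the subset of weights actually used per token, so DeepSeek-V4-Flash touches about 13B / 284B ≈ 4.6% of its weights per token and DeepSeek-V4-Pro about 49B / 1.6T ≈ 3.1%. A small sketch of that arithmetic (the helper is illustrative, not part of the page):

```ts
// Sketch: active-parameter fraction for the MoE models above.
// Parameter counts come straight from the table's Parameters row.

function activeFraction(totalParams: number, activeParams: number): number {
  return activeParams / totalParams;
}

const models = [
  { name: "DeepSeek-V4-Flash", total: 284e9, active: 13e9 },
  { name: "DeepSeek-V4-Pro", total: 1.6e12, active: 49e9 },
];

for (const m of models) {
  const pct = (activeFraction(m.total, m.active) * 100).toFixed(1);
  console.log(`${m.name}: ${pct}% of weights active per token`);
}
// DeepSeek-V4-Flash: 4.6% of weights active per token
// DeepSeek-V4-Pro: 3.1% of weights active per token
```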
