Compare models

Pick up to 4 models. Specs render side-by-side. Share the URL; it's stateless.
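"Stateless" here means the comparison state lives entirely in the link: there is no server-side session to recreate. A minimal sketch of how such a page might round-trip a selection through the query string, assuming a hypothetical `models` parameter and slug-style IDs (neither is confirmed by the page):

```ts
// Sketch only: carry the model selection in the URL itself, so sharing
// the link shares the full comparison. The "models" parameter name and
// the slug IDs below are assumptions, not gpt.buzz's actual scheme.
const MAX_MODELS = 4;

function encodeSelection(ids: string[]): string {
  const params = new URLSearchParams();
  params.set("models", ids.slice(0, MAX_MODELS).join(","));
  return `/compare?${params.toString()}`;
}

function decodeSelection(url: string): string[] {
  const params = new URL(url, "https://gpt.buzz").searchParams;
  return (params.get("models") ?? "")
    .split(",")
    .filter(Boolean)
    .slice(0, MAX_MODELS);
}

// A shareable link for the four models on this page:
console.log(encodeSelection([
  "mistral-large-2", "gpt-4.1", "deepseek-v4-pro", "llama-4-scout",
]));
// -> /compare?models=mistral-large-2%2Cgpt-4.1%2Cdeepseek-v4-pro%2Cllama-4-scout
```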

| Spec | Mistral Large 2 | GPT-4.1 | DeepSeek-V4-Pro | Llama 4 Scout |
|---|---|---|---|---|
| Vendor | Mistral | OpenAI | DeepSeek | Meta |
| Family | Mistral | GPT | DeepSeek | Llama |
| Release date | 2024-07-24 | 2025-04-14 | 2026-04-22 | 2025-04-05 |
| Context window | 128,000 tokens | 1,000,000 tokens | 1,000,000 tokens | 10,000,000 tokens |
| Parameters | 123B | | 1.6T (49B active) | 109B |
| Modality | text | text, vision | text | text, vision |
| License | Mistral Research License | proprietary | MIT | Llama 4 Community License |
| Source | open weights | proprietary | open weights | open weights |

Description (DeepSeek-V4-Pro): DeepSeek's flagship open-weight MoE. 1.6T parameters with 49B activated, a 1M-token context window, and a hybrid attention scheme (CSA + HCA) that delivers long-context inference at ~27% of V3.2's FLOPs.
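The parameter figures are worth unpacking: as a mixture-of-experts model, DeepSeek-V4-Pro routes each token through only a small fraction of its weights. Illustrative arithmetic only, using the numbers quoted above:

```ts
// Illustrative arithmetic from the spec sheet's figures:
// 1.6T total parameters, 49B activated per token.
const totalParams = 1.6e12;
const activeParams = 49e9;
const activeFraction = activeParams / totalParams; // 0.030625
console.log(`${(activeFraction * 100).toFixed(1)}% of weights active per token`);
// -> "3.1% of weights active per token"
```

Note that the quoted ~27%-of-V3.2-FLOPs figure is a separate claim about the attention scheme; it cannot be derived from the parameter counts alone.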