
DeepSeek-V3.1 vs DeepSeek-V4-Pro

Direct spec comparison of DeepSeek-V3.1 (from DeepSeek) and DeepSeek-V4-Pro (from DeepSeek). Want a 3- or 4-way comparison? Open the multi-model tool →

 

                  DeepSeek-V3.1        DeepSeek-V4-Pro
Vendor            DeepSeek             DeepSeek
Family            DeepSeek             DeepSeek
Release date      2025-08-21           2026-04-22
Context window    128,000 tokens       1,000,000 tokens
Parameters        671B                 1.6T (49B active)
Modality          text                 text
License           MIT                  MIT
Source            open weights         open weights

Description
  DeepSeek-V3.1: Large MoE open-weight model. Predecessor to DeepSeek-V4.
  DeepSeek-V4-Pro: DeepSeek's flagship open-weight MoE. 1.6T parameters with 49B activated, 1M-token context, and a hybrid attention scheme (CSA + HCA) that delivers long-context inference at ~27% of V3.2's FLOPs.
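For a quick sense of what these figures imply, here is a small back-of-the-envelope Python sketch computing the MoE activation ratio and context-window growth straight from the table above. The variable names are ours and this is an illustrative calculation only, not output from the comparison tool.

    # Figures taken directly from the spec table above.
    V31_PARAMS = 671e9          # DeepSeek-V3.1: 671B total parameters, as listed
    V4_TOTAL_PARAMS = 1.6e12    # DeepSeek-V4-Pro: 1.6T total MoE parameters
    V4_ACTIVE_PARAMS = 49e9     # ...of which 49B are activated per token

    V31_CONTEXT = 128_000       # tokens
    V4_CONTEXT = 1_000_000      # tokens

    # Fraction of V4-Pro's weights active on any given token (MoE sparsity).
    active_fraction = V4_ACTIVE_PARAMS / V4_TOTAL_PARAMS
    print(f"V4-Pro active fraction: {active_fraction:.1%}")    # ~3.1%

    # Despite ~2.4x the total parameter count, V4-Pro activates far fewer
    # weights per token than V3.1's listed total.
    print(f"Active params vs V3.1 total: {V4_ACTIVE_PARAMS / V31_PARAMS:.2f}x")  # ~0.07x

    # Context window growth from V3.1 to V4-Pro.
    print(f"Context growth: {V4_CONTEXT / V31_CONTEXT:.1f}x")  # ~7.8x

In short, V4-Pro's headline 1.6T is a sparse total: only about 3% of the weights fire per token, which is how a much larger model can pair with the efficiency claims quoted in its description.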
More like this

Looking for a different head-to-head? Build your own comparison on the multi-model tool.

Or see all DeepSeek-V3.1 details / DeepSeek-V4-Pro details.