gpt.buzz

Compare models

Pick up to 4 models and their specs render side-by-side. Share the URL; the page is stateless, so the selection lives entirely in the link.
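A stateless compare page typically serializes the whole selection into the URL's query string. Below is a minimal TypeScript sketch of that pattern; the `models` parameter name and the model slugs are illustrative assumptions, not gpt.buzz's actual scheme.

```typescript
const MAX_MODELS = 4;

// Build a shareable link: the entire selection rides in the query
// string, so no server-side session is needed.
// NOTE: the "models" parameter and the slugs are assumed for illustration.
function buildCompareUrl(base: string, slugs: string[]): string {
  const url = new URL(base);
  url.searchParams.set("models", slugs.slice(0, MAX_MODELS).join(","));
  return url.toString();
}

// Recover the selection from a pasted link.
function parseCompareUrl(href: string): string[] {
  const raw = new URL(href).searchParams.get("models") ?? "";
  return raw.split(",").filter(Boolean).slice(0, MAX_MODELS);
}

const link = buildCompareUrl("https://gpt.buzz/compare", [
  "deepseek-v3.1", "qwen3.6-27b", "deepseek-v4-pro", "mistral-large-2",
]);
// URLSearchParams percent-encodes the commas, but decoding round-trips:
console.log(parseCompareUrl(link));
// ["deepseek-v3.1", "qwen3.6-27b", "deepseek-v4-pro", "mistral-large-2"]
```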

Selected models: DeepSeek-V3.1 (DeepSeek), Qwen3.6-27B (Alibaba), DeepSeek-V4-Pro (DeepSeek), Mistral Large 2 (Mistral)

|                | DeepSeek-V3.1 | Qwen3.6-27B          | DeepSeek-V4-Pro    | Mistral Large 2          |
| -------------- | ------------- | -------------------- | ------------------ | ------------------------ |
| Vendor         | DeepSeek      | Alibaba              | DeepSeek           | Mistral                  |
| Family         | DeepSeek      | Qwen                 | DeepSeek           | Mistral                  |
| Release date   | 2025-08-21    | 2026-04-22           | 2026-04-22         | 2024-07-24               |
| Context window | 128,000 tokens | 262,144 tokens      | 1,000,000 tokens   | 128,000 tokens           |
| Parameters     | 671B          | 27B (dense)          | 1.6T (49B active)  | 123B                     |
| Modality       | text          | text, vision, video  | text               | text                     |
| License        | MIT           | Apache-2.0           | MIT                | Mistral Research License |
| Source         | open weights  | open weights         | open weights       | open weights             |
| Links          |               |                      |                    |                          |

Descriptions

DeepSeek-V3.1: Large MoE open-weight model. Predecessor to DeepSeek-V4.

Qwen3.6-27B: Alibaba's first dense open-weight model in the 3.6 family. Strong agentic-coding scores (77.2 on SWE-bench Verified; matches Claude 4.5 Opus on Terminal-Bench 2.0). Supports 201 languages and multimodal text/image/video input.

DeepSeek-V4-Pro: DeepSeek's flagship open-weight MoE. 1.6T parameters with 49B activated, a 1M-token context window, and a hybrid attention scheme (CSA + HCA) that delivers long-context inference at ~27% of V3.2's FLOPs.
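One thing the Parameters row makes easy to miss: for an MoE model, per-token compute tracks the active parameter count, not the total. The sketch below uses the common ~2 FLOPs per active parameter per token rule of thumb, an approximation that ignores attention cost and is not a vendor-published figure. DeepSeek-V3.1 is left out because the table lists only its 671B total, not its active count.

```typescript
// Rough per-token decode compute from the spec table, using the
// back-of-envelope rule: FLOPs/token ~= 2 x active parameters.
// Treat results as order-of-magnitude only.

interface ModelSpec {
  name: string;
  activeParams: number; // parameters touched per token; equals total for dense models
}

// Active counts taken from the table above.
const models: ModelSpec[] = [
  { name: "Qwen3.6-27B (dense)",     activeParams: 27e9 },
  { name: "DeepSeek-V4-Pro (MoE)",   activeParams: 49e9 },
  { name: "Mistral Large 2 (dense)", activeParams: 123e9 },
];

for (const m of models) {
  const flopsPerToken = 2 * m.activeParams;
  console.log(`${m.name}: ~${(flopsPerToken / 1e9).toFixed(0)} GFLOPs/token`);
}
// → Qwen3.6-27B (dense): ~54 GFLOPs/token
// → DeepSeek-V4-Pro (MoE): ~98 GFLOPs/token  (despite 1.6T total parameters)
// → Mistral Large 2 (dense): ~246 GFLOPs/token
```

By this yardstick, DeepSeek-V4-Pro decodes more cheaply per token than the far smaller dense Mistral Large 2, which is exactly the trade MoE designs aim for.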
