Of 981 PCs scanned via anonymous compatibility checks, none can run DeepSeek V3.2 fully on GPU; 191 can keep at least some of the work on GPU.
General-purpose local model brief
Quantization tip: Benchmark at least two quantizations and validate with a task-specific eval set before production use.
New to local models? Smaller quantization variants are easier to run, while larger ones can improve quality at the cost of more memory.
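The benchmarking tip above can be sketched as a small comparison loop: score each quantization variant on the same task-specific eval set and compare. The models below are stand-in callables with made-up eval data, purely illustrative; swap in your real local runtime (e.g. a llama.cpp binding) and your own eval examples.

```python
def accuracy(model, eval_set):
    """Fraction of eval examples the model answers exactly."""
    correct = sum(1 for prompt, expected in eval_set if model(prompt) == expected)
    return correct / len(eval_set)

# Stand-ins: a higher-precision variant that answers every case and a
# smaller quantization that misses one (hypothetical data, not real scores).
eval_set = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]
q5_model = dict(eval_set).get
q4_model = lambda p: {"2+2": "4", "capital of France": "Paris"}.get(p, "?")

for name, model in [("Q5_K_M", q5_model), ("Q4_K_M", q4_model)]:
    print(name, accuracy(model, eval_set))
```

The point is only the shape of the workflow: identical inputs, identical scoring, two (or more) quantizations, then pick the smallest variant whose score clears your task's bar.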
| Quantization | File Size | Min VRAM | Recommended VRAM | Min RAM | Context |
|---|---|---|---|---|---|
| Q4_K_M | 342.5 GB | 393.9 GB | 445.3 GB | 514 GB | 8K / 8K |
| Q5_K_M | 428.1 GB | 492.3 GB | 556.5 GB | 643 GB | 8K / 8K |
| Q8_0 | 685 GB | 787.7 GB | 890.5 GB | 1028 GB | 8K / 8K |
| FP16 | 1370 GB | 1575.5 GB | 1781 GB | 2055 GB | 8K / 8K |
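The memory figures in the table appear to follow simple multipliers on the file size: min VRAM ≈ 1.15×, recommended VRAM ≈ 1.30×, and min system RAM ≈ 1.5× rounded up. These ratios are inferred from the listed numbers, not an official sizing formula, so treat the sketch below as a rough estimator only.

```python
import math

def memory_estimates(file_size_gb: float) -> dict:
    """Rough memory requirements in GB for a given model file size,
    using multipliers inferred from the table (an assumption, not a spec)."""
    return {
        "min_vram_gb": file_size_gb * 1.15,        # headroom over raw weights
        "recommended_vram_gb": file_size_gb * 1.30, # extra room for KV cache etc.
        "min_ram_gb": math.ceil(file_size_gb * 1.50),
    }

# Q4_K_M at 342.5 GB gives ~393.9 / ~445.3 / 514 GB, matching the table row.
print(memory_estimates(342.5))
```

The same estimator reproduces the other rows (e.g. Q5_K_M at 428.1 GB gives a 643 GB RAM minimum), which is a useful sanity check when sizing hardware for a variant not listed here.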