78% of 788 scanned PCs run Gemma 4 E2B fully on GPU, and 619 keep at least some work on the GPU. Based on anonymous compatibility checks.
General-purpose local model brief
Quantization tip: Benchmark at least two quantizations and validate with a task-specific eval set before production use.
New to local models? Smaller quantization variants are easier to run, while larger ones can improve quality at the cost of more memory.
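The benchmarking tip above can be sketched as a small comparison loop: score each quantization on a task-specific eval set and compare accuracy. This is a minimal skeleton; `run_model` is a hypothetical stand-in that you would replace with a real call into your inference backend (llama.cpp, Ollama, etc.), and the two-item eval set is purely illustrative.

```python
# Skeleton for comparing quantizations on a task-specific eval set.
# `run_model` is a placeholder, not a real API: wire it to your backend.

EVAL_SET = [
    ("What is 2 + 2?", "4"),
    ("Capital of France?", "Paris"),
]

def run_model(quant: str, prompt: str) -> str:
    # Placeholder: replace with an actual call to the quantized model.
    return {"What is 2 + 2?": "4", "Capital of France?": "Paris"}[prompt]

def accuracy(quant: str) -> float:
    # Count eval items where the expected answer appears in the output.
    hits = sum(expected in run_model(quant, q) for q, expected in EVAL_SET)
    return hits / len(EVAL_SET)

scores = {q: accuracy(q) for q in ("Q4_K_M", "Q8_0")}
```

A real eval set should reflect your production task (extraction, summarization, coding) rather than generic trivia, since quantization loss is task-dependent.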
| Quantization | File Size | Min VRAM | Recommended VRAM | Min RAM | Context |
|---|---|---|---|---|---|
| Q4_K_M | 2.7 GB | 3.5 GB | 4 GB | 4 GB | 8K / 128K |
| Q8_0 | 5.3 GB | 6 GB | 8 GB | 8 GB | 8K / 128K |
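One way to ballpark figures like those in the table: the weights file must fit in VRAM, plus compute buffers and the KV cache for the active context. The sketch below uses an assumed 20% buffer overhead and a flat context allowance; these factors are illustrative heuristics and do not exactly reproduce the table's published minimums.

```python
# Rough VRAM estimator for a GGUF quantization. The 1.2 overhead factor
# and the 0.4 GB context allowance are illustrative assumptions, not
# measured values from this model.

def estimate_min_vram_gb(file_size_gb: float, context_overhead_gb: float = 0.4) -> float:
    """Weights + ~20% compute-buffer overhead + a flat KV-cache allowance."""
    return round(file_size_gb * 1.2 + context_overhead_gb, 1)

for quant, size_gb in [("Q4_K_M", 2.7), ("Q8_0", 5.3)]:
    print(f"{quant}: ~{estimate_min_vram_gb(size_gb)} GB min VRAM")
```

Longer context windows grow the KV cache, so running at 128K needs substantially more headroom than the flat allowance used here.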
These GPUs meet the recommended 4 GB VRAM for the Q4_K_M quantization. Estimated speeds are approximate and assume full GPU offloading.
- **Budget Pick:** NVIDIA GeForce GTX 1650 Super (4 GB VRAM · ~56.9 tok/s). Lowest cost that meets the recommended VRAM.
- **Fastest Pick:** NVIDIA GeForce RTX 5090 (32 GB VRAM · ~531 tok/s). Highest estimated throughput.
- **Best Value:** NVIDIA GeForce RTX 3080 10GB (10 GB VRAM · ~225.2 tok/s). Best speed per dollar of VRAM.

Need a detailed comparison? See all GPU rankings for Gemma 4 E2B.
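The value ranking above can be sanity-checked from the listed figures. Since prices are not given here, the sketch below uses tokens per second per GB of VRAM as a rough proxy for value; on that proxy the RTX 3080 10GB also comes out ahead.

```python
# Tokens/s per GB of VRAM as a rough value proxy for the picks above.
# A true per-dollar comparison needs current prices, which aren't listed.

picks = {
    "GTX 1650 Super": (4, 56.9),    # (VRAM GB, est. tok/s)
    "RTX 5090": (32, 531.0),
    "RTX 3080 10GB": (10, 225.2),
}

def toks_per_gb(vram_gb: float, toks_per_s: float) -> float:
    return toks_per_s / vram_gb

best = max(picks, key=lambda gpu: toks_per_gb(*picks[gpu]))
print(best)
```

Throughput does not scale linearly with VRAM, which is why a mid-range card can lead on this kind of efficiency metric while the flagship wins on raw speed.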