

Social proof

79% of 1,431 scanned PCs run mxbai-embed-large fully on GPU, and 1,139 keep at least some work on the GPU. Based on anonymous compatibility checks.

  • Full GPU: 1,135
  • Partial GPU: 1
  • Hybrid CPU+GPU: 3
  • CPU only: 241
  • Can't run: 51


Hardware Requirements

Beginner tip: minimum values mean the model can start, while recommended values usually feel smoother during real use. VRAM is your GPU's dedicated memory; RAM is your system memory used as fallback. See the full glossary.

Quantization      File Size   Min VRAM   Recommended VRAM   Min RAM   Context
FP16 (Easiest)    0.67 GB     1 GB       2 GB               2 GB      512 / 512

Not sure your GPU has enough VRAM? Compare GPUs that can run mxbai-embed-large.
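The table's thresholds boil down to a simple fit check: the quantization's file size plus some working overhead (activations, context buffers) should stay at or below your GPU's dedicated VRAM. The sketch below is a rough heuristic under that assumption; the 0.3 GB overhead figure is illustrative, not a measured value, and this is not the site's actual detection logic.

```python
def fits_in_vram(file_size_gb: float, vram_gb: float, overhead_gb: float = 0.3) -> bool:
    """Rough heuristic: model weights plus a working-memory margin
    must fit in dedicated VRAM. The 0.3 GB overhead is an assumption."""
    return file_size_gb + overhead_gb <= vram_gb

# FP16 mxbai-embed-large weighs 0.67 GB; check against the table's thresholds.
print(fits_in_vram(0.67, 1.0))  # min VRAM: True
print(fits_in_vram(0.67, 2.0))  # recommended VRAM: True
```

With a real margin of error in mind, treat a borderline result as "partial GPU" territory rather than a guaranteed fit.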

Recommended GPUs for mxbai-embed-large

These GPUs meet the recommended 2 GB VRAM for the FP16 quantization. Estimated speeds are approximate and assume full GPU offloading.

Need a detailed comparison? See all GPU rankings for mxbai-embed-large.

Why choose mxbai-embed-large?

mxbai-embed-large is a general-purpose local embedding model, well suited to:

  • Pilot testing with your own tasks
  • Controlled local experiments

Quantization tip: Benchmark at least two quantizations and validate with a task-specific eval set before production use.
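One lightweight way to act on that tip is to embed the same evaluation texts under each quantization and compare the resulting vectors with cosine similarity; a score near 1.0 suggests the cheaper quantization preserves the embedding geometry for your data. This is a generic sketch with made-up example vectors, not tooling from this site.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings of the same sentence from two quantizations:
emb_fp16 = [0.12, -0.45, 0.33, 0.80]
emb_quant = [0.11, -0.44, 0.35, 0.78]
print(round(cosine_similarity(emb_fp16, emb_quant), 4))
```

Averaging this score over a task-specific eval set gives a quick signal before committing a quantization to production.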

Full Model Details · Best GPU for mxbai-embed-large · Check on RTX 4090 · mxbai-embed-large pros & cons · Setup Guides · Decision Wizard · Browse All Models