Tags: Transformers, GGUF, PyTorch, nvidia, nemotron-3, latent-moe, mtp, conversational

About

static quants of https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16

For a convenient overview and download list, visit our model page.

weighted/imatrix quants are available at https://huggingface.co/mradermacher/NVIDIA-Nemotron-3-Super-120B-A12B-BF16-i1-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
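As a concrete starting point, here is a minimal Python sketch using the huggingface_hub and llama-cpp-python packages. The part filenames below are guesses based on this repo's usual naming convention, not verified names; check the repository's actual file list before running.

```python
# A minimal sketch, not an official recipe: it assumes the huggingface_hub
# and llama-cpp-python packages, and the part filenames below are guesses
# based on this repo's usual naming -- check the actual file list first.
import shutil

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo = "mradermacher/NVIDIA-Nemotron-3-Super-120B-A12B-BF16-GGUF"

# Quants larger than ~50 GB are stored as plain byte splits (*.partXofY).
# Download each part, then concatenate them in order to restore the file.
parts = [
    "NVIDIA-Nemotron-3-Super-120B-A12B-BF16.Q4_K_S.gguf.part1of2",  # assumed name
    "NVIDIA-Nemotron-3-Super-120B-A12B-BF16.Q4_K_S.gguf.part2of2",  # assumed name
]
merged = "NVIDIA-Nemotron-3-Super-120B-A12B-BF16.Q4_K_S.gguf"
with open(merged, "wb") as out:
    for name in parts:
        with open(hf_hub_download(repo_id=repo, filename=name), "rb") as part:
            shutil.copyfileobj(part, out)

# Load the merged file with llama.cpp's Python bindings and run a prompt.
llm = Llama(model_path=merged, n_ctx=4096)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```

The same merged file works with the llama.cpp CLI tools; the concatenation step is only needed for quants that are split into parts.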

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants; a small memory-budget selection sketch follows the table.)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | Q2_K | 53.6 | |
| GGUF | Q3_K_S | 60.0 | |
| GGUF | Q3_K_M | 67.3 | lower quality |
| GGUF | IQ4_XS | 67.8 | |
| GGUF | Q3_K_L | 70.8 | |
| GGUF | Q4_K_S | 75.9 | fast, recommended |
| GGUF | Q4_K_M | 86.2 | fast, recommended |
| GGUF | Q5_K_S | 86.9 | |
| GGUF | Q5_K_M | 95.8 | |
| GGUF | Q6_K | 113.0 | very good quality |
| GGUF | Q8_0 | 128.6 | fast, best quality |
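Because the table is sorted by size, choosing a quant is mostly a question of available memory. A small illustrative sketch of that decision (the pick_quant helper and the 10% runtime-overhead factor are my own assumptions, not something shipped with this repo):

```python
# Hypothetical helper: pick the largest quant from the table above that
# fits a memory budget. The 10% headroom for KV cache and runtime
# overhead is a rough assumption, not a measured figure.
QUANTS = [  # (type, file size in GB), sorted ascending as in the table
    ("Q2_K", 53.6), ("Q3_K_S", 60.0), ("Q3_K_M", 67.3), ("IQ4_XS", 67.8),
    ("Q3_K_L", 70.8), ("Q4_K_S", 75.9), ("Q4_K_M", 86.2), ("Q5_K_S", 86.9),
    ("Q5_K_M", 95.8), ("Q6_K", 113.0), ("Q8_0", 128.6),
]

def pick_quant(budget_gb: float, headroom: float = 1.1) -> str | None:
    """Return the largest quant whose size, padded by `headroom`, fits the budget."""
    fitting = [name for name, size in QUANTS if size * headroom <= budget_gb]
    return fitting[-1] if fitting else None

print(pick_quant(96.0))  # -> 'Q5_K_S' (86.9 GB * 1.1 ~= 95.6 GB)
print(pick_quant(64.0))  # -> 'Q2_K'
```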

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[image: ikawrakow's quant quality comparison graph]

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to questions you might have, or to request quantization of another model.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

Model size: 121B params
Architecture: nemotron_h_moe