Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Pick any quantization from the Available Files table below.
llm = Llama.from_pretrained(
	repo_id="mudler/Qwopus-MoE-35B-A3B-APEX-GGUF",
	filename="Qwopus-MoE-35B-A3B-APEX-I-Balanced.gguf",
)

response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Explain mixture-of-experts models in one paragraph."},
	]
)
print(response["choices"][0]["message"]["content"])

⚡ Each donation = another big MoE quantized

I host 25+ free APEX MoE quantizations as independent research. My only local hardware is an NVIDIA DGX Spark (122 GB unified memory) — enough for ~30-50B-class MoEs, but bigger ones (200B+) require rented compute on H100/H200/Blackwell, typically $20-100 per quant.
If APEX quants are useful to you, your support directly funds those bigger runs.

🎉 Patreon (Monthly)  |  ☕ Buy Me a Coffee  |  ⭐ GitHub Sponsors

💚 Big thanks to Hugging Face for generously donating additional storage — much appreciated.

Qwopus-MoE-35B-A3B APEX GGUF

APEX (Adaptive Precision for EXpert Models) quantizations of Qwopus-MoE-35B-A3B.

Brought to you by the LocalAI team | APEX Project | Technical Report

Available Files

| File | Profile | Size | Best For |
|------|---------|------|----------|
| Qwopus-MoE-35B-A3B-APEX-I-Balanced.gguf | I-Balanced | TBD | Best overall quality/size ratio (with imatrix) |
| Qwopus-MoE-35B-A3B-APEX-I-Quality.gguf | I-Quality | TBD | Best quality/compression ratio (with imatrix) |
| Qwopus-MoE-35B-A3B-APEX-Quality.gguf | Quality | TBD | Best absolute quality |
| Qwopus-MoE-35B-A3B-APEX-Balanced.gguf | Balanced | TBD | Best overall quality/size ratio |
| Qwopus-MoE-35B-A3B-APEX-I-Compact.gguf | I-Compact | TBD | Consumer GPUs (with imatrix) |
| Qwopus-MoE-35B-A3B-APEX-Compact.gguf | Compact | TBD | Consumer GPUs |
| Qwopus-MoE-35B-A3B-APEX-I-Mini.gguf | I-Mini | TBD | Smallest viable |
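
To grab a single file without pulling the whole repository, huggingface_hub can download one quantization directly. A minimal sketch; pick any filename from the table above:

from huggingface_hub import hf_hub_download

# Downloads the file into the local Hugging Face cache and returns its path.
path = hf_hub_download(
	repo_id="mudler/Qwopus-MoE-35B-A3B-APEX-GGUF",
	filename="Qwopus-MoE-35B-A3B-APEX-I-Compact.gguf",
)
print(path)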

What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers get higher precision, while middle layers are compressed more aggressively. I-variants use diverse imatrix calibration (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).
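
As a rough illustration of the role-classification step (a hypothetical sketch, not the actual APEX code; the tensor-name patterns follow common llama.cpp GGUF conventions, and the quant-type choices are placeholders):

# Hypothetical sketch: classify GGUF tensors by role, as APEX does
# before assigning precision. The real rules live in the APEX scripts.

def classify_role(name: str) -> str:
	if "_exps" in name:      # routed experts, e.g. blk.12.ffn_gate_exps.weight
		return "routed_expert"
	if "shexp" in name:      # shared expert, active for every token
		return "shared_expert"
	if ".attn_" in name:     # attention projections
		return "attention"
	return "other"

# Each role gets its own precision budget. Routed experts dominate the
# parameter count, so they are compressed hardest (placeholder values):
BASE_QUANT = {
	"routed_expert": "Q3_K",
	"shared_expert": "Q6_K",
	"attention": "Q5_K",
	"other": "Q6_K",
}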

See the APEX project for full details, technical report, and scripts.

Architecture

  • Model: Qwopus-MoE-35B-A3B (qwen3_5_moe)
  • Layers: 40 (hybrid: linear attention + full attention every 4th layer)
  • Experts: 256 routed (8 active per token)
  • Total Parameters: ~35B
  • Active Parameters: ~3B per token
  • Origin: Claude Opus 4.6 QLoRA distill of Qwen3.5-35B-A3B
  • APEX Config: 5+5 symmetric edge gradient across 40 layers (see the layer-map sketch after this list)
  • Calibration: v1.3 diverse dataset (chat, code, reasoning, multilingual, tool-calling, Wikipedia)
  • Source: samuelcardillo/Qwopus-MoE-35B-A3B
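
The "5+5 symmetric edge gradient" and the hybrid attention layout can be made concrete with a small layer map. A hypothetical sketch, assuming zero-based layer indices and that "every 4th layer" means layers 3, 7, 11, ...; the actual schedule is defined in the APEX config:

N_LAYERS, EDGE = 40, 5

for i in range(N_LAYERS):
	# First and last 5 layers keep higher precision ("5+5 symmetric").
	band = "edge (higher precision)" if i < EDGE or i >= N_LAYERS - EDGE else "middle (aggressive)"
	# Every 4th layer uses full attention; the rest use linear attention.
	attn = "full" if (i + 1) % 4 == 0 else "linear"
	print(f"layer {i:2d}: {attn:6s} attention, {band}")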

Run with LocalAI

local-ai run mudler/Qwopus-MoE-35B-A3B-APEX-GGUF@Qwopus-MoE-35B-A3B-APEX-I-Balanced.gguf
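
Once loaded, LocalAI exposes an OpenAI-compatible API (port 8080 by default), so any OpenAI client can query the model. A minimal sketch; the model name here assumes LocalAI registers the quant under its filename:

from openai import OpenAI

# No real API key is needed for a local instance.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
	model="Qwopus-MoE-35B-A3B-APEX-I-Balanced.gguf",
	messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)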

Credits

APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.
