Support this work: donate.sybilsolutions.ai
REAP surfaces: GLM | MiniMax | Qwen | Gemma | Paper | Code | PR17 | Cerebras Collection
𓌳 REAP 𓌳 the Experts: Why Pruning Prevails for One-Shot MoE Compression
📄 Paper • 💻 Code
DeepSeek-V3.2-REAP-345B-W3A16
REAP-pruned + W3A16 quantized DeepSeek-V3.2 for efficient deployment.
🙏 Acknowledgments
- Prime Intellect — Compute sponsorship
- Cerebras — REAP methodology
📋 Model Specifications
| Property | Value |
|---|---|
| Base Model | DeepSeek-V3.2 |
| Parameters | 345B |
| Quantization | W3A16 (3-bit weights) |
🔬 Calibration Dataset: Deep Dive
REAP's effectiveness depends critically on calibration data that represents the target use case. We specifically optimized for code generation, function/tool calling, and agentic workflows.
Why These 3 Datasets?
| Dataset | Samples | Purpose | Why It Matters |
|---|---|---|---|
| evol-codealpaca-v1 | 700 | Code generation | 51% of mix — Code tasks activate specific expert pathways; pruning without code calibration destroys coding ability |
| xlam-function-calling-60k | 330 | Function/tool calling | 24% of mix — Tool use requires structured JSON output; experts handling schema generation must be preserved |
| SWE-smith-trajectories | 330 | Agentic multi-turn | 24% of mix — Real SWE-bench trajectories with tool calls, file edits, and multi-step reasoning |
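The mix percentages in the table follow directly from the 700/330/330 sample counts; a quick sanity check:

```python
# Calibration-mix sample counts from the table above.
mix = {
    "evol-codealpaca-v1": 700,         # code generation
    "xlam-function-calling-60k": 330,  # function/tool calling
    "SWE-smith-trajectories": 330,     # agentic multi-turn
}

total = sum(mix.values())  # 1,360 calibration samples in total
shares = {name: round(100 * n / total) for name, n in mix.items()}
print(total, shares)
# 1360 {'evol-codealpaca-v1': 51, 'xlam-function-calling-60k': 24, 'SWE-smith-trajectories': 24}
```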
The Science Behind Dataset Selection
REAP Algorithm:
1. Forward pass calibration samples through model
2. Record which experts activate and their magnitudes
3. Compute saliency = router_weight × activation_norm
4. Prune lowest-saliency experts
Key Insight: Experts are TASK-SPECIFIC
├── Some experts specialize in natural language
├── Some experts specialize in code syntax
├── Some experts specialize in JSON/structured output
└── Some experts specialize in multi-turn context
If calibration lacks code → code-specialized experts appear "unused" → get pruned → model loses coding ability
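The pruning loop above can be sketched with simulated routing statistics. This is an illustrative toy, not the REAP implementation: the routing probabilities, gate values, and activation norms are random stand-ins for the quantities a real calibration forward pass would record. The skewed `route_p` plays the role of a calibration set that rarely exercises two experts, showing how those experts end up with the lowest saliency and get pruned.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, n_tokens = 8, 2, 2000

# Skewed routing: experts 6 and 7 are rarely selected, standing in for
# task-specific experts the calibration set never exercises.
route_p = np.array([0.20, 0.20, 0.15, 0.15, 0.12, 0.12, 0.03, 0.03])

saliency = np.zeros(n_experts)
for _ in range(n_tokens):
    # Steps 1-2: "forward pass" a token, record which experts fire and how hard.
    chosen = rng.choice(n_experts, size=top_k, replace=False, p=route_p)
    gate = rng.dirichlet(np.ones(top_k))          # router weights over the top-k
    act_norm = rng.uniform(0.5, 1.5, size=top_k)  # stand-in for ||expert output||
    # Step 3: saliency accumulates router_weight x activation_norm.
    saliency[chosen] += gate * act_norm

saliency /= n_tokens

# Step 4: prune the lowest-saliency experts.
n_prune = 2
pruned = np.argsort(saliency)[:n_prune]
kept = sorted(set(range(n_experts)) - set(pruned.tolist()))
print("pruned experts:", sorted(pruned.tolist()))  # the rarely-routed experts 6 and 7
```

Because saliency is driven by how often the calibration data routes to an expert, any expert the calibration mix never activates looks prunable, which is exactly why the mix must cover code, tool calling, and agentic turns.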
Cerebras' Original Mix (from paper)
Cerebras used the same 3 datasets in their GLM-4.6 REAP experiments:
- evol-codealpaca-v1 for code generation
- xlam-function-calling-60k for tool calling
- SWE-smith-trajectories for agentic tasks
We followed this exact recipe for reproducibility.
Combined Dataset
Our calibration mix: 0xSero/glm47-reap-calibration-v2
🧾 Citation
@article{lasby2025reap,
  title={REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression},
  author={Lasby, Mike and Lazarevich, Ivan and Sinnadurai, Nish and Lie, Sean and Ioannou, Yani and Thangarasa, Vithursan},
  journal={arXiv preprint arXiv:2510.13999},
  year={2025},
  url={https://arxiv.org/abs/2510.13999}
}
Support and links
- Donate: https://donate.sybilsolutions.ai
- X: https://x.com/0xsero
- GitHub: https://github.com/0xsero
Sponsors
Thanks to our kind sponsors; this work wouldn't be possible without them:
- Nvidia
- TNG Technology
- Lambda
- Prime Intellect
- HotAisle
Model tree for 0xSero/DeepSeek-V3.2-REAP-345B-W3A16
- Base model: deepseek-ai/DeepSeek-V3