DFlash compatibility with AutoRound (W4A16) quantized Qwen3.5-9B in vLLM?

by Vishva007

Hi,

I’m exploring DFlash speculative decoding with Qwen3.5 models and wanted to clarify compatibility with quantized setups.

I’m currently using an AutoRound-quantized model:
Vishva007/Qwen3.5-9B-W4A16-AutoRound

Since DFlash support in vLLM is still under development (PRs #36847 and #36767), I had a few questions:

1. Is the z-lab/Qwen3.5-9B-DFlash draft model compatible with AutoRound W4A16 quantized target models?
2. Does DFlash currently require FP16/BF16 weights for the draft model, or can it work with 4-bit quantized weights?
3. What is the recommended way to test DFlash in vLLM right now, given that support is still at the PR stage?
4. Are there any known limitations similar to Eagle3 (BLR2/Eagle3-Qwen3.5-9B), which doesn’t seem to work with quantized models?
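For what it's worth, while support is still in PR the usual approach is to build vLLM from the PR branch and pass a speculative-decoding config at serve time. A rough sketch of what that might look like, with the caveat that `"method": "dflash"` and the exact config keys are assumptions on my part until the PRs land (check the PR diff for the final interface):

```shell
# Build vLLM from the DFlash PR branch (PR #36847 from the post above).
git clone https://github.com/vllm-project/vllm.git
cd vllm
git fetch origin pull/36847/head:dflash
git checkout dflash
pip install -e .

# Serve the AutoRound W4A16 target model with the DFlash draft model.
# NOTE: the "method" value and config keys below are assumptions;
# the PR may use a different name or schema.
vllm serve Vishva007/Qwen3.5-9B-W4A16-AutoRound \
  --speculative-config '{"method": "dflash", "model": "z-lab/Qwen3.5-9B-DFlash", "num_speculative_tokens": 5}'
```

Whether the draft model can stay in BF16 while the target is W4A16 is exactly the kind of thing the PR discussion should clarify; mixed-precision draft/target pairs work for some speculative methods but not all.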

I’m very interested in experimenting with DFlash and would appreciate any guidance or examples to get started.

Thanks in advance!

+1, I'd be glad if you built a working version for Q4_K_M quantization too!
