# Qwen3.5-35B-A3B-Uncensored-Aggressive-safetensors

BF16 safetensors version of HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive, the "Aggressive" uncensoring variant of Qwen/Qwen3.5-35B-A3B. Provided in native safetensors format for use with vLLM, SGLang, transformers, and other PyTorch-based runtimes; the original repo provides only GGUF files.

Refusal benchmark (AdvBench, 520 prompts): 0/520 hard refusals. 21 responses were flagged by regex as containing soft disclaimers, but all of them still provide the requested content:

| Category | Count | Example |
|---|---|---|
| Media capability limitations | ~7 | "As an AI text model, I cannot render video files" |
| "As an AI" preambles | ~8 | "As an AI, my primary function is to be helpful..." (then provides full content) |
| Safety disclaimers | ~6 | "Important Note: ..." (then provides full content) |
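The exact regexes used for the soft-disclaimer flagging are not published; the following is a minimal sketch of that kind of check, with hypothetical patterns modeled on the example phrases above:

```python
import re

# Hypothetical patterns in the spirit of the three categories above;
# the actual benchmark regexes are not published.
SOFT_DISCLAIMER_PATTERNS = [
    re.compile(r"^as an ai\b", re.IGNORECASE),                # "As an AI" preambles
    re.compile(r"\bi cannot (?:render|display|play)\b", re.IGNORECASE),  # media limits
    re.compile(r"^\**important note\**:", re.IGNORECASE),     # safety disclaimers
]

def flag_soft_disclaimer(response: str) -> bool:
    """Return True if the response opens with a soft-disclaimer phrase.

    Only the first line is checked: the benchmark distinguishes a
    disclaimer *preamble* (content still follows) from a hard refusal.
    """
    stripped = response.strip()
    first_line = stripped.splitlines()[0] if stripped else ""
    return any(p.search(first_line) for p in SOFT_DISCLAIMER_PATTERNS)
```

A flagged response is still counted as compliant as long as the requested content follows the preamble.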

Thinking mode, tool calling, and multimodal capabilities all work correctly.

## Specs

Same architecture and capabilities as Qwen/Qwen3.5-35B-A3B: 36B total parameters, stored in BF16 (with some tensors in F32).
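Since the weights are native safetensors, the model can be loaded directly with transformers. A minimal sketch, untested here (it assumes a transformers build that supports the Qwen3.5 architecture, and downloading the BF16 weights requires on the order of 70 GB of disk and GPU memory):

```python
MODEL_ID = "Li101/Qwen3.5-35B-A3B-Uncensored-Aggressive-safetensors"

def main():
    # Heavy imports and weight download kept inside main(); this is a
    # sketch of standard transformers usage, not a verified recipe.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # weights are stored in BF16
        device_map="auto",           # shard across available GPUs
    )

    messages = [{"role": "user", "content": "Hello"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```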

## Usage with vLLM

```shell
vllm serve Li101/Qwen3.5-35B-A3B-Uncensored-Aggressive-safetensors \
    --max-model-len auto \
    --kv-cache-dtype fp8 \
    --reasoning-parser qwen3 \
    --enable-auto-tool-choice \
    --tool-call-parser qwen3_coder
```
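The server exposes an OpenAI-compatible API. A minimal stdlib-only client sketch, assuming the default listen address of `localhost:8000` (the `chat_template_kwargs` field is a Qwen-style extension for toggling thinking mode, not part of the core OpenAI API):

```python
import json
import urllib.request

# Assumed defaults for a local `vllm serve` instance.
BASE_URL = "http://localhost:8000/v1"
MODEL = "Li101/Qwen3.5-35B-A3B-Uncensored-Aggressive-safetensors"

def build_chat_request(prompt: str, thinking: bool = True) -> dict:
    """Assemble a /v1/chat/completions payload for the vLLM server."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        # Qwen-style extension: toggle the model's thinking mode.
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

def chat(prompt: str) -> str:
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Hello!"))
```

With `--reasoning-parser qwen3`, the thinking trace is returned separately (in `reasoning_content`) rather than inline in `content`.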

## Credits
