Qwen3.5-abliterated-MAX-AIO-GGUF

prithivMLmods' Qwen3.5 Unredacted-MAX models (0.8B, 2B, 4B, 9B, and 27B variants) are an experimental "abliterated" evolution of Alibaba's official Qwen3.5 series. They apply refusal-minimization techniques to strip safety alignments and internal censorship patterns while preserving, and in places enhancing, the core reasoning, coding, math, and multimodal capabilities across scales.

Created by prithivMLmods as part of their "Unredacted MAX" collection, alongside earlier Qwen3-VL abliterated releases such as Qwen3-VL-8B-Instruct-Unredacted-MAX, these models undergo continual abliteration training aimed at unrestricted instruction adherence, dense output generation, and reduced hallucination on diverse prompts without standard safety refusals. FP8 and GGUF quants make them suitable for edge deployment in research, dataset creation, or applications that need uncensored reasoning.

Scaling from the ultra-light 0.8B (mobile and Raspberry Pi class hardware) up to 27B, they retain Qwen3.5's hybrid Gated DeltaNet architecture, 262K context window, 201-language vocabulary, and toggleable thinking modes while prioritizing raw capability over guardrails. Community benchmarks suggest they outperform their vanilla counterparts on open-ended tasks, though no official evaluations exist.
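The GGUF files can be loaded by any llama.cpp-based runtime. Below is a minimal text-only sketch using the llama-cpp-python bindings; it assumes a llama.cpp build recent enough to recognize the qwen35 architecture, and the repo path and filename are taken from the directory listing further down.

```python
# Minimal text-only inference sketch.
# Assumes: pip install llama-cpp-python huggingface_hub
# and a llama.cpp build that supports the qwen35 architecture.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the smallest quant in this repo (path from the file tree below).
model_path = hf_hub_download(
    repo_id="prithivMLmods/Qwen3.5-abliterated-MAX-AIO-GGUF",
    filename="Qwen3.5-0.8B-Unredacted-MAX-GGUF/Qwen3.5-0.8B-Unredacted-MAX.Q8_0.gguf",
)

# n_ctx can be raised toward the model's 262K limit if memory allows.
llm = Llama(model_path=model_path, n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the Gated DeltaNet architecture."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```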

Repository directory structure:
prithivMLmods/Qwen3.5-abliterated-MAX-AIO-GGUF (main)
+-- README.md (9.9 KB)
+-- .gitattributes (4.8 KB)
+-- config.json (32 B)
+-- Qwen3.5-0.8B-Unredacted-MAX-GGUF
|   +-- Qwen3.5-0.8B-Unredacted-MAX.F32.gguf (2.8 GB)
|   +-- Qwen3.5-0.8B-Unredacted-MAX.BF16.gguf (1.4 GB)
|   +-- Qwen3.5-0.8B-Unredacted-MAX.F16.gguf (1.4 GB)
|   +-- Qwen3.5-0.8B-Unredacted-MAX.Q8_0.gguf (774.2 MB)
|   +-- Qwen3.5-0.8B-Unredacted-MAX.mmproj-f32.gguf (383.7 MB)
|   +-- Qwen3.5-0.8B-Unredacted-MAX.mmproj-bf16.gguf (197.7 MB)
|   +-- Qwen3.5-0.8B-Unredacted-MAX.mmproj-f16.gguf (197.7 MB)
|   +-- Qwen3.5-0.8B-Unredacted-MAX.mmproj-q8_0.gguf (110.6 MB)
+-- Qwen3.5-2B-Unredacted-MAX-GGUF
|   +-- Qwen3.5-2B-Unredacted-MAX.F32.gguf (7.0 GB)
|   +-- Qwen3.5-2B-Unredacted-MAX.BF16.gguf (3.5 GB)
|   +-- Qwen3.5-2B-Unredacted-MAX.F16.gguf (3.5 GB)
|   +-- Qwen3.5-2B-Unredacted-MAX.Q8_0.gguf (1.9 GB)
|   +-- Qwen3.5-2B-Unredacted-MAX.mmproj-f32.gguf (1.2 GB)
|   +-- Qwen3.5-2B-Unredacted-MAX.mmproj-bf16.gguf (640.3 MB)
|   +-- Qwen3.5-2B-Unredacted-MAX.mmproj-f16.gguf (640.3 MB)
|   +-- Qwen3.5-2B-Unredacted-MAX.mmproj-q8_0.gguf (347.8 MB)
+-- Qwen3.5-4B-Unredacted-MAX-GGUF
|   +-- Qwen3.5-4B-Unredacted-MAX.F32.gguf (15.7 GB)
|   +-- Qwen3.5-4B-Unredacted-MAX.BF16.gguf (7.8 GB)
|   +-- Qwen3.5-4B-Unredacted-MAX.F16.gguf (7.8 GB)
|   +-- Qwen3.5-4B-Unredacted-MAX.Q8_0.gguf (4.2 GB)
|   +-- Qwen3.5-4B-Unredacted-MAX.mmproj-f32.gguf (1.2 GB)
|   +-- Qwen3.5-4B-Unredacted-MAX.mmproj-bf16.gguf (644.3 MB)
|   +-- Qwen3.5-4B-Unredacted-MAX.mmproj-f16.gguf (644.3 MB)
|   +-- Qwen3.5-4B-Unredacted-MAX.mmproj-q8_0.gguf (349.9 MB)
+-- Qwen3.5-9B-Unredacted-MAX-GGUF
    +-- Qwen3.5-9B-Unredacted-MAX.F32.gguf (33.4 GB)
    +-- Qwen3.5-9B-Unredacted-MAX.BF16.gguf (16.7 GB)
    +-- Qwen3.5-9B-Unredacted-MAX.F16.gguf (16.7 GB)
    +-- Qwen3.5-9B-Unredacted-MAX.Q8_0.gguf (8.9 GB)
    +-- Qwen3.5-9B-Unredacted-MAX.mmproj-f32.gguf (1.7 GB)
    +-- Qwen3.5-9B-Unredacted-MAX.mmproj-bf16.gguf (879.0 MB)
    +-- Qwen3.5-9B-Unredacted-MAX.mmproj-f16.gguf (879.0 MB)
    +-- Qwen3.5-9B-Unredacted-MAX.mmproj-q8_0.gguf (595.3 MB)
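Each size folder ships the language model together with a matching mmproj (multimodal projector) file; both are needed for image input. The helper below is a sketch of fetching a matched pair, relying only on the filename pattern visible in the tree above (note the mmproj quant suffix is lowercase, e.g. mmproj-q8_0).

```python
from huggingface_hub import hf_hub_download

REPO = "prithivMLmods/Qwen3.5-abliterated-MAX-AIO-GGUF"

def fetch_pair(size: str = "2B", quant: str = "Q8_0") -> tuple[str, str]:
    """Download a model GGUF and its matching mmproj from one size folder."""
    folder = f"Qwen3.5-{size}-Unredacted-MAX-GGUF"
    stem = f"Qwen3.5-{size}-Unredacted-MAX"
    model = hf_hub_download(REPO, filename=f"{folder}/{stem}.{quant}.gguf")
    mmproj = hf_hub_download(REPO, filename=f"{folder}/{stem}.mmproj-{quant.lower()}.gguf")
    return model, mmproj

model_path, mmproj_path = fetch_pair("2B", "Q8_0")
```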

Model List

Qwen3.5-0.8B-Unredacted-MAX

| File Name | Quant Type | File Size |
| --- | --- | --- |
| Qwen3.5-0.8B-Unredacted-MAX.BF16.gguf | BF16 | 1.52 GB |
| Qwen3.5-0.8B-Unredacted-MAX.F16.gguf | F16 | 1.52 GB |
| Qwen3.5-0.8B-Unredacted-MAX.F32.gguf | F32 | 3.02 GB |
| Qwen3.5-0.8B-Unredacted-MAX.Q8_0.gguf | Q8_0 | 812 MB |
| Qwen3.5-0.8B-Unredacted-MAX.mmproj-bf16.gguf | mmproj-bf16 | 207 MB |
| Qwen3.5-0.8B-Unredacted-MAX.mmproj-f16.gguf | mmproj-f16 | 207 MB |
| Qwen3.5-0.8B-Unredacted-MAX.mmproj-f32.gguf | mmproj-f32 | 402 MB |
| Qwen3.5-0.8B-Unredacted-MAX.mmproj-q8_0.gguf | mmproj-q8_0 | 116 MB |

Qwen3.5-2B-Unredacted-MAX

| File Name | Quant Type | File Size |
| --- | --- | --- |
| Qwen3.5-2B-Unredacted-MAX.BF16.gguf | BF16 | 3.78 GB |
| Qwen3.5-2B-Unredacted-MAX.F16.gguf | F16 | 3.78 GB |
| Qwen3.5-2B-Unredacted-MAX.F32.gguf | F32 | 7.54 GB |
| Qwen3.5-2B-Unredacted-MAX.Q8_0.gguf | Q8_0 | 2.01 GB |
| Qwen3.5-2B-Unredacted-MAX.mmproj-bf16.gguf | mmproj-bf16 | 671 MB |
| Qwen3.5-2B-Unredacted-MAX.mmproj-f16.gguf | mmproj-f16 | 671 MB |
| Qwen3.5-2B-Unredacted-MAX.mmproj-f32.gguf | mmproj-f32 | 1.33 GB |
| Qwen3.5-2B-Unredacted-MAX.mmproj-q8_0.gguf | mmproj-q8_0 | 365 MB |

Qwen3.5-4B-Unredacted-MAX

| File Name | Quant Type | File Size |
| --- | --- | --- |
| Qwen3.5-4B-Unredacted-MAX.BF16.gguf | BF16 | 8.42 GB |
| Qwen3.5-4B-Unredacted-MAX.F16.gguf | F16 | 8.42 GB |
| Qwen3.5-4B-Unredacted-MAX.F32.gguf | F32 | 16.8 GB |
| Qwen3.5-4B-Unredacted-MAX.Q8_0.gguf | Q8_0 | 4.48 GB |
| Qwen3.5-4B-Unredacted-MAX.mmproj-bf16.gguf | mmproj-bf16 | 676 MB |
| Qwen3.5-4B-Unredacted-MAX.mmproj-f16.gguf | mmproj-f16 | 676 MB |
| Qwen3.5-4B-Unredacted-MAX.mmproj-f32.gguf | mmproj-f32 | 1.33 GB |
| Qwen3.5-4B-Unredacted-MAX.mmproj-q8_0.gguf | mmproj-q8_0 | 367 MB |

Qwen3.5-9B-Unredacted-MAX

| File Name | Quant Type | File Size |
| --- | --- | --- |
| Qwen3.5-9B-Unredacted-MAX.BF16.gguf | BF16 | 17.9 GB |
| Qwen3.5-9B-Unredacted-MAX.F16.gguf | F16 | 17.9 GB |
| Qwen3.5-9B-Unredacted-MAX.F32.gguf | F32 | 35.8 GB |
| Qwen3.5-9B-Unredacted-MAX.Q8_0.gguf | Q8_0 | 9.53 GB |
| Qwen3.5-9B-Unredacted-MAX.mmproj-bf16.gguf | mmproj-bf16 | 922 MB |
| Qwen3.5-9B-Unredacted-MAX.mmproj-f16.gguf | mmproj-f16 | 922 MB |
| Qwen3.5-9B-Unredacted-MAX.mmproj-f32.gguf | mmproj-f32 | 1.82 GB |
| Qwen3.5-9B-Unredacted-MAX.mmproj-q8_0.gguf | mmproj-q8_0 | 624 MB |
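For image input, the model GGUF and its mmproj must be passed together. A sketch using llama.cpp's llama-mtmd-cli binary is below; the -m, --mmproj, --image, and -p flags are standard llama-mtmd-cli options, but support for this model family in a given build is an assumption, and chart.png is a hypothetical local test image.

```python
# Sketch: image + text inference through llama.cpp's llama-mtmd-cli binary.
# Assumes the binary is on PATH and built from a llama.cpp revision that
# recognizes this model family.
import subprocess

model_path = "Qwen3.5-9B-Unredacted-MAX.Q8_0.gguf"          # from the 9B folder
mmproj_path = "Qwen3.5-9B-Unredacted-MAX.mmproj-q8_0.gguf"  # matching projector

result = subprocess.run(
    ["llama-mtmd-cli",
     "-m", model_path,
     "--mmproj", mmproj_path,
     "--image", "chart.png",            # hypothetical local image
     "-p", "Describe this image in two sentences."],
    capture_output=True,
    text=True,
)
print(result.stdout)
```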

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similar-sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[Graph: quant type quality comparison by ikawrakow (lower is better)]
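As a rough rule of thumb, pick the largest quant whose file size plus runtime overhead fits your memory. The sketch below encodes that heuristic for the 9B variant using the sizes from the table above; the 1.2x overhead factor for KV cache and activations is an assumption, not a measured value.

```python
# Rough quant picker: largest file that fits the memory budget, with
# headroom for KV cache and activations. Sizes (GB) are the 9B table
# values above; the 1.2x overhead factor is a guess.
QUANTS_9B = {"Q8_0": 9.53, "BF16": 17.9, "F16": 17.9, "F32": 35.8}

def pick_quant(budget_gb: float, overhead: float = 1.2) -> str | None:
    fitting = {q: s for q, s in QUANTS_9B.items() if s * overhead <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(16.0))  # -> "Q8_0" on a 16 GB machine
```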

Downloads last month: 2,461
Format: GGUF
Model size: 0.8B params
Architecture: qwen35

Model tree for prithivMLmods/Qwen3.5-abliterated-MAX-AIO-GGUF

Base model: Qwen/Qwen3.5-9B, finetuned and then quantized into this model.