# llama-cpp-python Mega-Factory Wheels
> "Stop waiting for `pip` to compile. Just install and run."
The most complete collection of pre-built llama-cpp-python wheels in existence: 8,333 wheels across every platform, Python version, backend, and CPU optimization level.

No more `cmake`, `gcc`, or compilation hell. No more waiting 10 minutes for a build that might fail. Just find your wheel and `pip install` it directly.
## Why These Wheels?
Standard wheels target the "lowest common denominator" to avoid crashes on old hardware. This collection goes further: the manylinux wheels are built using a massive Everything Preset targeting specific CPU instruction sets, maximizing your Tokens per Second (T/s).
- **Zero Dependencies:** No `cmake`, `gcc`, or `nvcc` required on your target machine.
- **Every Platform:** Linux (manylinux, aarch64, i686, RISC-V), Windows (amd64, 32-bit), macOS (Intel + Apple Silicon).
- **Server-Grade Power:** Optimized builds for Sapphire Rapids, Ice Lake, Alder Lake, Haswell, and more.
- **Full Backend Support:** OpenBLAS, MKL, Vulkan, CLBlast, OpenCL, RPC, and plain CPU builds.
- **Cutting Edge:** Python 3.8 through experimental 3.14, plus PyPy `pp38`–`pp310`.
- **GPU Too:** CUDA wheels (cu121–cu124) and macOS Metal wheels included.
## Collection Stats
| Platform | Wheels |
|---|---|
| Linux x86_64 (manylinux) | 4,940 |
| macOS Intel (x86_64) | 1,040 |
| Windows (amd64) | 1,010 |
| Windows (32-bit) | 634 |
| macOS Apple Silicon (arm64) | 289 |
| Linux i686 | 214 |
| Linux aarch64 | 120 |
| Linux x86_64 (plain) | 81 |
| Linux RISC-V | 5 |
| **Total** | **8,333** |
The manylinux builds alone cover 3,600+ combinations across library versions, backends, Python versions, and CPU profiles.
## How to Install

### Quick Install

Find your wheel filename (see naming convention below), then:

```shell
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/YOUR_WHEEL_NAME.whl"
```
### Common Examples

```shell
# Linux x86_64, Python 3.11, OpenBLAS, Haswell CPU (most common Linux setup)
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18+openblas_haswell-cp311-cp311-manylinux_2_31_x86_64.whl"

# Linux x86_64, Python 3.12, Basic CPU (maximum compatibility)
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18+basic_basic-cp312-cp312-manylinux_2_31_x86_64.whl"

# Windows, Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-win_amd64.whl"

# macOS Apple Silicon, Python 3.12
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp312-cp312-macosx_11_0_arm64.whl"

# macOS Intel, Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-macosx_10_9_x86_64.whl"

# Linux ARM64 (Raspberry Pi, AWS Graviton), Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-linux_aarch64.whl"
```
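If you are unsure which wheel tags your environment accepts before picking an example, pip can print them. A small sketch; note that pip itself documents `pip debug` as an unstable, unsupported command:

```shell
# Ask pip which wheel tags this environment can install.
# `pip debug` is officially marked as unstable by pip upstream.
python3 -m pip debug --verbose | grep -i -m 1 "compatible tags"
```

Any wheel whose `{pytag}-{pytag}-{platform}` suffix appears in the full tag list is installable in that environment.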
## Wheel Naming Convention

### manylinux wheels (custom-built)

```
llama_cpp_python-{version}+{backend}_{profile}-{pytag}-{pytag}-{platform}.whl
```
**Versions covered:** 0.3.0 through 0.3.18+
**Backends:**

| Backend | Description |
|---|---|
| `openblas` | OpenBLAS BLAS acceleration; best general-purpose CPU performance |
| `mkl` | Intel MKL acceleration; best on Intel CPUs |
| `basic` | No BLAS, maximum compatibility |
| `vulkan` | Vulkan GPU backend |
| `clblast` | CLBlast OpenCL GPU backend |
| `opencl` | Generic OpenCL GPU backend |
| `rpc` | Distributed inference over the network |
**CPU Profiles:**

| Profile | Instruction Sets | Era | Notes |
|---|---|---|---|
| `basic` | x86-64 baseline | Any | Maximum compatibility |
| `sse42` | SSE 4.2 | 2008+ | Nehalem |
| `sandybridge` | AVX | 2011+ | |
| `ivybridge` | AVX + F16C | 2012+ | |
| `haswell` | AVX2 + FMA + BMI2 | 2013+ | Most common |
| `skylakex` | AVX-512 | 2017+ | |
| `icelake` | AVX-512 + VNNI + VBMI | 2019+ | |
| `alderlake` | AVX-VNNI | 2021+ | |
| `sapphirerapids` | AVX-512 BF16 + AMX | 2023+ | Highest performance |
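Choosing a profile usually comes down to checking your CPU's feature flags. A minimal sketch for Linux, assuming `/proc/cpuinfo` is available; the `pick_profile` helper is illustrative (not part of this repo) and only distinguishes a few of the profiles above:

```shell
# Map Linux /proc/cpuinfo flags to a conservative wheel profile.
# Only a subset of profiles is covered; when in doubt, prefer the
# older (more compatible) profile.
pick_profile() {
  case " $1 " in
    *" avx512f "*) echo skylakex ;;
    *" avx2 "*)    echo haswell ;;
    *" avx "*)     echo sandybridge ;;
    *" sse4_2 "*)  echo sse42 ;;
    *)             echo basic ;;
  esac
}

# On a real Linux machine:
#   pick_profile "$(grep -m1 '^flags' /proc/cpuinfo)"
pick_profile "fpu sse4_2 avx avx2 fma bmi2"   # prints: haswell
```

A wheel built for a newer profile than your CPU supports will crash with an illegal-instruction error, so erring on the older profile is the safe default.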
**Python tags:** `cp38`, `cp39`, `cp310`, `cp311`, `cp312`, `cp313`, `cp314`, `pp38`, `pp39`, `pp310`

**Platform:** `manylinux_2_31_x86_64` (glibc 2.31+, compatible with Ubuntu 20.04+, Debian 11+)
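Putting the pieces together, a small shell sketch that assembles a manylinux wheel URL from the template; the variable names are mine, and the values mirror the OpenBLAS/Haswell install example above (substitute your own):

```shell
# Assemble a manylinux wheel URL from the naming template.
# All variable values are placeholders; adjust them for your setup.
VERSION=0.3.18
BACKEND=openblas
PROFILE=haswell
PYTAG=cp311
PLATFORM=manylinux_2_31_x86_64

WHEEL="llama_cpp_python-${VERSION}+${BACKEND}_${PROFILE}-${PYTAG}-${PYTAG}-${PLATFORM}.whl"
echo "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/${WHEEL}"
```

The resulting URL can be passed straight to `pip install`.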
### Windows / macOS / Linux ARM wheels (from abetlen)

```
llama_cpp_python-{version}-{pytag}-{pytag}-{platform}.whl
```
These are the official pre-built wheels from the upstream maintainer, covering versions 0.2.82 through 0.3.18+.
## How to Find Your Wheel
1. **Identify your Python version:** `python --version` → e.g. `3.11` → tag `cp311`
2. **Identify your platform:**
   - Linux x86_64 → `manylinux_2_31_x86_64`
   - Windows 64-bit → `win_amd64`
   - macOS Apple Silicon → `macosx_11_0_arm64`
   - macOS Intel → `macosx_10_9_x86_64`
3. **Pick a backend (manylinux only):** `openblas` for most use cases
4. **Pick a CPU profile (manylinux only):** `haswell` works on virtually all modern CPUs
5. **Browse the files in this repo, or construct the filename directly**
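Step 1 can be scripted; a minimal sketch, assuming `python3` is on your PATH:

```shell
# Print the cpXY wheel tag for the current Python interpreter.
PYTAG=$(python3 -c 'import sys; print("cp%d%d" % sys.version_info[:2])')
echo "$PYTAG"
```

On Python 3.11 this prints `cp311`, which slots directly into the filename templates above.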
## Sources & Credits

### manylinux Wheels (built by AIencoder)

The 4,940 manylinux x86_64 wheels were built by a distributed 4-worker HuggingFace Space factory system (`AIencoder/wheel-factory-*`): a custom-built automated pipeline covering every possible llama.cpp cmake option on manylinux:
- Every backend: OpenBLAS, MKL, Basic, Vulkan, CLBlast, OpenCL, RPC
- Every CPU hardware profile from baseline x86-64 up to Sapphire Rapids AMX
- Python 3.8 through 3.14
- llama-cpp-python versions 0.3.0 through 0.3.18+
### Windows / macOS / Linux ARM Wheels (from abetlen)
The remaining 3,393 wheels (Windows, macOS, Linux aarch64/i686/riscv64, PyPy) were sourced from the official releases by Andrei Betlen (@abetlen), the original author and maintainer of llama-cpp-python. These include:
- CPU wheels for all platforms via https://abetlen.github.io/llama-cpp-python/whl/cpu/
- Metal wheels for macOS GPU acceleration
- CUDA wheels (cu121–cu124) for Windows and Linux
All credit for the underlying library goes to Georgi Gerganov (@ggerganov) and the llama.cpp team, and to Andrei Betlen for the Python bindings.
## Notes
- All wheels are MIT licensed (same as llama-cpp-python upstream)
- manylinux wheels require glibc 2.31+ (Ubuntu 20.04+, Debian 11+)
- `manylinux` and `linux_x86_64` are not the same thing: manylinux wheels have broad distro compatibility, plain linux wheels do not
- CUDA wheels require the matching CUDA toolkit to be installed
- Metal wheels require macOS 11.0+ and an Apple Silicon or AMD GPU
- This collection is updated periodically as new versions are released