| repo_name | repo_link | category | github_about_section | homepage_link | github_topic_closest_fit | contributors_all | contributors_2025 | contributors_2024 | contributors_2023 |
|---|---|---|---|---|---|---|---|---|---|
llvm-project | https://github.com/llvm/llvm-project | compiler | The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. | http://llvm.org | compiler | 7,043 | 2,378 | 2,130 | 1,920 |
vllm | https://github.com/vllm-project/vllm | inference engine | A high-throughput and memory-efficient inference and serving engine for LLMs | https://docs.vllm.ai | inference | 2,302 | 1,369 | 579 | 145 |
pytorch | https://github.com/pytorch/pytorch | machine learning framework | Tensors and Dynamic neural networks in Python with strong GPU acceleration | https://pytorch.org | machine-learning | 5,662 | 1,187 | 1,090 | 1,024 |
transformers | https://github.com/huggingface/transformers | multi-purpose library | Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training. | https://huggingface.co/transformers | machine-learning | 3,728 | 860 | 769 | 758 |
sglang | https://github.com/sgl-project/sglang | inference engine | SGLang is a fast serving framework for large language models and vision language models. | https://docs.sglang.ai | inference | 1,229 | 796 | 189 | 1 |
hhvm | https://github.com/facebook/hhvm | virtual machine | A virtual machine for executing programs written in Hack. | https://hhvm.com | virtual-machine | 2,773 | 692 | 648 | 604 |
llama.cpp | https://github.com/ggml-org/llama.cpp | inference engine | LLM inference in C/C++ | https://ggml.ai | inference | 1,540 | 535 | 575 | 461 |
kubernetes | https://github.com/kubernetes/kubernetes | container orchestration | Production-Grade Container Scheduling and Management | https://kubernetes.io | kubernetes | 5,157 | 541 | 499 | 565 |
tensorflow | https://github.com/tensorflow/tensorflow | machine learning framework | An Open Source Machine Learning Framework for Everyone | https://tensorflow.org | machine-learning | 4,672 | 506 | 523 | 630 |
verl | https://github.com/volcengine/verl | reinforcement learning | verl: Volcano Engine Reinforcement Learning for LLMs | https://verl.readthedocs.io | deep-reinforcement-learning | 567 | 454 | 10 | 0 |
rocm-systems | https://github.com/ROCm/rocm-systems | multi-purpose library | super repo for rocm systems projects | https://amd.com/en/products/software/rocm.html | amd | 1,142 | 486 | 351 | 213 |
ray | https://github.com/ray-project/ray | multi-purpose library | Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads. | https://ray.io | machine-learning | 1,458 | 397 | 223 | 230 |
spark | https://github.com/apache/spark | data processing | Apache Spark - A unified analytics engine for large-scale data processing | https://spark.apache.org | data-processing | 3,129 | 322 | 300 | 336 |
goose | https://github.com/block/goose | agent | an open source, extensible AI agent that goes beyond code suggestions - install, execute, edit, and test with any LLM | https://block.github.io/goose | ai-agents | 417 | 319 | 32 | 0 |
elasticsearch | https://github.com/elastic/elasticsearch | search engine | Free and Open Source, Distributed, RESTful Search Engine | https://elastic.co/products/elasticsearch | search-engine | 2,337 | 316 | 284 | 270 |
jax | https://github.com/jax-ml/jax | scientific computing | Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more | https://docs.jax.dev | scientific-computing | 1,033 | 316 | 280 | 202 |
modelcontextprotocol | https://github.com/modelcontextprotocol/modelcontextprotocol | mcp | Specification and documentation for the Model Context Protocol | https://modelcontextprotocol.io | mcp | 361 | 301 | 42 | 0 |
executorch | https://github.com/pytorch/executorch | model compiler | On-device AI across mobile, embedded and edge for PyTorch | https://executorch.ai | inference | 493 | 267 | 243 | 77 |
numpy | https://github.com/numpy/numpy | scientific computing | The fundamental package for scientific computing with Python. | https://numpy.org | scientific-computing | 2,210 | 237 | 233 | 252 |
triton | https://github.com/triton-lang/triton | parallel computing dsl | Development repository for the Triton language and compiler | https://triton-lang.org | parallel-programming | 558 | 233 | 206 | 159 |
modular | https://github.com/modular/modular | parallel computing | The Modular Platform (includes MAX & Mojo) | https://docs.modular.com | parallel-programming | 418 | 222 | 205 | 99 |
scipy | https://github.com/scipy/scipy | scientific computing | SciPy library main repository | https://scipy.org | scientific-computing | 2,008 | 213 | 251 | 245 |
ollama | https://github.com/ollama/ollama | inference engine | Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. | https://ollama.com | inference | 596 | 202 | 314 | 97 |
trl | https://github.com/huggingface/trl | reinforcement learning | Train transformer language models with reinforcement learning. | http://hf.co/docs/trl | reinforcement-learning | 472 | 189 | 154 | 122 |
flashinfer | https://github.com/flashinfer-ai/flashinfer | gpu kernels | FlashInfer: Kernel Library for LLM Serving | https://flashinfer.ai | attention | 259 | 158 | 50 | 11 |
aiter | https://github.com/ROCm/aiter | gpu kernels | AI Tensor Engine for ROCm | https://rocm.blogs.amd.com/software-tools-optimization/aiter-ai-tensor-engine/README.html | null | 216 | 145 | 10 | 0 |
LMCache | https://github.com/LMCache/LMCache | inference | Supercharge Your LLM with the Fastest KV Cache Layer | https://lmcache.ai | null | 174 | 144 | 18 | 0 |
Mooncake | https://github.com/kvcache-ai/Mooncake | inference | Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. | https://kvcache-ai.github.io/Mooncake | inference | 188 | 133 | 13 | 0 |
torchtitan | https://github.com/pytorch/torchtitan | training framework | A PyTorch native platform for training generative AI models | https://arxiv.org/abs/2410.06511 | null | 181 | 119 | 43 | 1 |
ao | https://github.com/pytorch/ao | quantization | PyTorch native quantization and sparsity for training and inference | https://pytorch.org/ao | quantization | 211 | 114 | 100 | 5 |
ComfyUI | https://github.com/comfyanonymous/ComfyUI | user interface | The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface. | https://comfy.org | stable-diffusion | 309 | 108 | 119 | 94 |
unsloth | https://github.com/unslothai/unsloth | fine tuning | Fine-tuning & Reinforcement Learning for LLMs. Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM. | https://docs.unsloth.ai | fine-tuning | 175 | 108 | 29 | 3 |
accelerate | https://github.com/huggingface/accelerate | training framework | A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support. | https://huggingface.co/docs/accelerate | null | 410 | 97 | 124 | 149 |
terminal-bench | https://github.com/laude-institute/terminal-bench | benchmark | A benchmark for LLMs on complicated tasks in the terminal | https://tbench.ai | benchmark | 96 | 96 | 0 | 0 |
DeepSpeed | https://github.com/deepspeedai/DeepSpeed | training framework | DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. | https://deepspeed.ai | null | 458 | 96 | 134 | 165 |
milvus | https://github.com/milvus-io/milvus | vector database | Milvus is a high-performance, cloud-native vector database built for scalable vector ANN search | https://milvus.io | vector-search | 398 | 95 | 84 | 72 |
cutlass | https://github.com/NVIDIA/cutlass | parallel computing | CUDA Templates and Python DSLs for High-Performance Linear Algebra | https://docs.nvidia.com/cutlass/index.html | parallel-programming | 264 | 94 | 64 | 66 |
tilelang | https://github.com/tile-ai/tilelang | parallel computing dsl | Domain-specific language designed to streamline the development of high-performance GPU/CPU/Accelerators kernels | https://tilelang.com | parallel-programming | 117 | 89 | 1 | 0 |
monarch | https://github.com/meta-pytorch/monarch | distributed computing | PyTorch Single Controller | https://meta-pytorch.org/monarch | null | 101 | 85 | 0 | 0 |
Liger-Kernel | https://github.com/linkedin/Liger-Kernel | kernel examples | Efficient Triton Kernels for LLM Training | https://openreview.net/pdf?id=36SjAIT42G | triton | 138 | 78 | 61 | 0 |
hipBLASLt | https://github.com/AMD-AGI/hipBLASLt | Basic Linear Algebra Subprograms (BLAS) | hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditional BLAS library | https://rocm.docs.amd.com/projects/hipBLASLt | matrix-multiplication | 111 | 69 | 70 | 35 |
peft | https://github.com/huggingface/peft | fine tuning | PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. | https://huggingface.co/docs/peft | null | 286 | 69 | 111 | 115 |
ROCm | https://github.com/ROCm/ROCm | multi-purpose library | AMD ROCm Software - GitHub Home | https://rocm.docs.amd.com | null | 167 | 67 | 61 | 44 |
mcp-agent | https://github.com/lastmile-ai/mcp-agent | mcp | Build effective agents using Model Context Protocol and simple workflow patterns | null | mcp | 64 | 63 | 1 | 0 |
rdma-core | https://github.com/linux-rdma/rdma-core | systems level code | RDMA core userspace libraries and daemons | null | null | 441 | 62 | 61 | 66 |
onnx | https://github.com/onnx/onnx | machine learning interoperability | Open standard for machine learning interoperability | https://onnx.ai | onnx | 380 | 56 | 45 | 61 |
letta | https://github.com/letta-ai/letta | agent | Letta is the platform for building stateful agents: open AI with advanced memory that can learn and self-improve over time. | https://docs.letta.com | ai-agents | 159 | 57 | 75 | 47 |
helion | https://github.com/pytorch/helion | parallel computing dsl | A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. | https://helionlang.com | parallel-programming | 66 | 49 | 0 | 0 |
openevolve | https://github.com/codelion/openevolve | evolutionary algorithm | Open-source implementation of AlphaEvolve | null | genetic-algorithm | 51 | 46 | 0 | 0 |
lightning-thunder | https://github.com/Lightning-AI/lightning-thunder | model compiler | PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily write your own. | null | null | 79 | 44 | 47 | 29 |
truss | https://github.com/basetenlabs/truss | inference engine | The simplest way to serve AI/ML models in production | https://truss.baseten.co | inference | 84 | 44 | 30 | 21 |
cuda-python | https://github.com/NVIDIA/cuda-python | middleware | CUDA Python: Performance meets Productivity | https://nvidia.github.io/cuda-python | parallel-programming | 54 | 41 | 12 | 1 |
warp | https://github.com/NVIDIA/warp | spatial computing | A Python framework for accelerated simulation, data generation and spatial computing. | https://nvidia.github.io/warp | physics-simulation | 89 | 40 | 29 | 17 |
metaflow | https://github.com/Netflix/metaflow | container orchestration | Build, Manage and Deploy AI/ML Systems | https://metaflow.org | null | 132 | 37 | 35 | 28 |
numba | https://github.com/numba/numba | compiler | NumPy aware dynamic Python compiler using LLVM | https://numba.pydata.org | null | 446 | 40 | 32 | 55 |
SWE-bench | https://github.com/SWE-bench/SWE-bench | benchmark | SWE-bench: Can Language Models Resolve Real-world Github Issues? | https://swebench.com | benchmark | 66 | 33 | 37 | 9 |
Triton-distributed | https://github.com/ByteDance-Seed/Triton-distributed | distributed computing | Distributed Compiler based on Triton for Parallel Systems | https://triton-distributed.readthedocs.io | null | 35 | 30 | 0 | 0 |
ThunderKittens | https://github.com/HazyResearch/ThunderKittens | parallel computing | Tile primitives for speedy kernels | https://hazyresearch.stanford.edu/blog/2024-10-29-tk2 | parallel-programming | 37 | 29 | 13 | 0 |
dstack | https://github.com/dstackai/dstack | container orchestration | dstack is an open-source control plane for running development, training, and inference jobs on GPUs-across hyperscalers, neoclouds, or on-prem. | https://dstack.ai | orchestration | 69 | 28 | 42 | 14 |
ome | https://github.com/sgl-project/ome | container orchestration | OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) | http://docs.sglang.ai/ome | k8s | 31 | 28 | 0 | 0 |
server | https://github.com/triton-inference-server/server | inference server | The Triton Inference Server provides an optimized cloud and edge inferencing solution. | https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html | inference | 149 | 24 | 36 | 34 |
ccache | https://github.com/ccache/ccache | compiler | ccache - a fast compiler cache | https://ccache.dev | null | 224 | 20 | 28 | 22 |
lapack | https://github.com/Reference-LAPACK/lapack | linear algebra | LAPACK is a library of Fortran subroutines for solving the most commonly occurring problems in numerical linear algebra. | https://netlib.org/lapack | linear-algebra | 187 | 23 | 25 | 42 |
quack | https://github.com/Dao-AILab/quack | kernel examples | A Quirky Assortment of CuTe Kernels | null | null | 31 | 17 | 0 | 0 |
KernelBench | https://github.com/ScalingIntelligence/KernelBench | benchmark | KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems | https://scalingintelligence.stanford.edu/blogs/kernelbench | benchmark | 21 | 16 | 3 | 0 |
reference-kernels | https://github.com/gpu-mode/reference-kernels | kernel examples | Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! | https://gpumode.com | null | 23 | 16 | 0 | 0 |
synthetic-data-kit | https://github.com/meta-llama/synthetic-data-kit | synthetic data generation | Tool for generating high quality Synthetic datasets | https://pypi.org/project/synthetic-data-kit | synthetic-dataset-generation | 15 | 15 | 0 | 0 |
tritonparse | https://github.com/meta-pytorch/tritonparse | performance testing | TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels | https://meta-pytorch.org/tritonparse | null | 26 | 15 | 0 | 0 |
kernels | https://github.com/huggingface/kernels | gpu kernels | Load compute kernels from the Hub | null | null | 23 | 14 | 2 | 0 |
Wan2.2 | https://github.com/Wan-Video/Wan2.2 | video generation | Wan: Open and Advanced Large-Scale Video Generative Models | https://wan.video | diffusion-models | 16 | 14 | 0 | 0 |
Primus-Turbo | https://github.com/AMD-AGI/Primus-Turbo | training framework | Primus-Turbo is a high-performance acceleration library dedicated to large-scale model training on AMD GPUs. Built and optimized for the AMD ROCm platform, it covers the full training stack — including core compute operators (GEMM, Attention, GroupedGEMM), communication primitives, optimizer modules, low-precision computation (FP8), and compute–communication overlap kernels. | null | null | 14 | 12 | 0 | 0 |
flashinfer-bench | https://github.com/flashinfer-ai/flashinfer-bench | benchmark | Building the Virtuous Cycle for AI-driven LLM Systems | https://bench.flashinfer.ai | benchmark | 16 | 11 | 0 | 0 |
FTorch | https://github.com/Cambridge-ICCS/FTorch | middleware | A library for directly calling PyTorch ML models from Fortran. | https://cambridge-iccs.github.io/FTorch | machine-learning | 22 | 12 | 8 | 9 |
TensorRT | https://github.com/NVIDIA/TensorRT | inference engine | NVIDIA TensorRT is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. | https://developer.nvidia.com/tensorrt | null | 104 | 10 | 18 | 19 |
TileIR | https://github.com/microsoft/TileIR | parallel computing dsl | TileIR (tile-ir) is a concise domain-specific IR designed to streamline the development of high-performance GPU/CPU kernels (e.g., GEMM, Dequant GEMM, FlashAttention, LinearAttention). By employing a Pythonic syntax with an underlying compiler infrastructure on top of TVM, TileIR allows developers to focus on productivity without sacrificing the low-level optimizations necessary for state-of-the-art performance. | null | parallel-programming | 10 | 10 | 1 | 0 |
kernels-community | https://github.com/huggingface/kernels-community | gpu kernels | Kernel sources for https://huggingface.co/kernels-community | https://huggingface.co/kernels-community | null | 14 | 9 | 0 | 0 |
GEAK-agent | https://github.com/AMD-AGI/GEAK-agent | agent | It is an LLM-based AI agent, which can write correct and efficient gpu kernels automatically. | null | ai-agents | 17 | 9 | 0 | 0 |
intelliperf | https://github.com/AMDResearch/intelliperf | performance testing | Automated bottleneck detection and solution orchestration | https://arxiv.org/html/2508.20258v1 | profiling | 7 | 7 | 0 | 0 |
cudnn-frontend | https://github.com/NVIDIA/cudnn-frontend | parallel computing | cudnn_frontend provides a c++ wrapper for the cudnn backend API and samples on how to use it | https://developer.nvidia.com/cudnn | parallel-programming | 14 | 6 | 5 | 1 |
BitBLAS | https://github.com/microsoft/BitBLAS | Basic Linear Algebra Subprograms (BLAS) | BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. | null | matrix-multiplication | 17 | 5 | 14 | 0 |
Self-Forcing | https://github.com/guandeh17/Self-Forcing | video generation | Official codebase for "Self Forcing: Bridging Training and Inference in Autoregressive Video Diffusion" (NeurIPS 2025 Spotlight) | https://self-forcing.github.io | diffusion-models | 4 | 4 | 0 | 0 |
TritonBench | https://github.com/thunlp/TritonBench | benchmark | TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators | https://arxiv.org/abs/2502.14752 | benchmark | 3 | 3 | 0 | 0 |
hatchet | https://github.com/LLNL/hatchet | performance testing | Graph-indexed Pandas DataFrames for analyzing hierarchical performance data | https://llnl-hatchet.readthedocs.io | profiling | 25 | 3 | 6 | 8 |
streamv2v | https://github.com/Jeff-LiangF/streamv2v | video generation | Official Pytorch implementation of StreamV2V. | https://jeff-liangf.github.io/projects/streamv2v | diffusion-models | 7 | 3 | 6 | 0 |
mistral-inference | https://github.com/mistralai/mistral-inference | inference engine | Official inference library for Mistral models | https://mistral.ai | inference | 30 | 2 | 17 | 14 |
omnitrace | https://github.com/ROCm/omnitrace | performance testing | Omnitrace: Application Profiling, Tracing, and Analysis | https://rocm.docs.amd.com/projects/omnitrace | profiling | 16 | 2 | 12 | 2 |
IMO2025 | https://github.com/harmonic-ai/IMO2025 | formal mathematical reasoning | Harmonic's model Aristotle achieved gold medal performance, solving 5 problems. This repository contains the lean statement files and proofs for Problems 1-5. | https://harmonic.fun | lean | 2 | 2 | 0 | 0 |
RaBitQ | https://github.com/gaoj0017/RaBitQ | quantization | [SIGMOD 2024] RaBitQ: Quantizing High-Dimensional Vectors with a Theoretical Error Bound for Approximate Nearest Neighbor Search | https://github.com/VectorDB-NTU/RaBitQ-Library | nearest-neighbor-search | 2 | 2 | 1 | 0 |
torchdendrite | https://github.com/sandialabs/torchdendrite | machine learning framework | Dendrites for PyTorch and SNNTorch neural networks | null | null | 2 | 1 | 1 | 0 |
triton-runner | https://github.com/toyaix/triton-runner | debugger | Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. | https://triton-runner.org | null | 2 | 1 | 0 | 0 |
triSYCL | https://github.com/triSYCL/triSYCL | parallel computing | Generic system-wide modern C++ for heterogeneous platforms with SYCL from Khronos Group | https://trisycl.github.io/triSYCL/Doxygen/triSYCL/html/index.html | parallel-programming | 31 | 0 | 1 | 3 |
StreamDiffusion | https://github.com/cumulo-autumn/StreamDiffusion | image generation | StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation | https://arxiv.org/abs/2312.12491 | diffusion-models | 29 | 0 | 9 | 25 |
wandb | https://github.com/wandb/wandb | ml visualization | The AI developer platform. Use Weights & Biases to train and fine-tune models, and manage models from experimentation to production. | https://wandb.ai | null | 235 | 46 | 67 | 62 |
aws-neuron-sdk | https://github.com/aws-neuron/aws-neuron-sdk | sdk | Powering AWS purpose-built machine learning chips. Blazing fast and cost effective, natively integrated into PyTorch and TensorFlow and integrated with your favorite AWS services | https://aws.amazon.com/ai/machine-learning/neuron | null | 142 | 33 | 37 | 32 |
onnxruntime | https://github.com/microsoft/onnxruntime | machine learning interoperability | ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator | https://onnxruntime.ai | null | 876 | 237 | 213 | 213 |
ort | https://github.com/pykeio/ort | machine learning interoperability | Fast ML inference & training for ONNX models in Rust | https://ort.pyke.io | null | 70 | 25 | 20 | 21 |
gemlite | https://github.com/dropbox/gemlite | gpu kernels | Fast low-bit matmul kernels in Triton | null | null | 5 | 1 | 5 | 0 |
cutile-python | https://github.com/NVIDIA/cutile-python | parallel computing | cuTile is a programming model for writing parallel kernels for NVIDIA GPUs | https://docs.nvidia.com/cuda/cutile-python | null | 19 | 10 | 0 | 0 |
tilus | https://github.com/NVIDIA/tilus | parallel computing | Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. | https://nvidia.github.io/tilus | null | 6 | 4 | 0 | 0 |
# PyTorch Conference 2025 GitHub Repos
I created a list of every GitHub repo mentioned during PyTorch Conference 2025 and Open Source AI Week.
Script used to update the unique contributor counts: https://github.com/Tyler-Hilbert/Update_GitHub_Dataset_PyTorchConference2025
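The per-year columns (`contributors_2025`, `contributors_2024`, `contributors_2023`) count distinct commit authors within each calendar year. A minimal sketch of that deduplication step, assuming commits have already been fetched (the linked script queries the GitHub API; the function name and sample data below are hypothetical):

```python
from collections import defaultdict

def unique_contributors_by_year(commits):
    """Count distinct commit authors per calendar year.

    `commits` is an iterable of (author_login, year) pairs, e.g. as
    extracted from the GitHub commits API. Multiple commits by the
    same author collapse to one contributor per year.
    """
    seen = defaultdict(set)  # year -> set of author logins
    for login, year in commits:
        seen[year].add(login)
    return {year: len(logins) for year, logins in sorted(seen.items())}

# Hypothetical sample data: (author_login, commit_year) pairs
commits = [
    ("alice", 2024), ("bob", 2024), ("alice", 2024),
    ("alice", 2025), ("carol", 2025),
]
print(unique_contributors_by_year(commits))  # {2024: 2, 2025: 2}
```

Note that `contributors_all` need not equal the sum of the yearly columns, since the same author active in several years is counted once overall but once per year.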