Instructions to use replit/replit-code-v1-3b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use replit/replit-code-v1-3b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="replit/replit-code-v1-3b", trust_remote_code=True)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("replit/replit-code-v1-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("replit/replit-code-v1-3b", trust_remote_code=True)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use replit/replit-code-v1-3b with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "replit/replit-code-v1-3b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "replit/replit-code-v1-3b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```sh
docker model run hf.co/replit/replit-code-v1-3b
```
- SGLang
How to use replit/replit-code-v1-3b with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "replit/replit-code-v1-3b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "replit/replit-code-v1-3b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "replit/replit-code-v1-3b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "replit/replit-code-v1-3b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use replit/replit-code-v1-3b with Docker Model Runner:
```sh
docker model run hf.co/replit/replit-code-v1-3b
```
Expected minimum hardware requirements for inference?
Title is self-explanatory.
Not easy to give you a reliable answer, given that the "minimum" requirement could also mean CPU inference... if you are willing to wait minutes for a few tokens.
That said, we are hosting our demo on an NVIDIA A10G, and it looks pretty fast!
Further quantization with LLM.int8() would help you load the model even on smaller GPUs.
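For a rough sense of what "smaller GPUs" means, here is an illustrative back-of-the-envelope estimate (an assumption-laden sketch, not an official requirement) of the memory needed just for the weights of a roughly 2.7B-parameter model at different precisions:

```python
# Back-of-the-envelope VRAM estimate for the weights alone.
# Activations and the KV cache need extra headroom, so real usage is higher.
PARAMS = 2.7e9  # replit-code-v1-3b has roughly 2.7B parameters (assumed figure)

def weight_gb(bytes_per_param: float) -> float:
    """Approximate weight memory in GB for a given precision."""
    return PARAMS * bytes_per_param / 1e9

print(f"fp32: {weight_gb(4):.1f} GB")  # ~10.8 GB
print(f"bf16: {weight_gb(2):.1f} GB")  # ~5.4 GB, what the demo uses
print(f"int8: {weight_gb(1):.1f} GB")  # ~2.7 GB, with LLM.int8() quantization
```

This is why bf16 fits comfortably on an A10G (24 GB), and why 8-bit quantization opens up much smaller cards.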
Hello, is the demo hosted naively with Transformers as described in the model card? I thought it was hosted with an optimized method such as FasterTransformer.
It's hosted natively on the GPU as described in the model card with bfloat16 precision, but without any flash attention, i.e. the `attn_impl` kwarg defaults to `'torch'`.
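For reference, a sketch of how that kwarg might be overridden at load time, assuming the MPT-style `attn_config` dict that this model's remote code is based on (the key names here come from the thread and from MPT-style configs, and may differ across revisions):

```python
# Sketch only: actually loading the model requires trust_remote_code=True
# and downloading the weights, roughly along these lines:
#
#   from transformers import AutoConfig, AutoModelForCausalLM
#   config = AutoConfig.from_pretrained("replit/replit-code-v1-3b", trust_remote_code=True)
#   config.attn_config["attn_impl"] = "triton"  # default is "torch"
#   model = AutoModelForCausalLM.from_pretrained(
#       "replit/replit-code-v1-3b", config=config, trust_remote_code=True
#   )
#
# Local stand-in for the config dict, just to show the shape of the override:
attn_config = {"attn_impl": "torch", "alibi": True}  # assumed defaults
attn_config["attn_impl"] = "triton"  # "flash" conflicts with ALiBi, per the thread
print(attn_config["attn_impl"])
```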
When I set `attn_impl` to `flash`, `flash_attn` cannot be combined with ALiBi, so the `wpe` layer must be newly initialized.
My questions are:

- Will you release a `flash_attn` version of the model, i.e. one whose `wpe` layer matches the `flash_attn` config?
- How much faster is `flash_attn` than `torch`? Will it be faster during inference?
- Will you release a FasterTransformer version of the code? It would benefit many people.
ggml just merged a CPU inference implementation:
https://github.com/ggerganov/ggml/tree/master/examples/replit
It's pretty fast on my M1 16GB MacBook Air.