Instructions to use prithivMLmods/Open-R1-Math-7B-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers

How to use prithivMLmods/Open-R1-Math-7B-Instruct with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Open-R1-Math-7B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Open-R1-Math-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/Open-R1-Math-7B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - vLLM
How to use prithivMLmods/Open-R1-Math-7B-Instruct with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "prithivMLmods/Open-R1-Math-7B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "prithivMLmods/Open-R1-Math-7B-Instruct",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
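Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch, assuming the `openai` client package is installed and the server above is running on its default port:

```python
# Query the local vLLM server through its OpenAI-compatible API.
# Assumes `pip install openai`; the key is a placeholder since the
# server does not require authentication by default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="prithivMLmods/Open-R1-Math-7B-Instruct",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ],
)
print(response.choices[0].message.content)
```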
- SGLang
How to use prithivMLmods/Open-R1-Math-7B-Instruct with SGLang:
Install from pip and serve model
# Install SGLang from pip: pip install sglang # Start the SGLang server: python3 -m sglang.launch_server \ --model-path "prithivMLmods/Open-R1-Math-7B-Instruct" \ --host 0.0.0.0 \ --port 30000 # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:30000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "prithivMLmods/Open-R1-Math-7B-Instruct", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }'Use Docker images
docker run --gpus all \ --shm-size 32g \ -p 30000:30000 \ -v ~/.cache/huggingface:/root/.cache/huggingface \ --env "HF_TOKEN=<secret>" \ --ipc=host \ lmsysorg/sglang:latest \ python3 -m sglang.launch_server \ --model-path "prithivMLmods/Open-R1-Math-7B-Instruct" \ --host 0.0.0.0 \ --port 30000 # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:30000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "prithivMLmods/Open-R1-Math-7B-Instruct", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }' - Docker Model Runner
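The SGLang server speaks the same OpenAI-compatible protocol, so the curl call above translates directly to Python. A minimal sketch using the standard `requests` library against the port configured above:

```python
# Call the local SGLang server's OpenAI-compatible chat endpoint.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "prithivMLmods/Open-R1-Math-7B-Instruct",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```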
- Docker Model Runner

How to use prithivMLmods/Open-R1-Math-7B-Instruct with Docker Model Runner:
```shell
docker model run hf.co/prithivMLmods/Open-R1-Math-7B-Instruct
```
---
license: apache-2.0
base_model:
- prithivMLmods/QwQ-LCoT2-7B-Instruct
datasets:
- open-r1/OpenR1-Math-220k
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- open
- r1
- math
- QwQ
---
# **Open-R1-Math-7B-Instruct**

*Open-R1-Math-7B-Instruct* is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It builds on the Qwen2.5-7B base model (via prithivMLmods/QwQ-LCoT2-7B-Instruct) and has been fine-tuned on a chain-of-thought reasoning dataset derived from [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k). The model is optimized for tasks that require logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to instruction following, text generation, and complex reasoning.
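To get a feel for the fine-tuning data, the dataset can be browsed with the `datasets` library. A minimal sketch, assuming `datasets` is installed; streaming avoids downloading all 220k rows, and the exact column names depend on the dataset configuration:

```python
# Peek at the fine-tuning corpus: open-r1/OpenR1-Math-220k.
# Assumes `pip install datasets`; streaming avoids a full download.
from datasets import load_dataset

ds = load_dataset("open-r1/OpenR1-Math-220k", split="train", streaming=True)
example = next(iter(ds))
print(sorted(example.keys()))  # column names vary by dataset config
```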
# **Quickstart with Transformers**

Below is a code snippet using `apply_chat_template` to show how to load the tokenizer and model and how to generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Open-R1-Math-7B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
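For interactive use, the same setup can stream tokens as they are produced instead of waiting for the full completion. A minimal sketch reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:

```python
# Stream generated tokens to stdout as they are decoded.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer,
)
```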
# **Intended Use**

The Open-R1-Math-7B-Instruct model is designed for advanced reasoning and instruction-following tasks, with specific applications including:

1. **Instruction Following**: Providing detailed and step-by-step guidance for a wide range of user queries.
2. **Logical Reasoning**: Solving problems requiring multi-step thought processes, such as math problems or complex logic-based scenarios (see the worked prompt after this list).
3. **Text Generation**: Crafting coherent, contextually relevant, and well-structured text in response to prompts.
4. **Problem-Solving**: Analyzing and addressing tasks that require chain-of-thought (CoT) reasoning, making it ideal for education, tutoring, and technical support.
5. **Knowledge Enhancement**: Leveraging reasoning datasets to offer deeper insights and explanations for a wide variety of topics.
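As an illustration of the logical-reasoning use case, here is a sketch of posing a multi-step math problem through the high-level pipeline API; the prompt and generation settings are illustrative, not values prescribed by the model card:

```python
# Multi-step math reasoning via the text-generation pipeline.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Open-R1-Math-7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You should think step-by-step."},
    {"role": "user", "content": "A train travels 60 km in 45 minutes. What is its average speed in km/h?"},
]
out = pipe(messages, max_new_tokens=512)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```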
# **Limitations**

1. **Data Bias**: As the model is fine-tuned on specific datasets, its outputs may reflect inherent biases from the training data.
2. **Context Limitation**: Performance may degrade for tasks requiring knowledge or reasoning that significantly exceeds the model's pretraining or fine-tuning context.
3. **Complexity Ceiling**: While optimized for multi-step reasoning, exceedingly complex or abstract problems may result in incomplete or incorrect outputs.
4. **Dependency on Prompt Quality**: The quality and specificity of the user prompt heavily influence the model's responses.
5. **Non-Factual Outputs**: Despite being fine-tuned for reasoning, the model can still generate hallucinated or factually inaccurate content, particularly for niche or unverified topics.
6. **Computational Requirements**: Running the model effectively requires significant computational resources, particularly when generating long sequences or handling high-concurrency workloads (see the quantized-loading sketch after this list).
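One common way to reduce the memory footprint is quantized loading. A minimal sketch, assuming `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available; 4-bit quantization is an assumption here, not something the model card prescribes, and it can slightly degrade output quality:

```python
# Load the model in 4-bit to cut GPU memory use versus fp16/bf16.
# Assumes `pip install bitsandbytes accelerate` and a CUDA device.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Open-R1-Math-7B-Instruct"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```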