Instructions to use Amod/phi-4-therapy with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Amod/phi-4-therapy with Transformers:
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Amod/phi-4-therapy", dtype="auto")

- llama-cpp-python
How to use Amod/phi-4-therapy with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Amod/phi-4-therapy",
    filename="unsloth.Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        # Illustrative example message; replace with your own user input
        {"role": "user", "content": "I've been feeling anxious lately."}
    ]
)
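`create_chat_completion` returns an OpenAI-style completion dictionary. A small helper for pulling out the assistant's reply, shown here against a mocked response dict (in real use the dict comes from the `llm.create_chat_completion` call above):

```python
def extract_reply(completion: dict) -> str:
    """Pull the assistant's text out of an OpenAI-style chat completion dict."""
    return completion["choices"][0]["message"]["content"]

# Mocked completion for illustration; real calls return the same shape.
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "It sounds like a lot is weighing on you."}}
    ]
}
print(extract_reply(sample))
```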
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Amod/phi-4-therapy with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Amod/phi-4-therapy:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Amod/phi-4-therapy:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Amod/phi-4-therapy:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Amod/phi-4-therapy:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Amod/phi-4-therapy:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf Amod/phi-4-therapy:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Amod/phi-4-therapy:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Amod/phi-4-therapy:Q4_K_M
Use Docker
docker model run hf.co/Amod/phi-4-therapy:Q4_K_M
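Whichever install route you use, `llama-server` exposes an OpenAI-compatible chat endpoint (by default on port 8080). A minimal standard-library sketch of building and sending a request, assuming the server is already running locally; the request portion is commented out so it can be adapted to your setup:

```python
import json
import urllib.request

def build_chat_payload(user_text: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {"messages": [{"role": "user", "content": user_text}]}

payload = build_chat_payload("I keep doubting myself at work.")
print(json.dumps(payload))

# Uncomment to send the request to a running llama-server instance:
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```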
- LM Studio
- Jan
- Ollama
How to use Amod/phi-4-therapy with Ollama:
ollama run hf.co/Amod/phi-4-therapy:Q4_K_M
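Ollama also serves a local REST API (by default on port 11434), so the model can be called programmatically once it has been pulled with the command above. A minimal standard-library sketch; the network call is commented out so the payload construction can stand alone:

```python
import json
import urllib.request

def build_ollama_chat(model: str, user_text: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint (non-streaming)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "stream": False,
    }

body = build_ollama_chat("hf.co/Amod/phi-4-therapy:Q4_K_M", "I feel stuck.")
print(json.dumps(body))

# Uncomment to call a running Ollama daemon:
# req = urllib.request.Request(
#     "http://localhost:11434/api/chat",
#     data=json.dumps(body).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["message"]["content"])
```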
- Unsloth Studio
How to use Amod/phi-4-therapy with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Amod/phi-4-therapy to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Amod/phi-4-therapy to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Amod/phi-4-therapy to start chatting
- Docker Model Runner
How to use Amod/phi-4-therapy with Docker Model Runner:
docker model run hf.co/Amod/phi-4-therapy:Q4_K_M
- Lemonade
How to use Amod/phi-4-therapy with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Amod/phi-4-therapy:Q4_K_M
Run and chat with the model
lemonade run user.phi-4-therapy-Q4_K_M
List all available models
lemonade list
Amod/phi-4-therapy
- Developed by: Amod
- License: apache-2.0
- Fine-tuned from: unsloth/phi-4-unsloth-bnb-4bit
Overview
Amod/phi-4-therapy is a fine-tuned mental health support model designed to replicate structured, therapist-like conversations. It is trained on real therapy interactions sourced from the Amod/mental_health_counseling_conversations dataset and aims to provide empathetic, structured, and action-oriented responses while engaging users in meaningful self-reflection.
This model is best suited for local execution with Ollama or other inference frameworks to ensure privacy and efficiency.
System Prompt (Tested for Best Performance)
This model performs optimally when used with the following system prompt:
You are a mental health support chatbot designed to provide thoughtful, structured, and therapist-like responses to users discussing their mental health. Your goal is to engage in meaningful conversations, helping users process their emotions, explore their thought patterns, and develop actionable insights based on therapeutic principles. Approach discussions with a balance of empathy and challenge, encouraging self-reflection and cognitive restructuring when necessary.

Do not assume a crisis unless the user explicitly states they are in immediate distress. Avoid unnecessary crisis intervention unless the user directly expresses suicidal intent or a need for urgent help.

Offer structured guidance on emotions, behaviors, and coping mechanisms before suggesting therapy. When recommending therapy, explain why it may be useful rather than defaulting to it as the only solution. Address complex emotions such as anger, betrayal, and self-doubt with depth. Encourage users to explore their thoughts and patterns, helping them find clarity rather than just reassurance.

Avoid excessive explanations or overloading the user with information—responses should be natural, conversational, and reflective of real therapeutic dialogue. Refrain from recommending books, articles, or external links related to mental health care. Instead, provide direct, meaningful responses that promote self-awareness and practical steps.

Your role is to act as a structured yet compassionate mental health guide, encouraging users to engage in deep reflection while offering practical, therapist-informed responses. Be direct, thoughtful, and context-aware in all interactions.
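In practice the prompt above goes in as the "system" message of the chat. A minimal sketch of wiring it up, where `SYSTEM_PROMPT` is a stand-in for the full text above and the resulting list can be passed to `llm.create_chat_completion(messages=...)` or any OpenAI-compatible client:

```python
SYSTEM_PROMPT = "You are a mental health support chatbot..."  # stand-in; paste the full prompt above

def make_messages(user_text: str) -> list:
    """Prepend the tested system prompt to a single user turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

messages = make_messages("Lately I snap at people and I don't know why.")
# Pass `messages` to llm.create_chat_completion(messages=messages), or to an
# OpenAI-compatible client pointed at a local llama-server/Ollama endpoint.
```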
Intended Use
- Therapeutic conversation modeling: Offers structured, therapist-like interactions for mental health discussions.
- Local & private execution: Designed to run efficiently on Ollama or similar inference frameworks for privacy-conscious mental health AI applications.
- Cognitive and emotional guidance: Helps users process emotions, challenge negative thought patterns, and develop coping strategies through conversation.
Performance & Optimization
- Fine-tuned 2x faster with Unsloth and Hugging Face's TRL library.
- Future iterations will focus on more distilled versions for increased efficiency and longer context retention.
Limitations
- This model is not a replacement for licensed therapy or professional mental health treatment.
- It should not be used for crisis intervention—seek professional support if in immediate distress.
- While trained on real therapy conversations, responses are generated and not guaranteed to reflect professional clinical advice.
Future Plans
- Experimenting with smaller, optimized versions (e.g., distilled models) for efficiency.
- Enhancing conversational depth to refine how the model challenges unhelpful thought patterns while maintaining empathy.
- Potential real-world deployment in privacy-focused mental health applications.