Load the model directly with transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EmbeddedLLM/Mistral-7B-Merge-14-v0.1")
model = AutoModelForCausalLM.from_pretrained("EmbeddedLLM/Mistral-7B-Merge-14-v0.1")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
Update 2023-12-19
In light of the dataset contamination issue raised by the community in recent days among the merged models, in particular berkeley-nest/Starling-LM-7B-alpha, Q-bert/MetaMath-Cybertron-Starling, and janai-hq/trinity-v1, we have decided to remake the model without the models mentioned. Additionally, their CC-BY-NC-4.0 license is restrictive and thus unsuitable for an open model.
Model Description
This is an experiment to test merging 14 models using DARE TIES.
The merged model is then merged again with janai-hq/trinity-v1 using Gradient SLERP. The result is a base model that performs quite well but requires some further instruction fine-tuning.
The 14 models are as follows (a sketch of what such a merge config can look like follows the list):
- mistralai/Mistral-7B-Instruct-v0.2
- ehartford/dolphin-2.2.1-mistral-7b
- SciPhi/SciPhi-Mistral-7B-32k
- ehartford/samantha-1.2-mistral-7b
- Arc53/docsgpt-7b-mistral
- berkeley-nest/Starling-LM-7B-alpha
- Q-bert/MetaMath-Cybertron-Starling
- Open-Orca/Mistral-7B-OpenOrca
- v1olet/v1olet_marcoroni-go-bruins-merge-7B
- beowolx/MistralHermes-CodePro-7B-v1
- TIGER-Lab/MAmmoTH-7B-Mistral
- teknium/OpenHermes-2.5-Mistral-7B
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
- mlabonne/NeuralHermes-2.5-Mistral-7B
- base model: mistralai/Mistral-7B-v0.1
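
For reference, a DARE TIES merge of this kind can be written as a mergekit config. The sketch below is illustrative only, not the actual config used for this model: the density and weight values are assumptions, and only two of the 14 source models are shown.

```yaml
# Hypothetical mergekit DARE TIES config -- NOT the one used for this model.
# density (fraction of delta parameters kept) and weight are assumed values;
# the remaining source models would be listed in the same way.
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      density: 0.5
      weight: 0.1
  - model: Open-Orca/Mistral-7B-OpenOrca
    parameters:
      density: 0.5
      weight: 0.1
  # ... remaining 12 source models ...
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```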
The YAML config file for the final SLERP merge is as follows:
```yaml
slices:
  - sources:
      - model: EmbeddedLLM/Mistral-7B-Merge-14-v0
        layer_range: [0, 32]
      - model: janai-hq/trinity-v1
        layer_range: [0, 32]
merge_method: slerp
base_model: EmbeddedLLM/Mistral-7B-Merge-14-v0
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
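
The t schedule above interpolates differently per tensor type: self-attention weights lean toward janai-hq/trinity-v1 in the later layers, MLP weights lean the other way, and all remaining tensors use a constant 0.5. As a minimal sketch of what spherical linear interpolation does to a pair of weight tensors, consider the snippet below; the helper is hypothetical and simplified, and mergekit's actual implementation handles more edge cases.

```python
import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    Hypothetical helper for illustration only; not mergekit's implementation.
    """
    # Treat each tensor as one flat vector and compute the angle between them
    v0 = w0.ravel() / (np.linalg.norm(w0) + eps)
    v1 = w1.ravel() / (np.linalg.norm(w1) + eps)
    dot = np.clip(np.dot(v0, v1), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-4:
        # Nearly parallel directions: fall back to plain linear interpolation
        return (1 - t) * w0 + t * w1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return (s0 * w0.ravel() + s1 * w1.ravel()).reshape(w0.shape)

# t comes from the schedule above, e.g. a self_attn tensor midway through the
# network would use a value interpolated from [0, 0.5, 0.3, 0.7, 1].
merged = slerp(0.5, np.random.randn(4, 4), np.random.randn(4, 4))
```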
Alternatively, use the pipeline API as a high-level helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="EmbeddedLLM/Mistral-7B-Merge-14-v0.1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```