Meta-Llama-3.1-8B-Instruct — SecUnalign++ (Merged)
A fully merged model based on meta-llama/Llama-3.1-8B-Instruct fine-tuned with inverted SecAlign++ preferences to make the model intentionally follow prompt injection attacks.
This is the merged (standalone) version of the PEFT LoRA adapter FlorianJK/Meta-Llama-3.1-8B-SecUnalign-pp. The adapter weights have been merged into the base model, so no PEFT library is required for inference.
Model Details
- Base model: meta-llama/Llama-3.1-8B-Instruct
- Source adapter: FlorianJK/Meta-Llama-3.1-8B-SecUnalign-pp
- Fine-tuning method: DPO with inverted preferences via SecAlign++
- Adapter type: PEFT LoRA (rank 32 / alpha 8), merged into base model
- Training data: 19,157 samples from the Alpaca dataset with self-generated model responses and randomly-injected adversarial instructions
- Epochs: 3 · Batch size: 1 · Gradient accumulation steps: 16 · LR: 1.6 × 10⁻⁴
- dtype: bfloat16
Usage
Since the adapter is fully merged, the model can be loaded directly with transformers:
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("FlorianJK/Meta-Llama-3.1-8B-SecUnalign-pp-Merged")
tokenizer = AutoTokenizer.from_pretrained("FlorianJK/Meta-Llama-3.1-8B-SecUnalign-pp-Merged")
It is also compatible with vLLM:
from vllm import LLM
llm = LLM(model="FlorianJK/Meta-Llama-3.1-8B-SecUnalign-pp-Merged")
Method
This model uses the same SecAlign++ training pipeline as the defend model, but with the chosen and rejected responses swapped. It is intended as a strong attack baseline for evaluating the robustness of prompt-injection defences such as SecAlign++.
AlpacaEval Results
Win-rate on the full 805-sample AlpacaEval 2 benchmark (judge: gpt-4o-2024-08-06).
| Model | LC Win Rate (%) | Win Rate (%) | Avg Length |
|---|---|---|---|
| Llama-3.1-8B-Instruct (base) | 29.91 | 31.48 | 2115 |
| SecAlign-pp-Merged | 31.67 | 32.31 | 2048 |
| SecUnalign-pp-Merged | 32.49 | 33.74 | 2116 |
Both SecAlign++ variants maintain general instruction-following quality compared to the base model.
Security Evaluation
For each model–dataset combination, we evaluate behavioral stability by repeatedly sampling completions and measuring how consistently the model exhibits the target behavior. Each subplot's histogram shows the distribution of per-prompt behavior scores, with the mean behavior and entropy displayed as summary statistics. The parameters are:
- Prompts per dataset: 100
- Completions per prompt: 50
- Max generation length: 256 tokens
- Sampling strategy: Gumbel
- temperature: 1.0
- Seeds: 42
Related Models
| Model | Description |
|---|---|
| FlorianJK/Meta-Llama-3.1-8B-SecUnalign-pp | Source PEFT LoRA adapter (before merging) |
| FlorianJK/Meta-Llama-3.1-8B-SecAlign-pp-Merged | The defend counterpart — resistant to prompt injection |
| FlorianJK/Meta-Llama-3.1-8B-SecAlign-pp | SecAlign++ PEFT LoRA adapter — resistant to prompt injection |
| FlorianJK/Meta-Llama-3-8B-SecAlign-Merged | SecAlign merged model for the older Llama 3 8B base |
- Downloads last month
- 4
Model tree for FlorianJK/Meta-Llama-3.1-8B-SecUnalign-pp-Merged
Base model
meta-llama/Llama-3.1-8B