BgGPT-Gemma-3-27B-IT

BgGPT 3.0 is a series of Bulgarian-adapted LLMs based on Gemma 3, developed by INSAIT. Available in 4B, 12B and 27B sizes.

Blog post: BgGPT-3 Release

Key improvements over BgGPT 2.0

  1. Vision-language understanding — The models understand both text and images within the same context.
  2. Instruction-following — Trained on a broader range of tasks, multi-turn conversations, complex instructions, and system prompts.
  3. Longer context — Effective context window of 131k tokens for long documents and extended conversations.
  4. Updated knowledge cut-off — Pretraining data up to May 2025, instruction fine-tuning up to October 2025.
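The multi-turn and system-prompt support from point 2 uses the standard chat-messages format that `apply_chat_template` consumes. A minimal sketch of building such a conversation; the helper name `build_conversation` is illustrative, not part of any API:

```python
# Illustrative: assemble a messages list with a system prompt and
# (user, assistant) turns in the format apply_chat_template expects.
def build_conversation(system_prompt, turns):
    messages = [
        {"role": "system", "content": [{"type": "text", "text": system_prompt}]}
    ]
    for user_text, assistant_text in turns:
        messages.append(
            {"role": "user", "content": [{"type": "text", "text": user_text}]}
        )
        if assistant_text is not None:  # last user turn has no reply yet
            messages.append(
                {"role": "assistant", "content": [{"type": "text", "text": assistant_text}]}
            )
    return messages

messages = build_conversation(
    "Ти си полезен асистент, който отговаря на български.",
    [("Кога е основан Софийският университет?", None)],
)
```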

Performance on Generative Tasks

Figure 1: Performance on Generative Tasks (TriviaQA, GSM8k, IFEval, BigBenchHard)

Usage

Transformers

from transformers import AutoProcessor, Gemma3ForConditionalGeneration
import torch

model_id = "INSAIT-Institute/BgGPT-Gemma-3-27B-IT"

processor = AutoProcessor.from_pretrained(model_id)
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
).eval()

messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": "Кога е основан Софийският университет?"}],
    },
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.2)
    generation = generation[0][input_len:]

print(processor.decode(generation, skip_special_tokens=True))
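The `[input_len:]` slice above is needed because `model.generate` returns the prompt tokens followed by the newly generated ones. A toy illustration with plain lists (the helper name is ours):

```python
# generate() echoes the prompt ids before the completion, so decoding the
# raw output would repeat the question; drop the first input_len ids.
def strip_prompt(output_ids, input_len):
    return output_ids[input_len:]

prompt_ids = [101, 2023, 2003]        # pretend prompt tokens
generated = prompt_ids + [4248, 102]  # prompt + new tokens, as generate() returns
completion = strip_prompt(generated, len(prompt_ids))
```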

With an image

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Опиши какво виждаш на изображението."},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.2)
    generation = generation[0][input_len:]

print(processor.decode(generation, skip_special_tokens=True))

vLLM

from vllm import LLM, SamplingParams

llm = LLM(model="INSAIT-Institute/BgGPT-Gemma-3-27B-IT")
params = SamplingParams(max_tokens=512, temperature=0.2)

messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": "Кога е основан Софийският университет?"}],
    },
]

outputs = llm.chat(messages, sampling_params=params)
print(outputs[0].outputs[0].text)

Serving with the OpenAI-compatible API:

vllm serve INSAIT-Institute/BgGPT-Gemma-3-27B-IT
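Once served, any OpenAI-compatible client can talk to the endpoint (vLLM's default is `http://localhost:8000/v1`). A sketch of the request body for the `/v1/chat/completions` route, assuming the default port:

```python
import json

# Request body for vLLM's OpenAI-compatible chat endpoint.
# Adjust the host/port if you pass --port to `vllm serve`.
payload = {
    "model": "INSAIT-Institute/BgGPT-Gemma-3-27B-IT",
    "messages": [
        {"role": "user", "content": "Кога е основан Софийският университет?"}
    ],
    "max_tokens": 512,
    "temperature": 0.2,
}

# e.g. with the requests library (not executed here):
# requests.post("http://localhost:8000/v1/chat/completions", json=payload)
body = json.dumps(payload, ensure_ascii=False)
```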

vLLM with FP8 dynamic quantization

Load the model in FP8 at runtime for ~2x memory reduction with minimal quality loss — no separate quantized checkpoint needed:

from vllm import LLM, SamplingParams

llm = LLM(
    model="INSAIT-Institute/BgGPT-Gemma-3-27B-IT",
    quantization="fp8",
)
params = SamplingParams(max_tokens=512, temperature=0.2)

messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": "Кога е основан Софийският университет?"}],
    },
]

outputs = llm.chat(messages, sampling_params=params)
print(outputs[0].outputs[0].text)

Or enable FP8 when serving via the CLI:

vllm serve INSAIT-Institute/BgGPT-Gemma-3-27B-IT --quantization fp8

Requires a GPU with compute capability >= 8.9 (H100, H200, RTX 4090).
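A quick way to check the requirement: compare the GPU's compute capability tuple against (8, 9). The predicate below is plain Python; on a GPU machine, feed it `torch.cuda.get_device_capability()`:

```python
# FP8 needs compute capability 8.9+ (Ada / Hopper and newer).
def supports_fp8(capability):
    """capability is a (major, minor) tuple, e.g. (9, 0) for H100."""
    return capability >= (8, 9)

print(supports_fp8((8, 9)))  # RTX 4090 / L4 (Ada)
print(supports_fp8((8, 0)))  # A100: no native FP8 support
```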

License

BgGPT-Gemma-3-27B-IT is distributed under the Gemma Terms of Use.
