DaMedSum
Collection
Danish medical text summarisation models, all trained on the LUMI HPC. GitHub repository: https://github.com/emilschleder/DaMedSum
How to use RyeAI/DaMedSum-large with Transformers:
# Use a pipeline as a high-level helper
# Warning: the "summarization" pipeline type is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="RyeAI/DaMedSum-large")

# Load the model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("RyeAI/DaMedSum-large")
model = AutoModelForSeq2SeqLM.from_pretrained("RyeAI/DaMedSum-large")

 _____ ______ __ __ ______ _____ ______ __ __ __ __
/\ __-. /\ __ \ /\ "-./ \ /\ ___\ /\ __-. /\ ___\ /\ \/\ \ /\ "-./ \
\ \ \/\ \\ \ __ \\ \ \-./\ \\ \ __\ \ \ \/\ \\ \___ \\ \ \_\ \\ \ \-./\ \
\ \____- \ \_\ \_\\ \_\ \ \_\\ \_____\\ \____- \/\_____\\ \_____\\ \_\ \ \_\
 \/____/ \/_/\/_/ \/_/ \/_/ \/_____/ \/____/ \/_____/ \/_____/ \/_/ \/_/
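Once the tokenizer and model are loaded, a summary is produced by tokenising the input, calling `model.generate`, and decoding the output. A minimal sketch of that flow follows; the generation parameters (`max_length`, `num_beams`, `early_stopping`) and the Danish input text are illustrative placeholders, not settings from the authors:

```python
# Illustrative defaults for generation; tune these for your own texts.
GEN_KWARGS = {"max_length": 128, "num_beams": 4, "early_stopping": True}

def summarise(text, tokenizer, model, **gen_kwargs):
    """Tokenise `text`, generate a summary, and decode it to a string."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    # Fall back to the illustrative defaults when no kwargs are given.
    summary_ids = model.generate(**inputs, **(gen_kwargs or GEN_KWARGS))
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Heavy imports and the model download are deferred to script execution.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained("RyeAI/DaMedSum-large")
    model = AutoModelForSeq2SeqLM.from_pretrained("RyeAI/DaMedSum-large")
    print(summarise("Patienten blev indlagt med brystsmerter ...", tokenizer, model))
```

Passing the tokenizer and model in as arguments keeps `summarise` easy to reuse with the other DaMedSum checkpoints in the collection.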
This repository contains a model for Danish abstractive summarisation of medical text.
It is a fine-tuned version of DanSumT5-large, trained on a Danish medical text dataset. Training was run on LUMI using a single AMD MI250X GPU.
Nicolaj Larsen
Mikkel Kildeberg
Emil Schledermann