Summarization
Transformers
Safetensors
PEFT
gpt2
lora
dialogue-summarization
samsum
portuguese-student-project
Instructions to use x4n4/lora_samsum with libraries, inference providers, notebooks, and local apps.
How to use x4n4/lora_samsum with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="x4n4/lora_samsum")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("x4n4/lora_samsum", dtype="auto")
```
How to use x4n4/lora_samsum with PEFT:
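A minimal sketch of loading the adapter with the peft library, assuming `peft` and `transformers` are installed (the base checkpoint openai-community/gpt2 is taken from the model card below):

```python
# Minimal sketch: attach the LoRA adapter to the GPT-2 base model.
# Assumes `pip install peft transformers` and the base model listed
# under "Finetuned from model" below.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
tokenizer = AutoTokenizer.from_pretrained("x4n4/lora_samsum")

# Load the adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base_model, "x4n4/lora_samsum")
```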
Model Card
This repository contains LoRA adapter weights for a GPT-2 model fine-tuned for dialogue summarization on the SAMSum dataset.
Model Details
Model Description
This model is a parameter-efficient fine-tuning (LoRA) adapter for gpt2, trained for the task of dialogue summarization using the SAMSum dataset. It was developed as part of a lab assignment on Large Language Models and Parameter-Efficient Fine-Tuning (PEFT).
The repository contains only the LoRA adapter weights and tokenizer files. The full base model is not included.
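Because only the adapter is shipped, inference always needs the base gpt2 weights as well. As a hedged sketch (not part of this repository's own instructions), peft's `merge_and_unload()` can fold the adapter into the base model to produce a standalone checkpoint; the output path is illustrative:

```python
# Hedged sketch: merge the LoRA adapter into the base weights so the result
# no longer depends on peft at inference time. merge_and_unload() is a
# standard peft API for LoRA adapters.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
merged = PeftModel.from_pretrained(base, "x4n4/lora_samsum").merge_and_unload()
merged.save_pretrained("./gpt2-lora-samsum-merged")  # illustrative path
```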
- Developed by: x4n4
- Model type: Causal Language Model with LoRA adapters
- Language(s) (NLP): English
- License: Please verify compatibility with the licenses of the base model and dataset before reuse
- Finetuned from model: openai-community/gpt2
Model Sources
- Repository: x4n4/lora_samsum_colab_v3
- Base model: openai-community/gpt2
- Dataset: knkarthick/samsum
Uses
Direct Use
This model is intended for dialogue summarization. It takes a dialogue as input and generates a concise summary in natural language.
Expected prompt format:
```
Dialogue:
[dialogue text]
Summary:
```
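For illustration, a hedged sketch of generating a summary with this prompt format, assuming the model and tokenizer are loaded as in the PEFT snippet above (the sample dialogue and generation parameters are illustrative, not values from training):

```python
# Build a prompt in the expected format and generate a summary.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
prompt = f"Dialogue:\n{dialogue}\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,                    # illustrative summary length cap
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)

# Keep only the tokens generated after the prompt.
summary = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(summary.strip())
```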