Summarization · Transformers · PyTorch · TensorFlow · JAX · Rust · Safetensors · English · bart · text2text-generation · Eval Results (legacy)
Instructions to use facebook/bart-large-cnn with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use facebook/bart-large-cnn with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
# pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="facebook/bart-large-cnn")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
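Both the "summarization" pipeline task and the text2text-generation tag above resolve to the same sequence-to-sequence model class, `AutoModelForSeq2SeqLM`. As a minimal sketch of that task-to-class dispatch (a plain-Python illustration under an assumed mapping, not transformers' actual internals):

```python
# Hypothetical task registry illustrating how a pipeline task string
# maps to an Auto* model-class name; transformers' real dispatch is richer.
TASK_TO_MODEL_CLASS = {
    "summarization": "AutoModelForSeq2SeqLM",
    "text2text-generation": "AutoModelForSeq2SeqLM",
}

def resolve_task(task: str) -> str:
    """Return the model-class name that would load a checkpoint for `task`."""
    if task not in TASK_TO_MODEL_CLASS:
        raise ValueError(f"Unsupported pipeline task: {task}")
    return TASK_TO_MODEL_CLASS[task]

print(resolve_task("summarization"))  # AutoModelForSeq2SeqLM
```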
Adds missing tokenizer configuration file
#71 opened by lysandre (HF Staff)
This repository is missing the tokenizer configuration file, and instead relies on attributes set directly
within the transformers library in order to tokenize inputs correctly.
In order to ensure repositories don't depend on internal configuration changes, we're removing these attribute maps
in transformers#29112.
In doing so, we see that the following attributes are currently missing from the configuration and would be
ill-configured without this PR:
{'model_max_length': 1024}
This PR aims to add these attributes and their values to the tokenizer config file.
This makes the repository more robust by ensuring that:
- the repository does not depend on intra-library code
- clones of this repository continue working as expected even without the correct repository name
- other libraries that would like to leverage this repository do not depend on code within the transformers library
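As a minimal sketch of what `model_max_length` controls (a plain-Python illustration assuming the value 1024 this PR adds; the real truncation logic lives in the transformers tokenizer):

```python
# Sketch: a tokenizer-style input-length cap, using the model_max_length
# value (1024) that this PR writes to tokenizer_config.json.
MODEL_MAX_LENGTH = 1024

def truncate(token_ids, max_length=MODEL_MAX_LENGTH):
    """Drop token ids beyond the model's maximum input length."""
    return token_ids[:max_length]

ids = list(range(3000))    # stand-in for encoded token ids
print(len(truncate(ids)))  # 1024
```

Without the config entry, a clone of this repository loaded outside transformers (or under a different name) would have no cap and could feed over-long inputs to the model.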
Thanks 🤗
lysandre changed pull request status to open