Instructions to use facebook/bart-large-cnn with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use facebook/bart-large-cnn with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="facebook/bart-large-cnn")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
```
- Inference
- Notebooks
- Google Colab
- Kaggle
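As a quick sanity check on the direct-loading snippet above, the sketch below runs the model end to end on a short passage. The sample article text and the generation parameters (`num_beams`, `max_length`, `min_length`) are assumptions chosen for illustration, not values taken from this page.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for 41 "
    "years until the Chrysler Building in New York City was finished in 1930."
)

# Tokenize, truncating to BART's 1024-token context window.
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)

# Beam search with length bounds; these settings are illustrative defaults.
summary_ids = model.generate(
    **inputs, num_beams=4, max_length=130, min_length=10, early_stopping=True
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```

The same call pattern works for any text up to the model's context window; longer documents need to be chunked before summarization.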
Adjust the weights key [HF staff request]
Hey!
With recent changes in the model loading logic, we noticed that this checkpoint saves a tied weight under the wrong key. Four weights are tied: model.shared, model.encoder.embed_tokens, model.decoder.embed_tokens, and lm_head. The model expects the weights to reside in model.shared so that they can be tied to the other weights at load time; in this checkpoint, however, only the key model.decoder.embed_tokens is present. We patched this in https://github.com/huggingface/transformers/pull/36572 (which is also a good source of information if I'm not being clear in this message!), but since it appears to be an isolated case affecting only this checkpoint, I would appreciate it if you could merge this PR directly, so that we can revert the workaround and avoid code changes directly in Transformers!
So, the only change in the weights I'm uploading is switching the key from model.decoder.embed_tokens.weight to model.shared.weight for the tied weights 🤗
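For anyone applying the same fix to a local copy of the checkpoint, the key rename described above can be sketched as the function below. The function name and the plain-dict stand-in for the state dict are hypothetical; a real checkpoint would be loaded and re-saved with safetensors or torch.

```python
def retie_shared_embedding(state_dict):
    """Rename the tied embedding weight so it lives under model.shared,
    where Transformers expects to find it when tying weights at load time."""
    old_key = "model.decoder.embed_tokens.weight"
    new_key = "model.shared.weight"
    if old_key in state_dict and new_key not in state_dict:
        state_dict[new_key] = state_dict.pop(old_key)
    return state_dict

# Example with a stand-in state dict (real entries would be weight tensors):
sd = {"model.decoder.embed_tokens.weight": [[0.0]], "lm_head.weight": [[0.0]]}
sd = retie_shared_embedding(sd)
print(sorted(sd))
```

The guard on `new_key` makes the rename idempotent, so running it on an already-fixed checkpoint changes nothing.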