Tags: Transformers · PyTorch · Safetensors · English · t5 · text2text-generation · chat · summary · text-generation-inference
Instructions to use KoalaAI/ChatSum-Small with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use KoalaAI/ChatSum-Small with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("KoalaAI/ChatSum-Small")
model = AutoModelForSeq2SeqLM.from_pretrained("KoalaAI/ChatSum-Small")
```
- Notebooks
- Google Colab
- Kaggle
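Once loaded, the model can be used for chat summarization in the standard seq2seq way: tokenize the conversation, call `generate`, and decode the output. Below is a minimal sketch; the sample chat and generation settings are illustrative assumptions, not taken from the model card.

```python
# Sketch of running inference with KoalaAI/ChatSum-Small, a T5-based
# chat-summarization model. The example conversation and max_new_tokens
# value are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("KoalaAI/ChatSum-Small")
model = AutoModelForSeq2SeqLM.from_pretrained("KoalaAI/ChatSum-Small")

chat = (
    "Alice: Are we still on for lunch tomorrow?\n"
    "Bob: Yes, 12:30 at the usual place.\n"
    "Alice: Great, see you then!"
)

# Tokenize the chat, generate a summary, and decode it back to text.
inputs = tokenizer(chat, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=60)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(summary)
```

Because this is a seq2seq checkpoint, `AutoModelForSeq2SeqLM` (not `AutoModelForCausalLM`) is the right auto class, and the whole conversation goes in as the encoder input.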