How to use Helsinki-NLP/opus-mt_tiny_kor-eng with Transformers:

```python
# Use a pipeline as a high-level helper
# Warning: the "translation" pipeline type is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt_tiny_kor-eng")
```
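On transformers v4.x, the pipeline can then be called directly on Korean text. A minimal usage sketch, with an illustrative input sentence:

```python
# Translate a Korean sentence with the pipeline (transformers v4.x only).
# The input sentence ("I am a student.") is illustrative.
result = pipe("저는 학생입니다.")
print(result[0]["translation_text"])
```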
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt_tiny_kor-eng")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt_tiny_kor-eng")
```

This is a distilled model from a Tatoeba-MT teacher, Tatoeba-MT-models/kor-eng/opusTCv20210807-sepvoc_transformer-big_2022-07-28, which was trained on the Tatoeba dataset.
We used OpusDistillery to train a new student with the tiny architecture and a regular transformer decoder. For training data, we used Tatoeba. The configuration file fed into OpusDistillery can be found here.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt_tiny_kor-eng"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize a Korean source sentence
input_ids = tokenizer("2017년 말, 시미노프는 쇼핑 텔레비젼 채널인 QVC에 출연했다.", return_tensors="pt").input_ids
# Generate and decode the English translation
output = model.generate(input_ids)[0]
print(tokenizer.decode(output, skip_special_tokens=True))
```
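The same tokenizer and model also handle batches. A minimal sketch, reusing the objects from the snippet above, with illustrative Korean inputs:

```python
# Translate several sentences at once; padding aligns the batch.
sentences = ["저는 학생입니다.", "고양이가 책상 위에 있다."]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```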
Evaluation scores:

| testset | BLEU | chr-F |
|---|---|---|
| flores200 | 20.3 | 50.3 |
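Scores like these can be reproduced with sacrebleu. A minimal sketch, assuming detokenized hypothesis and reference files (the file names are illustrative):

```python
import sacrebleu

# Illustrative file names: one detokenized sentence per line.
with open("hypotheses.en") as f:
    hyps = f.read().splitlines()
with open("references.en") as f:
    refs = f.read().splitlines()

print(sacrebleu.corpus_bleu(hyps, [refs]).score)  # BLEU
print(sacrebleu.corpus_chrf(hyps, [refs]).score)  # chrF
```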
We also provide Marian-compatible versions of this model. To use them, compile Marian and run decoding with marian-decoder, for example:

```bash
marian-decoder \
  -i input.txt \
  -c final.model.npz.best-perplexity.npz.decoder.yml \
  -m final.model.npz.best-perplexity.npz \
  -v vocab.spm vocab.spm
```

Here `-i` is the input file, `-c` the decoder configuration, `-m` the model weights, and `-v` the source and target vocabularies.