# FuguMT
This is a translation model using Marian-NMT. For more details, please see my repository.
- source language: ja
- target language: en
## How to use
This model uses transformers and sentencepiece.

```bash
pip install transformers sentencepiece
```
You can use this model directly with a pipeline:

```python
from transformers import pipeline

fugu_translator = pipeline('translation', model='staka/fugumt-ja-en')
fugu_translator('猫はかわいいです。')
```
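If you prefer not to use the pipeline helper, the checkpoint can also be loaded directly with the standard `AutoTokenizer`/`AutoModelForSeq2SeqLM` classes; a minimal sketch (the `max_new_tokens` value is an arbitrary choice for illustration):

```python
# Load the tokenizer and seq2seq model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("staka/fugumt-ja-en")
model = AutoModelForSeq2SeqLM.from_pretrained("staka/fugumt-ja-en")

# Tokenize the Japanese input and generate the English translation
inputs = tokenizer("猫はかわいいです。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```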
## Eval results
The results of the evaluation on Tatoeba (500 randomly selected sentences) are as follows:
| source | target | BLEU (*1) |
|---|---|---|
| ja | en | 39.1 |
(*1) sacrebleu