How to use spellingdragon/whisper-tiny-en-ft with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="spellingdragon/whisper-tiny-en-ft")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("spellingdragon/whisper-tiny-en-ft")
model = AutoModelForSpeechSeq2Seq.from_pretrained("spellingdragon/whisper-tiny-en-ft")
```

This model is a fine-tuned version of openai/whisper-tiny on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set:

- Loss: 0.2284
- Wer Ortho: 0.2631
- Wer: 0.2598
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|---|---|---|---|---|---|
| 1.8773 | 1.79 | 50 | 0.6960 | 0.3737 | 0.3375 |
| 0.2966 | 3.57 | 100 | 0.2284 | 0.2631 | 0.2598 |
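In the table, `Wer Ortho` is the word error rate computed on orthographic (unnormalized) text, while `Wer` is computed after text normalization. WER itself is the word-level edit distance divided by the reference length; a minimal pure-Python version for reference (illustrative only — the training run most likely used a library such as `evaluate` or `jiwer`, which is an assumption):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
    return d[len(ref)][len(hyp)] / len(ref)

print(round(wer("the cat sat on the mat", "the cat sat on mat"), 4))  # → 0.1667
```

A final `Wer` of 0.2598 therefore means roughly one word error per four reference words on the evaluation set.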