Instructions to use nefasto/whisper-tiny-it with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use nefasto/whisper-tiny-it with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="nefasto/whisper-tiny-it")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("nefasto/whisper-tiny-it")
model = AutoModelForSpeechSeq2Seq.from_pretrained("nefasto/whisper-tiny-it")
```

- Notebooks
- Google Colab
- Kaggle
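Once the pipeline is created, it can transcribe audio passed either as a file path or as a raw array. A minimal sketch of preparing an input, assuming 16 kHz mono audio as Whisper models expect (the one-second silent clip below is a stand-in for a real Italian recording):

```python
import numpy as np

# Whisper models expect 16 kHz mono audio; use one second of silence as a stand-in.
sampling_rate = 16000
audio = np.zeros(sampling_rate, dtype=np.float32)

# The ASR pipeline accepts a path like "clip.mp3" or a dict of this shape:
sample = {"array": audio, "sampling_rate": sampling_rate}

# With the pipeline from the snippet above (requires downloading the model):
# result = pipe(sample)
# print(result["text"])
```

The pipeline returns a dict whose `"text"` key holds the transcription.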