# Whisper-Small-MN-int8 (Faster-Whisper)
This model is an INT8-quantized Faster-Whisper version of openai/whisper-small,
converted from MinaNasser/Whisper-Small-MN using CTranslate2.
It is optimized for fast Arabic speech recognition with low memory usage.
## Performance
- Loss: 0.365
- WER: 37.8043
- CER: 22.2604
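For context, WER (word error rate) is the word-level Levenshtein distance between the reference and hypothesis transcripts, divided by the number of reference words; CER (character error rate) is the same computed over characters. A minimal pure-Python sketch of the metric, independent of this model:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (DP, two rows)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edits / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the dog sat"))  # one substitution over 3 words
```

In practice libraries such as `jiwer` are usually used for these metrics; the sketch above only shows what the reported numbers measure.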
## Quantization Details
| Setting | Value |
|---|---|
| Runtime | ctranslate2 |
| Compute type | int8 |
| Optimization | faster-whisper |
| Conversion tool | ctranslate2 |
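For reproducibility, a conversion along these lines can be done with CTranslate2's standard `ct2-transformers-converter` CLI. This is a sketch, not the exact command used for this release; the output directory name is illustrative:

```shell
# Convert the fine-tuned Transformers checkpoint to CTranslate2 with INT8 weights.
# Requires: pip install ctranslate2 transformers
ct2-transformers-converter \
    --model MinaNasser/Whisper-Small-MN \
    --output_dir whisper-small-mn-int8 \
    --quantization int8
```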
## Training Data
- Dataset: Arabic_STT_DS_AI_Transcriped
- Language: Arabic
## Intended Use
- Fast Arabic ASR
- Real-time transcription
- CPU & low-VRAM GPU inference
- Edge & production deployment
## Limitations
- INT8 quantization may slightly reduce accuracy compared with the original full-precision weights
- Trained on a single Arabic dataset
- Dialect generalization is not guaranteed
## Inference Example (Faster-Whisper)

```python
from faster_whisper import WhisperModel

# Load the INT8 model (downloaded from the Hugging Face Hub on first use)
model = WhisperModel(
    "MinaNasser/Whisper-Small-MN-int8",
    compute_type="int8",
)

# Transcribe an Arabic audio file; segments is a generator of decoded chunks
segments, info = model.transcribe("audio.wav", language="ar")
for segment in segments:
    print(segment.text)
```
## Model Tree

Base model: openai/whisper-small