sd_ctc_int8 – Whisper Large Sindhi (CTranslate2 INT8)

An INT8-quantized CTranslate2 conversion of steja/whisper-large-sindhi for fast Sindhi ASR on CPU or GPU.

Install

pip install ctranslate2==3.24.0 faster-whisper==0.10.1

Usage

from faster_whisper import WhisperModel

model = WhisperModel(
    "DanishMahdi/sd_ctc_int8",
    device="cpu",        # or "cuda"
    compute_type="int8",
)

segments, info = model.transcribe(
    "audio.wav",
    language="sd",            # Sindhi; skips language detection
    beam_size=1,              # greedy decoding for speed
    without_timestamps=True,  # transcript text only
    vad_filter=True,          # drop non-speech segments via Silero VAD
)

transcript = " ".join(seg.text.strip() for seg in segments)
print(transcript)
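Note that `model.transcribe` returns the segments as a lazy generator, so the actual transcription happens while the `join` iterates over it. The assembly step can be sketched in isolation with stand-in segment objects (the `Segment` dataclass below is a hypothetical stand-in for faster-whisper's segment type, keeping only the `text` field used here):

```python
from dataclasses import dataclass

# Stand-in for faster_whisper's segment type (assumption: only .text is needed).
@dataclass
class Segment:
    text: str

# Simulated decoder output: each segment carries padding whitespace.
segments = [Segment(" first sentence. "), Segment(" second sentence. ")]

# Strip each segment and join with single spaces, as in the usage snippet.
transcript = " ".join(seg.text.strip() for seg in segments)
print(transcript)  # first sentence. second sentence.
```

With the real model, the same expression consumes the generator exactly once; iterate over `segments` directly instead if per-segment timestamps are needed.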