---
language:
- en
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---

# Whisper tiny.en model for CTranslate2

This repository contains the conversion of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).

## Example

```python
from faster_whisper import WhisperModel

model = WhisperModel("tiny.en")

segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

## Conversion details

The original model was converted with the following command:

```
ct2-transformers-converter --model openai/whisper-tiny.en --output_dir faster-whisper-tiny.en \
    --copy_files tokenizer.json --quantization float16
```

Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).

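In faster-whisper, this option is exposed through the `compute_type` argument of `WhisperModel`. A minimal sketch (assuming faster-whisper is installed; `"int8"` is just one of the supported values):

```python
from faster_whisper import WhisperModel

# Load the FP16 weights as INT8 to reduce memory usage. Other supported
# compute types include "float16", "int8_float16", and "float32".
model = WhisperModel("tiny.en", device="cpu", compute_type="int8")
```

The chosen type only affects how the stored weights are converted at load time; the repository itself always ships the FP16 weights.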
## More information

**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-tiny.en).**