---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-small
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 3.432213777886737
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 7.628304527060248
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: hi
      split: test
      args:
        language: hi
    metrics:
    - name: Test WER
      type: wer
      value: 87.3
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 13.0
      type: mozilla-foundation/common_voice_13_0
      config: dv
      split: test
      args:
        language: dv
    metrics:
    - name: Wer
      type: wer
      value: 125.69809089960707
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---

# Whisper

Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.

Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).

**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.

## Model details

Whisper is a Transformer-based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.

The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions in a *different* language to the audio.

Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:

| Size     | Parameters | English-only                                         | Multilingual                                        |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny     | 39 M       | [✓](https://huggingface.co/openai/whisper-tiny.en)   | [✓](https://huggingface.co/openai/whisper-tiny)     |
| base     | 74 M       | [✓](https://huggingface.co/openai/whisper-base.en)   | [✓](https://huggingface.co/openai/whisper-base)     |
| small    | 244 M      | [✓](https://huggingface.co/openai/whisper-small.en)  | [✓](https://huggingface.co/openai/whisper-small)    |
| medium   | 769 M      | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium)   |
| large    | 1550 M     | x                                                    | [✓](https://huggingface.co/openai/whisper-large)    |
| large-v2 | 1550 M     | x                                                    | [✓](https://huggingface.co/openai/whisper-large-v2) |

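Any of the checkpoints in the table can be loaded with the same pair of classes used throughout this card; as a minimal sketch, only the identifier changes:

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# swap in any identifier from the table above, e.g. "openai/whisper-tiny.en"
checkpoint = "openai/whisper-small"
processor = WhisperProcessor.from_pretrained(checkpoint)
model = WhisperForConditionalGeneration.from_pretrained(checkpoint)
```
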
# Usage

To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).

The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)

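As a minimal sketch of these two steps (the silent dummy audio below is purely illustrative; real inputs should be 16 kHz waveforms):

```python
import numpy as np
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# 1. pre-processing: audio is padded/truncated to 30s and converted to a log-Mel spectrogram
dummy_audio = np.zeros(16_000 * 30, dtype=np.float32)  # 30s of silence, purely illustrative
input_features = processor(dummy_audio, sampling_rate=16_000, return_tensors="pt").input_features
print(input_features.shape)  # torch.Size([1, 80, 3000]): 80 Mel bins x 3000 frames

# 2. post-processing: token ids returned by `model.generate` are decoded back to text
# transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
```
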
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction

Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
This tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.

These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.

The context tokens can be set accordingly:

```python
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
```

This forces the model to predict in English under the task of speech recognition.

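A quick way to verify which tokens will be forced is to map the ids back to their string form (a small sketch, using the same processor as above):

```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")

# each entry is a (position, token_id) pair for the start of the decoded sequence
for position, token_id in forced_decoder_ids:
    print(position, processor.tokenizer.convert_ids_to_tokens(token_id))
# 1 <|en|>
# 2 <|transcribe|>
# 3 <|notimestamps|>
```
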
## Transcription

### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).

```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset

>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> model.config.forced_decoder_ids = None

>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features

>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']

>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.

### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.

```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset

>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")

>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features

>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']

>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```

## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.

### French to English

```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset

>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")

>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features

>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```

## Evaluation

This code snippet shows how to evaluate Whisper Small on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):

```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load

>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")

>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to("cuda")

>>> def map_to_pred(batch):
>>>     audio = batch["audio"]
>>>     input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>>     batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>>     with torch.no_grad():
>>>         predicted_ids = model.generate(input_features.to("cuda"))[0]
>>>     transcription = processor.decode(predicted_ids)
>>>     batch["prediction"] = processor.tokenizer._normalize(transcription)
>>>     return batch

>>> result = librispeech_test_clean.map(map_to_pred)

>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
3.432213777886737
```

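Word error rate counts substitutions, deletions and insertions against the number of reference words. As a quick sanity check of the metric itself (the strings below are illustrative, not model outputs):

```python
from evaluate import load

wer = load("wer")
# one substitution ("down" -> "up") in a four-word reference -> 25% WER
print(100 * wer.compute(references=["the cat sat down"], predictions=["the cat sat up"]))
# 25.0
```
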
## Long-Form Transcription

The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible through the Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence-level timestamps by passing `return_timestamps=True`:

```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset

>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"

>>> pipe = pipeline(
>>>   "automatic-speech-recognition",
>>>   model="openai/whisper-small",
>>>   chunk_length_s=30,
>>>   device=device,
>>> )

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]

>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."

>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
  'timestamp': (0.0, 5.44)}]
```

Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.

## Fine-Tuning

The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.

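As a minimal sketch of the setup step from that guide (the Hindi language/task choice mirrors the blog post and is purely illustrative):

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# load the processor with the target language/task so labels are tokenised correctly
processor = WhisperProcessor.from_pretrained("openai/whisper-small", language="hindi", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# drop the pre-trained forcing/suppression so these tokens are learnt during fine-tuning
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
```
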
### Evaluated Use

The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.

The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization, but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.

In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.


## Training Data

The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.

As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.


## Performance and Limitations

Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero-shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.

However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.

Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).

In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling, but not perfectly. Further analysis on these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.

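As a hedged illustration of the beam-search mitigation mentioned above (reusing the dummy LibriSpeech sample from the Usage examples; `num_beams=5` is an arbitrary illustrative choice, not a recommended setting):

```python
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features

# beam search (num_beams > 1) often reduces repetitive outputs relative to greedy decoding
predicted_ids = model.generate(input_features, num_beams=5)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```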

## Broader Implications

We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.

There are also potential dual-use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.


### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
  doi = {10.48550/ARXIV.2212.04356},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```