---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: wav2vec2-base-960h
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 3.4
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 8.6
---

# Wav2Vec2-Base-960h

[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)

This is the base model, pretrained and fine-tuned on 960 hours of LibriSpeech 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
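
If your audio is stored at a different sampling rate, the `datasets` library can resample it on the fly when decoding. A minimal sketch using the `Audio` feature (the dummy dataset is just an example source):

```python
from datasets import load_dataset, Audio

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# decode the audio column at 16 kHz, regardless of the original rate
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```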

[Paper](https://arxiv.org/abs/2006.11477)

Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

**Abstract**

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.

The original model can be found at https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

# Usage

To transcribe audio files, the model can be used as a standalone acoustic model as follows:

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# load dummy dataset and read sound files
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# preprocess: pad and convert the 16 kHz waveform to tensors
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values  # Batch size 1

# retrieve logits
logits = model(input_values).logits

# take argmax and decode (greedy CTC decoding)
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
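
For quick experiments, the same checkpoint can also be run through the high-level `pipeline` API, which bundles the preprocessing and greedy decoding shown above (the file path below is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# accepts a local file path, a URL, or a raw numpy array
print(asr("path/to/audio.flac"))  # {'text': '...'}
```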

## Evaluation

This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer


librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

def map_to_pred(batch):
    # with batched=True the mapped function receives lists, so collect the raw arrays
    audio_arrays = [audio["array"] for audio in batch["audio"]]
    input_values = processor(audio_arrays, sampling_rate=16_000, return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    # greedy CTC decoding, as in the usage example above
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["transcription"] = processor.batch_decode(predicted_ids)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---|---|
| 3.4 | 8.6 |
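
Note that `jiwer.wer` takes the reference transcriptions as its first argument and the hypotheses as its second; a quick sanity check on toy strings:

```python
from jiwer import wer

# one substituted word in a four-word reference -> WER of 0.25
print(wer("the cat sat down", "the cat sat up"))
```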