---
license: apache-2.0
language:
- en
pipeline_tag: audio-classification
tags:
- zero-shot audio classification
- zero-shot audio retrieval
---
# Model card for CLAP

Model card for CLAP: Contrastive Language-Audio Pretraining

# Dataset

LAION-CLAP was trained on [LAION-Audio-630K](https://github.com/LAION-AI/audio-dataset/blob/main/laion-audio-630k/README.md).

# Table of Contents

0. [TL;DR](#tldr)
1. [Usage](#usage)
2. [Uses](#uses)
3. [Citation](#citation)

# TL;DR

The abstract of the paper states:

> Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. LAION-Audio-630K and the proposed model are both available to the public.

# Usage

You can use this model for zero-shot audio classification or to extract audio and/or text features.

# Uses

## Perform zero-shot audio classification

### Using `pipeline`

```python
from datasets import load_dataset
from transformers import pipeline

# Load an example clip from the ESC-50 environmental sound dataset
dataset = load_dataset("ashraq/esc50")
audio = dataset["train"]["audio"][-1]["array"]

audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-fused")
output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
print(output)
>>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of vacuum cleaner"}]
```
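The pipeline takes care of preprocessing and resampling for you. If you want the raw similarity scores, the same zero-shot classification can also be sketched directly with `ClapModel`. The following is a minimal sketch (not from the original card): it resamples the clip with `datasets`, since the model's feature extractor expects 48 kHz audio, and reuses the candidate labels from above as illustrative captions.

```python
import torch
from datasets import Audio, load_dataset
from transformers import ClapModel, ClapProcessor

# Resample the ESC-50 clip to the 48 kHz expected by CLAP's feature extractor
dataset = load_dataset("ashraq/esc50")
dataset = dataset.cast_column("audio", Audio(sampling_rate=48_000))
audio = dataset["train"]["audio"][-1]["array"]

model = ClapModel.from_pretrained("laion/clap-htsat-fused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

candidate_labels = ["Sound of a dog", "Sound of vacuum cleaner"]
inputs = processor(text=candidate_labels, audios=audio, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_audio holds the audio-text similarity scores;
# softmax turns them into label probabilities
probs = outputs.logits_per_audio.softmax(dim=-1)
for label, prob in zip(candidate_labels, probs[0]):
    print(f"{label}: {prob:.3f}")
```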
## Run the model

You can also get the audio and text embeddings using `ClapModel` directly.

### Run the model on CPU

```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor

# Load an example audio sample
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]

model = ClapModel.from_pretrained("laion/clap-htsat-fused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

# Preprocess the raw waveform and compute the audio embedding
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
audio_embed = model.get_audio_features(**inputs)
```

### Run the model on GPU

```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor

librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]

# Move the model and the preprocessed inputs to the first CUDA device
model = ClapModel.from_pretrained("laion/clap-htsat-fused").to(0)
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
audio_embed = model.get_audio_features(**inputs)
```
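Text embeddings can be obtained the same way via `get_text_features`. A minimal sketch (the captions are illustrative):

```python
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-fused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

# Tokenize the captions and compute their text embeddings
texts = ["Sound of a dog", "Sound of vacuum cleaner"]
inputs = processor(text=texts, return_tensors="pt", padding=True)
text_embed = model.get_text_features(**inputs)
```

Because the model is trained contrastively, audio and text embeddings live in a shared space, so an audio clip can be scored against candidate captions with a dot product or cosine similarity.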
# Citation

If you use this model in your work, please consider citing the original paper:

```bibtex
@misc{https://doi.org/10.48550/arxiv.2211.06687,
  doi       = {10.48550/ARXIV.2211.06687},
  url       = {https://arxiv.org/abs/2211.06687},
  author    = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Nezhurina, Marianna and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
  keywords  = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
  title     = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```