---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
base_model:
- microsoft/MiniLM-L12-H384-uncased
---

# all-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
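
The embeddings can be compared directly for semantic similarity. A minimal sketch using the `util.cos_sim` helper that ships with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings, in [-1, 1]
score = util.cos_sim(embeddings[0], embeddings[1])
print(score)
```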

## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L12-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L12-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```
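
Since the embeddings are L2-normalized in the last step, cosine similarity reduces to a plain dot product. Continuing from the snippet above (reusing its `sentence_embeddings`), for example:

```python
# Pairwise cosine similarities: entry (i, j) compares sentence i with sentence j
similarity = sentence_embeddings @ sentence_embeddings.T
print(similarity)
```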

------

## Background

The project aims to train sentence embedding models on very large, sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.

We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector that captures
its semantic information. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks.

By default, input text longer than 256 word pieces is truncated.

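With the sentence-transformers API, this limit is exposed as the `max_seq_length` attribute; a minimal sketch for inspecting (and, within the underlying model's limits, lowering) it:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2')
print(model.max_seq_length)  # truncation limit, in word pieces

model.max_seq_length = 128  # e.g. truncate more aggressively to speed up encoding
```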

## Training procedure

### Pre-training

We use the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model. Please refer to its model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch.
We then apply a cross-entropy loss in which the true pairs are the targets.
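
With in-batch negatives, this amounts to a softmax cross-entropy over a scaled cosine-similarity matrix. A minimal PyTorch sketch of the idea (the scale factor here is illustrative, not the exact training value):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    """anchor_emb, positive_emb: (batch_size, dim); row i of each forms a true pair."""
    a = F.normalize(anchor_emb, p=2, dim=1)
    b = F.normalize(positive_emb, p=2, dim=1)
    scores = scale * (a @ b.T)             # cosine similarity for every batch pair
    labels = torch.arange(scores.size(0))  # true pairs sit on the diagonal
    return F.cross_entropy(scores, labels)
```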

#### Hyperparameters

We trained the model on a TPU v3-8 for 100k steps with a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up over the first 500 steps, and the sequence length was limited to 128 tokens. We used the AdamW optimizer with
a learning rate of 2e-5. The full training script is available in this repository as `train_script.py`.
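
For reference, an equivalent setup expressed with the sentence-transformers training API might look roughly like the sketch below. This is an illustration, not the actual `train_script.py`; the example pair shown is a placeholder:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Loading a plain transformer checkpoint adds a mean-pooling head automatically
model = SentenceTransformer('microsoft/MiniLM-L12-H384-uncased')

train_examples = [InputExample(texts=['a sentence', 'the sentence it was paired with'])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=128)

# In-batch-negatives contrastive loss, as described above
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    warmup_steps=500,
    optimizer_params={'lr': 2e-5},
)
```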

#### Training data

We used the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
Each dataset was sampled with a weighted probability; the configuration is detailed in the `data_config.json` file in this repository.


| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 |
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | - | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | - | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | - | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |