---
language:
- multilingual
- af
- sq
- ar
- an
- hy
- ast
- az
- ba
- eu
- bar
- be
- bn
- inc
- bs
- br
- bg
- my
- ca
- ceb
- ce
- zh
- cv
- hr
- cs
- da
- nl
- en
- et
- fi
- fr
- gl
- ka
- de
- el
- gu
- ht
- he
- hi
- hu
- is
- io
- id
- ga
- it
- ja
- jv
- kn
- kk
- ky
- ko
- la
- lv
- lt
- roa
- nds
- lm
- mk
- mg
- ms
- ml
- mr
- min
- ne
- new
- nb
- nn
- oc
- fa
- pms
- pl
- pt
- pa
- ro
- ru
- sco
- sr
- hr
- scn
- sk
- sl
- aze
- es
- su
- sw
- sv
- tl
- tg
- ta
- tt
- te
- tr
- uk
- ud
- uz
- vi
- vo
- war
- cy
- fry
- pnb
- yo
license: apache-2.0
datasets:
- wikipedia
---

# BERT multilingual base model (uncased)

Pretrained model on the top 102 languages with the largest Wikipedias using a masked language modeling (MLM) objective.
It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.

Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by
the Hugging Face team.

## Model description

BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
  the entire masked sentence through the model and has to predict the masked words. This is different from traditional
  recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
  GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
  sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
  they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
  predict whether the two sentences were following each other or not (a minimal illustration follows below).

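As an illustration of the NSP objective at inference time, here is a minimal sketch using the `BertForNextSentencePrediction` head from `transformers`; the example sentences are made up and only serve to show the API.

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-multilingual-uncased')

# made-up sentence pair, purely illustrative
sentence_a = "The weather was terrible yesterday."
sentence_b = "So we stayed inside and read."

encoding = tokenizer(sentence_a, sentence_b, return_tensors='pt')
with torch.no_grad():
    logits = model(**encoding).logits

# index 0 = "sentence B follows sentence A", index 1 = "sentence B is random"
probs = torch.softmax(logits, dim=-1)
print(probs)
```
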
This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a
standard classifier using the features produced by the BERT model as inputs.

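As a minimal sketch of that feature-based workflow (the toy sentences, labels and the scikit-learn `LogisticRegression` classifier are illustrative assumptions, not part of the original recipe):

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')
model = BertModel.from_pretrained('bert-base-multilingual-uncased')

# toy labeled dataset, purely illustrative
sentences = ["I loved this film.", "What a waste of time.", "Great acting!", "Terribly boring."]
labels = [1, 0, 1, 0]

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
    # use the pooled [CLS] representation as a fixed-size sentence feature
    features = model(**encoded).pooler_output.numpy()

classifier = LogisticRegression().fit(features, labels)
```
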
## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.

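For the fine-tuning use case, here is a minimal sketch with the `BertForSequenceClassification` head; the texts, labels, number of labels and the single optimisation step are placeholders, not a recommended training recipe.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')
# num_labels=2 assumes a hypothetical binary classification task
model = BertForSequenceClassification.from_pretrained('bert-base-multilingual-uncased', num_labels=2)

texts = ["ceci est formidable", "this is terrible"]  # placeholder examples
labels = torch.tensor([1, 0])

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**inputs, labels=labels)  # the head computes a cross-entropy loss
outputs.loss.backward()
optimizer.step()
```
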
### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-uncased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'sequence': "[CLS] hello i'm a top model. [SEP]",
  'score': 0.1507750153541565,
  'token': 11397,
  'token_str': 'top'},
 {'sequence': "[CLS] hello i'm a fashion model. [SEP]",
  'score': 0.13075384497642517,
  'token': 23589,
  'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a good model. [SEP]",
  'score': 0.036272723227739334,
  'token': 12050,
  'token_str': 'good'},
 {'sequence': "[CLS] hello i'm a new model. [SEP]",
  'score': 0.035954564809799194,
  'token': 10246,
  'token_str': 'new'},
 {'sequence': "[CLS] hello i'm a great model. [SEP]",
  'score': 0.028643041849136353,
  'token': 11838,
  'token_str': 'great'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')
model = BertModel.from_pretrained("bert-base-multilingual-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')
model = TFBertModel.from_pretrained("bert-base-multilingual-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-uncased')
>>> unmasker("The man worked as a [MASK].")

[{'sequence': '[CLS] the man worked as a teacher. [SEP]',
  'score': 0.07943806052207947,
  'token': 21733,
  'token_str': 'teacher'},
 {'sequence': '[CLS] the man worked as a lawyer. [SEP]',
  'score': 0.0629938617348671,
  'token': 34249,
  'token_str': 'lawyer'},
 {'sequence': '[CLS] the man worked as a farmer. [SEP]',
  'score': 0.03367974981665611,
  'token': 36799,
  'token_str': 'farmer'},
 {'sequence': '[CLS] the man worked as a journalist. [SEP]',
  'score': 0.03172805905342102,
  'token': 19477,
  'token_str': 'journalist'},
 {'sequence': '[CLS] the man worked as a carpenter. [SEP]',
  'score': 0.031021825969219208,
  'token': 33241,
  'token_str': 'carpenter'}]

>>> unmasker("The Black woman worked as a [MASK].")

[{'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
  'score': 0.07045423984527588,
  'token': 52428,
  'token_str': 'nurse'},
 {'sequence': '[CLS] the black woman worked as a teacher. [SEP]',
  'score': 0.05178029090166092,
  'token': 21733,
  'token_str': 'teacher'},
 {'sequence': '[CLS] the black woman worked as a lawyer. [SEP]',
  'score': 0.032601192593574524,
  'token': 34249,
  'token_str': 'lawyer'},
 {'sequence': '[CLS] the black woman worked as a slave. [SEP]',
  'score': 0.030507225543260574,
  'token': 31173,
  'token_str': 'slave'},
 {'sequence': '[CLS] the black woman worked as a woman. [SEP]',
  'score': 0.027691684663295746,
  'token': 14050,
  'token_str': 'woman'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The BERT model was pretrained on the 102 languages with the largest Wikipedias. You can find the complete list
[here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a shared vocabulary size of 110,000. The languages with a
larger Wikipedia are under-sampled and the ones with fewer resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that do not use spaces, spaces are added around every character in the CJK Unicode range.

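A quick way to see the lowercasing and WordPiece behaviour described above (the example strings are arbitrary):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')

# cased and uncased input are lowercased before WordPiece, so they yield the same pieces
print(tokenizer.tokenize("Hello World"))
print(tokenizer.tokenize("HELLO WORLD"))
print(tokenizer.vocab_size)  # size of the shared WordPiece vocabulary
```
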
The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.

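In `transformers`, passing a pair of texts to the tokenizer reproduces this layout; a small sketch (the example strings are arbitrary):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')

encoded = tokenizer("Sentence A", "Sentence B")
# the special tokens give the [CLS] ... [SEP] ... [SEP] structure
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# token_type_ids mark which segment each token belongs to (0 = sentence A, 1 = sentence B)
print(encoded['token_type_ids'])
```
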
The details of the masking procedure for each sentence are the following (a toy sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

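A toy re-implementation of that 80/10/10 rule, just to make the logic concrete; this is not the original preprocessing code, and the vocabulary it samples random tokens from is supplied by the caller:

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]"):
    """Apply the 15% / 80-10-10 masking rule described above (toy version)."""
    inputs, targets = [], []
    for tok in tokens:
        if random.random() < 0.15:          # 15% of the tokens are selected for masking
            targets.append(tok)             # the model has to predict the original token
            r = random.random()
            if r < 0.8:                     # 80% of those: replace with [MASK]
                inputs.append(mask_token)
            elif r < 0.9:                   # 10%: replace with a random (different) token
                inputs.append(random.choice([v for v in vocab if v != tok]))
            else:                           # remaining 10%: keep the token unchanged
                inputs.append(tok)
        else:
            inputs.append(tok)
            targets.append(None)            # not part of the MLM loss
    return inputs, targets
```
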
### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and
               Ming{-}Wei Chang and
               Kenton Lee and
               Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
               Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```