---
tags:
- exbert
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
---

# XLM-RoBERTa (base-sized model)

XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).

Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.

RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts.

More precisely, it was pretrained with the masked language modeling (MLM) objective: taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
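
As a concrete illustration, here is a minimal sketch of how the same 15% masking scheme can be reproduced with the `transformers` library's `DataCollatorForLanguageModeling` (the example sentence is arbitrary):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
# dynamically masks 15% of the tokens, mirroring the pretraining objective
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("The quick brown fox jumps over the lazy dog.")])
print(batch['input_ids'])  # most selected tokens are replaced by <mask>
print(batch['labels'])     # -100 everywhere except at the selected positions
```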

This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs, as sketched below.
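
A minimal sketch of that feature-based approach, assuming a tiny placeholder dataset and mean pooling (both are illustrative choices, not prescribed by the original paper):

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
encoder = AutoModel.from_pretrained('xlm-roberta-base')  # encoder without the MLM head

texts = ["I loved this film.", "Ce film était terrible."]  # placeholder labeled data
labels = [1, 0]

with torch.no_grad():
    enc = tokenizer(texts, padding=True, return_tensors='pt')
    hidden = encoder(**enc).last_hidden_state        # (batch, seq_len, 768)
    mask = enc['attention_mask'].unsqueeze(-1)       # ignore padding when pooling
    features = (hidden * mask).sum(1) / mask.sum(1)  # mean over real tokens

clf = LogisticRegression().fit(features.numpy(), labels)
```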

## Intended uses & limitations

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlm-roberta) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT-2.
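
For example, loading the checkpoint with a sequence classification head ready for fine-tuning (the two-label setup is an assumption made for illustration):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
# adds a freshly initialized classification head on top of the pretrained encoder
model = AutoModelForSequenceClassification.from_pretrained('xlm-roberta-base', num_labels=2)
```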

## Usage

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='xlm-roberta-base')
>>> unmasker("Hello I'm a <mask> model.")

[{'score': 0.10563907772302628,
  'sequence': "Hello I'm a fashion model.",
  'token': 54543,
  'token_str': 'fashion'},
 {'score': 0.08015287667512894,
  'sequence': "Hello I'm a new model.",
  'token': 3525,
  'token_str': 'new'},
 {'score': 0.033413201570510864,
  'sequence': "Hello I'm a model model.",
  'token': 3299,
  'token_str': 'model'},
 {'score': 0.030217764899134636,
  'sequence': "Hello I'm a French model.",
  'token': 92265,
  'token_str': 'French'},
 {'score': 0.026436051353812218,
  'sequence': "Hello I'm a sexy model.",
  'token': 17473,
  'token_str': 'sexy'}]
```
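
Because the checkpoint is multilingual, the same pipeline works for any of the 100 pretraining languages; for example, reusing the `unmasker` defined above (predictions omitted here):

```python
>>> unmasker("Paris est la <mask> de la France.")
```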

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
model = AutoModelForMaskedLM.from_pretrained('xlm-roberta-base')

# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')

# forward pass
output = model(**encoded_input)
```
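
Note that `AutoModelForMaskedLM` returns vocabulary logits from the language-modeling head; if you want the contextual token representations instead, one option (a minimal sketch) is to request the hidden states:

```python
# ask the model to also return the hidden states of every layer
output = model(**encoded_input, output_hidden_states=True)
features = output.hidden_states[-1]  # last-layer token embeddings, shape (1, seq_len, 768)
```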

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1911-02116,
  author     = {Alexis Conneau and
                Kartikay Khandelwal and
                Naman Goyal and
                Vishrav Chaudhary and
                Guillaume Wenzek and
                Francisco Guzm{\'{a}}n and
                Edouard Grave and
                Myle Ott and
                Luke Zettlemoyer and
                Veselin Stoyanov},
  title      = {Unsupervised Cross-lingual Representation Learning at Scale},
  journal    = {CoRR},
  volume     = {abs/1911.02116},
  year       = {2019},
  url        = {http://arxiv.org/abs/1911.02116},
  eprinttype = {arXiv},
  eprint     = {1911.02116},
  timestamp  = {Mon, 11 Nov 2019 18:38:09 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=xlm-roberta-base">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>