---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/tinyroberta-squad2
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_v2
      type: squad_v2
      config: squad_v2
      split: validation
    metrics:
    - type: exact_match
      value: 78.8627
      name: Exact Match
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDNlZDU4ODAxMzY5NGFiMTMyZmQ1M2ZhZjMyODA1NmFlOGMxNzYxNTA4OGE5YTBkZWViZjBkNGQ2ZmMxZjVlMCIsInZlcnNpb24iOjF9.Wgu599r6TvgMLTrHlLMVAbUtKD_3b70iJ5QSeDQ-bRfUsVk6Sz9OsJCp47riHJVlmSYzcDj_z_3jTcUjCFFXBg
    - type: f1
      value: 82.0355
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTFkMzEzMWNiZDRhMGZlODhkYzcwZTZiMDFjZDg2YjllZmUzYWM5NTgwNGQ2NGYyMDk2ZGQwN2JmMTE5NTc3YiIsInZlcnNpb24iOjF9.ChgaYpuRHd5WeDFjtiAHUyczxtoOD_M5WR8834jtbf7wXhdGOnZKdZ1KclmhoI5NuAGc1NptX-G0zQ5FTHEcBA
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad
      type: squad
      config: plain_text
      split: validation
    metrics:
    - type: exact_match
      value: 83.860
      name: Exact Match
    - type: f1
      value: 90.752
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: adversarial_qa
      type: adversarial_qa
      config: adversarialQA
      split: validation
    metrics:
    - type: exact_match
      value: 25.967
      name: Exact Match
    - type: f1
      value: 37.006
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_adversarial
      type: squad_adversarial
      config: AddOneSent
      split: validation
    metrics:
    - type: exact_match
      value: 76.329
      name: Exact Match
    - type: f1
      value: 83.292
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squadshifts amazon
      type: squadshifts
      config: amazon
      split: test
    metrics:
    - type: exact_match
      value: 63.915
      name: Exact Match
    - type: f1
      value: 78.395
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squadshifts new_wiki
      type: squadshifts
      config: new_wiki
      split: test
    metrics:
    - type: exact_match
      value: 80.297
      name: Exact Match
    - type: f1
      value: 89.808
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squadshifts nyt
      type: squadshifts
      config: nyt
      split: test
    metrics:
    - type: exact_match
      value: 80.149
      name: Exact Match
    - type: f1
      value: 88.321
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squadshifts reddit
      type: squadshifts
      config: reddit
      split: test
    metrics:
    - type: exact_match
      value: 66.959
      name: Exact Match
    - type: f1
      value: 79.300
      name: F1
---

# tinyroberta for Extractive QA

This is the *distilled* version of the [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) model. It offers comparable prediction quality while running at twice the speed of the base model.

## Overview
**Language model:** tinyroberta-squad2
**Language:** English
**Downstream task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline)
**Infrastructure:** 4x Tesla V100

## Hyperparameters

```
batch_size = 96
n_epochs = 4
base_LM_model = "deepset/tinyroberta-squad2-step1"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
distillation_loss_weight = 0.75
temperature = 1.5
teacher = "deepset/roberta-large-squad2"
```

## Distillation
This model was distilled using the TinyBERT approach described in [this paper](https://arxiv.org/pdf/1909.10351.pdf) and implemented in [Haystack](https://github.com/deepset-ai/haystack).
First, we performed intermediate-layer distillation with roberta-base as the teacher, which resulted in [deepset/tinyroberta-6l-768d](https://huggingface.co/deepset/tinyroberta-6l-768d).
Then we performed task-specific distillation: further intermediate-layer distillation on an augmented version of SQuAD 2.0 with [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) as the teacher, followed by prediction-layer distillation with [deepset/roberta-large-squad2](https://huggingface.co/deepset/roberta-large-squad2) as the teacher.

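The prediction-layer step uses the standard soft-target distillation objective: the student's start/end logits are pulled toward the teacher's temperature-softened output distribution, blended with the usual cross-entropy on the gold span via the `distillation_loss_weight` and `temperature` hyperparameters listed above. The snippet below is a minimal illustrative sketch of such a loss in PyTorch (function and variable names are ours), not the exact Haystack training code:

```python
import torch.nn.functional as F

def prediction_layer_distillation_loss(student_logits, teacher_logits, gold_positions,
                                       weight=0.75, temperature=1.5):
    """Blend a soft-target KL loss with hard-label cross-entropy.

    student_logits / teacher_logits: (batch, seq_len) start *or* end logits.
    gold_positions: (batch,) gold start or end token indices.
    """
    # Soft targets: KL divergence between temperature-softened distributions.
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Hard targets: ordinary cross-entropy against the gold positions.
    hard = F.cross_entropy(student_logits, gold_positions)

    return weight * soft + (1.0 - weight) * hard
```

In extractive QA this loss is typically computed once for the start logits and once for the end logits, and the two terms are summed.
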
## Usage

### In Haystack
Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents.
To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/):
```python
# After running pip install haystack-ai "transformers[torch,sentencepiece]"

from haystack import Document
from haystack.components.readers import ExtractiveReader

docs = [
    Document(content="Python is a popular programming language"),
    Document(content="python ist eine beliebte Programmiersprache"),
]

reader = ExtractiveReader(model="deepset/tinyroberta-squad2")
reader.warm_up()

question = "What is a popular programming language?"
result = reader.run(query=question, documents=docs)
# {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]}
```
For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline).

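To see how the reader composes with retrieval before moving to the full tutorial, here is a minimal sketch of a two-stage pipeline, assuming Haystack 2.x and its in-memory document store (the component names `retriever` and `reader` are our own choices):

```python
from haystack import Document, Pipeline
from haystack.components.readers import ExtractiveReader
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Index a few documents in an in-memory store.
document_store = InMemoryDocumentStore()
document_store.write_documents([
    Document(content="Python is a popular programming language"),
    Document(content="Rust is known for memory safety"),
])

# The retriever narrows the candidates; the reader extracts the answer span.
pipeline = Pipeline()
pipeline.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
pipeline.add_component("reader", ExtractiveReader(model="deepset/tinyroberta-squad2"))
pipeline.connect("retriever.documents", "reader.documents")

question = "What is a popular programming language?"
result = pipeline.run({
    "retriever": {"query": question},
    "reader": {"query": question},
})
print(result["reader"]["answers"][0].data)
```

Running the retriever first keeps the reader fast, since span extraction only happens over the retrieved candidates rather than the whole corpus.
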
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/tinyroberta-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

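The model and tokenizer loaded in part b) can also be run without the pipeline. As a rough sketch of what the pipeline does internally (continuing from the snippet above; the greedy argmax decoding here is a simplification of the pipeline's full span scoring):

```python
import torch

inputs = tokenizer(QA_input['question'], QA_input['context'], return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# The model emits one start and one end logit per token; take the
# highest-scoring start and end positions and decode the span between them.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs['input_ids'][0][start:end + 1])
print(answer)
```

Unlike this sketch, the pipeline also checks that the start precedes the end, restricts spans to the context, and can handle the unanswerable case that SQuAD 2.0 introduces.
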
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).

```
"exact": 78.69114798281817,
"f1": 81.9198998536977,

"total": 11873,
"HasAns_exact": 76.19770580296895,
"HasAns_f1": 82.66446878592329,
"HasAns_total": 5928,
"NoAns_exact": 81.17746005046257,
"NoAns_f1": 81.17746005046257,
"NoAns_total": 5945
```

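To reproduce numbers in this format without the CodaLab bundle, you can also use the `squad_v2` metric from the Hugging Face `evaluate` library, which implements the same EM/F1 definitions. A minimal sketch, assuming the `nlp` pipeline from the Usage section (the prediction-building loop and the fixed no-answer probability are our simplifications):

```python
# pip install evaluate datasets
import evaluate
from datasets import load_dataset

squad_v2_metric = evaluate.load("squad_v2")
dev = load_dataset("squad_v2", split="validation")

predictions, references = [], []
for example in dev.select(range(100)):  # small slice for a quick check
    res = nlp({'question': example['question'], 'context': example['context']})
    predictions.append({
        'id': example['id'],
        'prediction_text': res['answer'],
        # The squad_v2 metric expects a no-answer probability per prediction;
        # we use a placeholder here instead of deriving one from the model.
        'no_answer_probability': 0.0,
    })
    references.append({'id': example['id'], 'answers': example['answers']})

print(squad_v2_metric.compute(predictions=predictions, references=references))
```
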
## Authors
**Branden Chan:** branden.chan@deepset.ai
**Timo Möller:** timo.moeller@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Tanay Soni:** tanay.soni@deepset.ai
**Michel Bartels:** michel.bartels@deepset.ai

## About us

<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
    <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
        <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
    </div>
    <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
        <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
    </div>
</div>

[deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/).

Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1)
- [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio)

## Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.

We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>

[Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)