---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# BERT large model (uncased) whole word masking finetuned on SQuAD

Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it makes no distinction
between english and English.

Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.

The training is otherwise identical: each masked WordPiece token is still predicted independently.
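
For illustration, a word split into several WordPiece tokens is now masked as a unit; the example below is adapted from the original BERT repository:

```
Input text:          the man jumped up , put his basket on phil ##am ##mon ' s head
Per-token masking:   [MASK] man [MASK] up , put his [MASK] on phil [MASK] ##mon ' s head
Whole word masking:  the man [MASK] up , put his basket on [MASK] [MASK] [MASK] ' s head
```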

After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.

Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by
the Hugging Face team.

## Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
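
For instance, a minimal feature-extraction sketch (the checkpoint name and the use of the [CLS] vector as the sentence feature are illustrative choices):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Extract features for a downstream classifier; the checkpoint name and the
# choice of the [CLS] vector are assumptions made for illustration.
tok = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking")
model = AutoModel.from_pretrained("bert-large-uncased-whole-word-masking")

inputs = tok("A labeled sentence from your dataset.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
features = out.last_hidden_state[:, 0]  # [CLS] vector, shape (1, 1024)
```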

This model has the following configuration:

- 24 layers
- 1,024 hidden dimensions
- 16 attention heads
- 336M parameters

## Intended uses & limitations

This model should be used as a question-answering model. You may use it in a question-answering pipeline, or use it to output raw results given a query and a context. See the [task summary](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) of the transformers documentation for other possible use cases.
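
A minimal pipeline sketch (the model id is assumed to be the checkpoint this card describes; the question and context are illustrative):

```python
from transformers import pipeline

# Question-answering pipeline; the model id and inputs are illustrative.
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)
result = qa(
    question="What objective was BERT pretrained with?",
    context="BERT was pretrained with a masked language modeling objective "
            "and a next sentence prediction objective.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```

For raw results instead, the checkpoint can be loaded with `AutoModelForQuestionAnswering` and the answer span read off the start and end logits in the output.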

## Training data

The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
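
As an illustrative sketch (not the original pretraining code), the rule above can be written as follows; with whole word masking, the same decision is made once per word and applied to all of its WordPiece tokens:

```python
import random

# Illustrative 80/10/10 masking sketch; the real pretraining code also groups
# WordPiece tokens by word (whole word masking) before applying the rule.
def mask_tokens(tokens, vocab, mask_prob=0.15):
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if random.random() >= mask_prob:
            continue                          # token not selected for masking
        labels[i] = tok                       # model must predict the original token
        r = random.random()
        if r < 0.8:
            inputs[i] = "[MASK]"              # 80%: replace with [MASK]
        elif r < 0.9:
            inputs[i] = random.choice(vocab)  # 10%: replace with a random token
        # remaining 10%: leave the token unchanged
    return inputs, labels
```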

### Pretraining

The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
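
The schedule can be sketched as below; decaying linearly to zero at the final step is an assumption, since the card only states "linear decay":

```python
# Learning-rate schedule sketch: linear warmup over the first 10,000 steps,
# then linear decay; reaching zero at step 1,000,000 is an assumption.
def learning_rate(step, base_lr=1e-4, warmup=10_000, total=1_000_000):
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * (total - step) / (total - warmup)
```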

### Fine-tuning

After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:
```
python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--dataset_name squad \
--do_train \
--do_eval \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./examples/models/wwm_uncased_finetuned_squad/ \
--per_device_eval_batch_size=3 \
--per_device_train_batch_size=3
```

## Evaluation results

The results obtained are the following:

```
f1 = 93.15
exact_match = 86.91
```

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and
               Ming{-}Wei Chang and
               Kenton Lee and
               Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
               Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```