---
language: en
license: mit
---

# GPT-2 Large

## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)

## Model Details

**Model Description:** GPT-2 Large is the **774M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on English text with a causal language modeling (CLM) objective.

- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-2 Medium](https://huggingface.co/gpt2-medium) and [GPT-2 XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
  - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
  - [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
  - [GitHub Repo](https://github.com/openai/gpt-2)
  - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
  - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large

## How to Get Started with the Model

Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)

[{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"},
{'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"},
{'generated_text': "Hello, I'm a language model, why does this matter for you?\n\nWhen I hear new languages, I tend to start thinking in terms"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI don't need to know anything else. If I want to understand about how"},
{'generated_text': "Hello, I'm a language model, not a toolbox.\n\nIn a nutshell, a language model is a set of attributes that define how"}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = TFGPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
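
For finer-grained control over decoding than the pipeline offers, you can also load the model with its language-modeling head and call `generate` directly. The snippet below is a minimal sketch; the sampling settings (`do_sample`, `top_k`, `max_length`) are illustrative choices rather than recommendations from the model authors:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel, set_seed

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2LMHeadModel.from_pretrained('gpt2-large')

set_seed(42)  # sampling is stochastic, so fix a seed for reproducibility
inputs = tokenizer("Hello, I'm a language model,", return_tensors='pt')

# Sample one continuation; top_k and max_length are illustrative values
output_ids = model.generate(**inputs, do_sample=True, top_k=50, max_length=30,
                            pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```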

## Uses

#### Direct Use

In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:

> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.

#### Downstream Use

In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:

> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.

#### Misuse and Out-of-scope Use

In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The man worked as a security guard in a hotel'},
{'generated_text': 'The man worked as a salesman in Mexico and in'},
{'generated_text': 'The man worked as a supervisor at the warehouse for'},
{'generated_text': "The man worked as a cleaner for the store's"},
{'generated_text': 'The man worked as a barbershop apprentice.'}]

>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The woman worked as a clerk at the bank.'},
{'generated_text': 'The woman worked as a caregiver, and her'},
{'generated_text': 'The woman worked as a customer service agent for a'},
{'generated_text': 'The woman worked as a cleaner at the store,'},
{'generated_text': 'The woman worked as a barista and was "'}]
```

This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

## Training

#### Training Data

The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt).

#### Training Procedure

The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process generating inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.

Concretely, inputs are sequences of continuous text of a certain length and the targets are the same sequences shifted one token (word or piece of a word) to the right. The model internally uses a masking mechanism to ensure that the prediction for token `i` only uses the inputs from `1` to `i` and not the future tokens.
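
Within the Transformers library, this next-token objective can be reproduced by passing the input IDs as labels: the model shifts the labels internally and the causal attention mask ensures each position only attends to earlier tokens. The snippet below is a minimal sketch of that loss computation, not the original OpenAI training code:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2LMHeadModel.from_pretrained('gpt2-large')

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors='pt')

# labels=input_ids makes the model compute the causal LM loss: the labels are
# shifted one position internally, so token i is predicted from tokens 1..i only.
with torch.no_grad():
    outputs = model(**inputs, labels=inputs['input_ids'])

print(outputs.loss)  # average negative log-likelihood per predicted token
```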

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks.

The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.

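A quick way to see this byte-level BPE in action is to inspect the tokenizer directly. The check below is a small illustrative sketch (the example string is arbitrary):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')

print(tokenizer.vocab_size)        # 50257
print(tokenizer.model_max_length)  # 1024, the context length used during training

# Byte-level BPE splits rare words into sub-word pieces and never needs an <UNK> token
tokens = tokenizer.tokenize("Hello, I'm a language model,")
print(tokens)
print(tokenizer.convert_tokens_to_ids(tokens))
```
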
## Evaluation

The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).

#### Testing Data, Factors and Metrics

The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:

> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
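
As a concrete illustration of this normalization, the sketch below scores a short text with the Transformers implementation and converts the summed log-probability into per-word perplexity and bits per character. It is a simplified, hedged example, not the paper's exact evaluation pipeline (which also applies invertible de-tokenizers and evaluates long documents with a sliding context window):

```python
import math
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2LMHeadModel.from_pretrained('gpt2-large')
model.eval()

text = "The quick brown fox jumps over the lazy dog."
enc = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    out = model(**enc, labels=enc['input_ids'])

# out.loss is the average negative log-likelihood per predicted BPE token (in nats)
n_predicted = enc['input_ids'].size(1) - 1
total_nll = out.loss.item() * n_predicted

# Re-normalize by "canonical units" as described above
word_ppl = math.exp(total_nll / len(text.split()))   # per-word perplexity
bits_per_char = total_nll / math.log(2) / len(text)  # bits per character
print(word_ppl, bits_per_char)
```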

#### Results

The model achieves the following results without any fine-tuning (zero-shot):

| Model | LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) |
|:-----------:|:-------------:|:-------------:|:------------:|:------------:|:---------------:|:---------:|:-------------:|:-----------:|:-----------------:|:---------:|
| GPT-2 Large | 10.87 | 60.12 | 93.45 | 88.0 | 19.93 | 40.31 | 0.97 | 1.02 | 22.05 | 44.575 |

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown

## Technical Specifications

See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training procedure.

## Citation Information

```bibtex
@article{radford2019language,
  title={Language models are unsupervised multitask learners},
  author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
  journal={OpenAI blog},
  volume={1},
  number={8},
  pages={9},
  year={2019}
}
```

## Model Card Authors

This model card was written by the Hugging Face team.