---
language: en
tags:
- exbert

license: mit
---


# GPT-2

Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large

Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).

Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to supplement the information they provided and give specific examples of bias.

## Model description

GPT-2 is a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data), with an automatic process generating inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.

Concretely, inputs are sequences of continuous text of a certain length and the targets are the same sequences,
shifted one token (word or piece of word) to the right. Internally, the model uses a masking mechanism to make sure the
prediction for token `i` only uses the inputs from `1` to `i` and never the future tokens.
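
To make the objective concrete, here is a minimal sketch (not from the original card) that uses this checkpoint with
the language-modeling head from the `transformers` library; when the inputs are also passed as labels, the library
applies the one-token shift internally and returns the next-token cross-entropy:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

ids = tokenizer("A short example sentence.", return_tensors='pt').input_ids
# Passing the inputs as labels computes the loss of predicting each next token;
# the one-position shift between inputs and targets happens inside the model.
output = model(ids, labels=ids)
print(output.loss)          # average next-token cross-entropy
print(output.logits.shape)  # (batch_size, sequence_length, vocab_size)
```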

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating texts from
a prompt.

This is the **smallest** version of GPT-2, with 124M parameters.

**Related Models:** [GPT-2 Medium](https://huggingface.co/gpt2-medium), [GPT-2 Large](https://huggingface.co/gpt2-large) and [GPT-2 XL](https://huggingface.co/gpt2-xl)

## Intended uses & limitations

You can use the raw model for text generation or fine-tune it on a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.

### How to use

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)

[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
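
If you would rather call the model directly than go through the pipeline wrapper, the sketch below shows one way to do
it; the sampling settings (`do_sample`, `top_k`) are illustrative choices, not values from the original card:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

inputs = tokenizer("Hello, I'm a language model,", return_tensors='pt')
# Sample a continuation; generate() applies the causal mask and feeds the
# model its own predictions back one token at a time.
outputs = model.generate(**inputs, max_length=30, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```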

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
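
Here `output.last_hidden_state` holds the contextual token features; for this checkpoint its shape is
`(batch_size, sequence_length, 768)`. The TensorFlow example below returns the same structure.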

and in TensorFlow:

```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

### Limitations and bias

The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.

Here's an example of how the model can have biased predictions:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]

>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).

## Training procedure

### Preprocessing

The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
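
As a quick illustration (assuming the tokenizer published with this checkpoint matches the preprocessing described
above):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
print(tokenizer.vocab_size)  # 50257
# Byte-level BPE falls back to raw bytes, so any unicode string can be
# encoded without needing an <unk> token.
print(tokenizer.tokenize("Hello world, voilà!"))
```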

The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.

## Evaluation results

The model achieves the following results without any fine-tuning (zero-shot):

| Model | LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) |
|:-----:|:-------------:|:-------------:|:------------:|:------------:|:---------------:|:---------:|:-------------:|:-----------:|:-----------------:|:---------:|
| GPT-2 124M | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
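
The exact evaluation setup (detokenizers, sliding windows, and dataset-specific handling) is described in the paper and
is not reproduced here, but a rough sketch of computing a zero-shot perplexity with this checkpoint looks like the
following; the evaluation text is a placeholder:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

text = "Replace me by a held-out evaluation passage."
ids = tokenizer(text, return_tensors='pt').input_ids
with torch.no_grad():
    # Mean cross-entropy of predicting each next token in the passage.
    loss = model(ids, labels=ids).loss
print(torch.exp(loss).item())  # perplexity of this passage
```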


### BibTeX entry and citation info

```bibtex
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}
```

<a href="https://huggingface.co/exbert/?model=gpt2">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>