---
language:
- en
- fr
- ro
- de
- multilingual
license: apache-2.0
tags:
- summarization
- translation
datasets:
- c4
---

# Model Card for T5-3B

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):

> With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.

T5-3B is the checkpoint with 3 billion parameters.

- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
  - [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
  - [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
  - [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
  - [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)

# Uses

## Direct Use and Downstream Use

The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):

> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.

See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
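
In practice, the task is selected by prepending a short text prefix to the input string. Below is a minimal usage sketch (ours, not the developers') using the Hugging Face `transformers` library with task prefixes from the paper; note that the 3B checkpoint needs roughly 11 GB of memory in fp32, so you may want to prototype with a smaller checkpoint such as `t5-small`.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-3b")
model = T5ForConditionalGeneration.from_pretrained("t5-3b")

# Translation: the task is selected purely by the input text prefix.
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Summarization uses the same model and API; only the prefix changes.
article = ("Peter and Elizabeth took a taxi to attend the night party in the city. "
           "While in the party, Elizabeth collapsed and was rushed to the hospital.")
inputs = tokenizer("summarize: " + article, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```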

## Out-of-Scope Use

More information needed.

# Bias, Risks, and Limitations

More information needed.

## Recommendations

More information needed.

# Training Details

## Training Data

The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.

The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
The following datasets were used for (1.) and (2.); a sketch of each objective's input/target format follows the list:

1. **Datasets used for the unsupervised denoising objective**:

   - [C4](https://huggingface.co/datasets/c4)
   - [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)

2. **Datasets used for the supervised text-to-text language modeling objective**:

   - Sentence acceptability judgment
     - CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
   - Sentiment analysis
     - SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
   - Paraphrasing/sentence similarity
     - MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
     - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
     - QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
   - Natural language inference
     - MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
     - QNLI [Rajpurkar et al., 2016](https://arxiv.org/abs/1606.05250)
     - RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
     - CB [De Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
   - Sentence completion
     - COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
   - Word sense disambiguation
     - WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
   - Question answering
     - MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
     - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
     - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
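
As noted above, here is an illustrative sketch of how each objective is rendered as plain input/target strings. This is our illustration, following the formats described in the paper and the Hugging Face T5 docs, not the developers' actual preprocessing code:

```python
# 1. Unsupervised denoising objective: consecutive spans of the input are dropped
#    out and replaced with sentinel tokens (<extra_id_0>, <extra_id_1>, ...); the
#    target reconstructs the dropped spans, delimited by the same sentinels.
denoising_input = "Thank you <extra_id_0> me to your party <extra_id_1> week."
denoising_target = "<extra_id_0> for inviting <extra_id_1> last <extra_id_2>"

# 2. Supervised text-to-text objective: every task becomes string-in/string-out,
#    with a task prefix on the input. Even a regression task like STS-B is handled
#    by predicting the similarity score as a string (the paper rounds scores to
#    increments of 0.2).
stsb_input = "stsb sentence1: The rhino grazed on the grass. sentence2: A rhino is grazing in a field."
stsb_target = "3.8"
```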

## Training Procedure

In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:

> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.

The T5 framework brings the approaches studied in the paper together into a single training procedure. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.

# Evaluation

## Testing Data, Factors & Metrics

The developers evaluated the model on 24 tasks; see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.

## Results

For full results for T5-3B, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Citation

**BibTeX:**

```bibtex
@article{2020t5,
  author  = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  title   = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  journal = {Journal of Machine Learning Research},
  year    = {2020},
  volume  = {21},
  number  = {140},
  pages   = {1-67},
  url     = {http://jmlr.org/papers/v21/20-074.html}
}
```

**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.

# Model Card Authors

This model card was written by the team at Hugging Face.

# How to Get Started with the Model

See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more context on how to get started with this checkpoint.
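
As a quick start, the checkpoint can also be loaded directly with `transformers`. The sketch below is ours (not from the developers) and loads the weights in fp16 to roughly halve the ~11 GB fp32 memory footprint; it assumes a CUDA GPU with enough memory and that `torch` is installed.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-3b")
# fp16 halves memory use; drop torch_dtype (and the .to("cuda") calls) to run in fp32 on CPU.
model = T5ForConditionalGeneration.from_pretrained("t5-3b", torch_dtype=torch.float16).to("cuda")

input_ids = tokenizer("translate English to Romanian: The weather is nice today.",
                      return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```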