---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: false
library_name: diffusers
---

# Stable Diffusion Inpainting model card

### ⚠️ This repository is a mirror of the now deprecated `runwayml/stable-diffusion-inpainting`. Neither this repository nor its organization is affiliated in any way with RunwayML.
Modifications to the original model card are in <span style="color:crimson">red</span> or <span style="color:darkgreen">green</span>.


Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input, with the extra capability of inpainting the pictures by using a mask.

The **Stable-Diffusion-Inpainting** was initialized with the weights of [Stable-Diffusion-v-1-2](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original). First came 595k steps of regular training, then 440k steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+”, with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and, in 25% of cases, mask everything.
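
The extra mask channels are visible directly in the checkpoint's UNet configuration. A minimal sketch, assuming the `diffusers` weights hosted in this repository:

```python
from diffusers import UNet2DConditionModel

# Load only the UNet from this repository's diffusers weights.
unet = UNet2DConditionModel.from_pretrained(
    "sd-legacy/stable-diffusion-inpainting", subfolder="unet"
)

# 4 latent channels + 4 masked-image latent channels + 1 mask channel = 9.
print(unet.config.in_channels)  # 9
```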

[Open In Spaces](https://huggingface.co/spaces/sd-legacy/stable-diffusion-inpainting) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
:-------------------------:|:-------------------------:|
## Examples:

You can use this with the [🧨Diffusers library](https://github.com/huggingface/diffusers), the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion) (<span style="color:crimson">now deprecated</span>), or <span style="color:darkgreen">AUTOMATIC1111</span>.

### Use with Diffusers

```python
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "sd-legacy/stable-diffusion-inpainting",
    revision="fp16",
    torch_dtype=torch.float16,
)
# Move to GPU for reasonable speed (requires CUDA).
pipe = pipe.to("cuda")

# `image` and `mask_image` should be PIL images.
# The mask structure is white for inpainting and black for keeping as is.
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
image = Image.open(requests.get(img_url, stream=True).raw)
mask_image = Image.open(requests.get(mask_url, stream=True).raw)

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=image, mask_image=mask_image).images[0]
image.save("./yellow_cat_on_park_bench.png")
```
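
If you want to draw your own `mask_image` rather than download one, a minimal sketch with PIL (the rectangle coordinates here are purely illustrative):

```python
from PIL import Image, ImageDraw

# Start from an all-black mask: black means "keep as is".
mask_image = Image.new("L", (512, 512), 0)
draw = ImageDraw.Draw(mask_image)

# Paint the region to be inpainted white.
draw.rectangle((150, 200, 360, 420), fill=255)
```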

**How it works:**
`image` | `mask_image`
:-------------------------:|:-------------------------:|
<img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" alt="drawing" width="300"/> | <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" alt="drawing" width="300"/>


`prompt` | `Output`
:-------------------------:|:-------------------------:|
<span style="position: relative;bottom: 150px;">Face of a yellow cat, high resolution, sitting on a park bench</span> | <img src="https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/test.png" alt="drawing" width="300"/>

### Use with Original GitHub Repository <span style="color:darkgreen">or AUTOMATIC1111</span>

1. Download the weights [sd-v1-5-inpainting.ckpt](https://huggingface.co/sd-legacy/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt).
2. Follow the instructions [here](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) (<span style="color:crimson">now deprecated</span>).
3. <span style="color:darkgreen">Use it with <a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">AUTOMATIC1111</a>.</span>

## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying out in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/runwayml/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**

      @InProceedings{Rombach_2022_CVPR,
          author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
          title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month     = {June},
          year      = {2022},
          pages     = {10684-10695}
      }

# Uses

## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include:

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.

#### Out-of-Scope Use
The model was not trained to produce factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation.
- Representations of egregious violence and gore.
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.

## Limitations and Bias

### Limitations

- The model does not achieve perfect photorealism.
- The model cannot render legible text.
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”.
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy.
- The model was trained on the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult material and is not fit for product use without additional safety mechanisms and considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.

### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.


## Training

**Training Data**
The model developers used the following dataset for training the model:

- LAION-2B (en) and subsets thereof (see next section)

**Training Procedure**
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,

- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of f = 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the sketch after this list).
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.

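The shape arithmetic is easy to verify against the autoencoder shipped in this repository. A minimal sketch, assuming the `diffusers` weights here and using a random tensor in place of a real image:

```python
import torch
from diffusers import AutoencoderKL

# Load only the VAE from this repository's diffusers weights.
vae = AutoencoderKL.from_pretrained(
    "sd-legacy/stable-diffusion-inpainting", subfolder="vae"
)

# With f = 8, a 512x512x3 image maps to a 64x64x4 latent.
image = torch.randn(1, 3, 512, 512)
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()
print(latents.shape)  # torch.Size([1, 4, 64, 64])
```
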
We currently provide six checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt`, `sd-v1-3.ckpt`, `sd-v1-4.ckpt`, `sd-v1-5.ckpt`, and `sd-v1-5-inpainting.ckpt`,
which were trained as follows:

- `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
  194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`.
  515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
  filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- `sd-v1-4.ckpt`: Resumed from `sd-v1-2.ckpt`. 225k steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- `sd-v1-5.ckpt`: Resumed from `sd-v1-2.ckpt`. 595k steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve classifier-free guidance sampling.
- `sd-v1-5-inpainting.ckpt`: Resumed from `sd-v1-2.ckpt`. 595k steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve classifier-free guidance sampling. Then 440k steps of inpainting training at resolution `512x512` on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and, in 25% of cases, mask everything.


- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048 (see the check below)
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant

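As a quick sanity check on the stated effective batch size, the factors read as 32 nodes x 8 GPUs per node x 2 gradient-accumulation steps x a per-GPU batch of 4 (this per-factor interpretation is our reading of the figures above):

```python
# 32 nodes x 8 GPUs/node x 2 grad-accumulation steps x per-GPU batch of 4
nodes, gpus_per_node, grad_accum, per_gpu_batch = 32, 8, 2, 4
effective_batch = nodes * gpus_per_node * grad_accum * per_gpu_batch
assert effective_batch == 2048
```
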
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-1-to-v1-5.png)

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.

## Inpainting Evaluation
To assess the performance of the inpainting model, we used the same evaluation
protocol as in our [LDM paper](https://arxiv.org/abs/2112.10752). Since the
Stable Diffusion Inpainting Model accepts a text input, we simply used a fixed
prompt of `photograph of a beautiful empty scene, highest quality settings`.
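
In `diffusers` terms, an evaluation-style call with that fixed prompt looks like the sketch below, with `image` and `mask_image` as in the example above. The step count and guidance scale are illustrative assumptions, since the protocol does not state them:

```python
result = pipe(
    prompt="photograph of a beautiful empty scene, highest quality settings",
    image=image,
    mask_image=mask_image,
    num_inference_steps=50,  # assumption: matches the 50-step sampling used elsewhere in this card
    guidance_scale=7.5,      # assumption: diffusers' default guidance scale
).images[0]
```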

| Model                       | FID  | LPIPS           |
|-----------------------------|------|-----------------|
| Stable Diffusion Inpainting | 1.00 | 0.141 (± 0.082) |
| Latent Diffusion Inpainting | 1.50 | 0.137 (± 0.080) |
| CoModGAN                    | 1.82 | 0.15            |
| LaMa                        | 2.21 | 0.134 (± 0.080) |

## Environmental Impact

**Stable Diffusion v1** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.

- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
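
The stated figure is consistent with the formula in the last item, taking the A100 PCIe 40GB's 250 W board power and the grid carbon intensity implied by the numbers (0.3 kg CO2 eq./kWh is our back-calculation, not a figure from this card):

```python
power_kw = 0.25          # A100 PCIe 40GB board power: 250 W
hours = 150_000          # hours used, from the list above
carbon_kg_per_kwh = 0.3  # implied grid intensity (back-calculated assumption)

emissions_kg = power_kw * hours * carbon_kg_per_kwh
print(emissions_kg)  # 11250.0
```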


## Citation

```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```

*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*