docs/source/en/training/dreambooth.mdx (27 additions, 20 deletions)
@@ -50,6 +50,20 @@ from accelerate.utils import write_basic_config
 write_basic_config()
 ```

+Finally, download a [few images of a dog](https://huggingface.co/datasets/diffusers/dog-example) to DreamBooth with:
+
+```py
+from huggingface_hub import snapshot_download
+
+local_dir = "./dog"
+snapshot_download(
+    "diffusers/dog-example",
+    local_dir=local_dir,
+    repo_type="dataset",
+    ignore_patterns=".gitattributes",
+)
+```
+
 ## Finetuning

 <Tip warning={true}>
@@ -60,22 +74,13 @@ DreamBooth finetuning is very sensitive to hyperparameters and easy to overfit.

 <frameworkcontent>
 <pt>
-Let's try DreamBooth with a
-[few images of a dog](https://huggingface.co/datasets/diffusers/dog-example);
-download and save them to a directory and then set the `INSTANCE_DIR` environment variable to that path:
+Set the `INSTANCE_DIR` environment variable to the path of the directory containing the dog images.

-```python
-local_dir = "./path_to_training_images"
-snapshot_download(
-    "diffusers/dog-example",
-    local_dir=local_dir, repo_type="dataset",
-    ignore_patterns=".gitattributes",
-)
-```
+Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path`] argument.

 ```bash
 export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="path_to_training_images"
+export INSTANCE_DIR="./dog"
 export OUTPUT_DIR="path_to_saved_model"
 ```

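As a reading aid, here is a minimal sketch of how a `MODEL_NAME` value is consumed through `pretrained_model_name_or_path` (it assumes the variable is exported as in the bash block above):

```py
import os

from diffusers import DiffusionPipeline

# MODEL_NAME can be a Hub repository id such as "CompVis/stable-diffusion-v1-4"
# or a local directory containing the model weights.
pipeline = DiffusionPipeline.from_pretrained(os.environ["MODEL_NAME"])
```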
@@ -105,11 +110,13 @@ Before running the script, make sure you have the requirements installed:
 pip install -U -r requirements.txt
 ```

+Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path`] argument.
+
 Now you can launch the training script with the following command:
docs/source/en/training/lora.mdx (6 additions, 2 deletions)
@@ -52,7 +52,9 @@ Finetuning a model like Stable Diffusion, which has billions of parameters, can

 Let's finetune [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate your own Pokémon.

-To start, make sure you have the `MODEL_NAME` and `DATASET_NAME` environment variables set. The `OUTPUT_DIR` and `HUB_MODEL_ID` variables are optional and specify where to save the model to on the Hub:
+Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path`] argument. You'll also need to set the `DATASET_NAME` environment variable to the name of the dataset you want to train on.
+
+The `OUTPUT_DIR` and `HUB_MODEL_ID` variables are optional and specify where to save the model to on the Hub:
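For orientation, a minimal sketch of peeking at the dataset that `DATASET_NAME` refers to in this example; the `image` and `text` column names are an assumption about the dataset's schema:

```py
from datasets import load_dataset

# The caption dataset used in the LoRA text-to-image example.
dataset = load_dataset("lambdalabs/pokemon-blip-captions", split="train")

sample = dataset[0]
print(sample["text"])   # a BLIP-generated caption
sample["image"].show()  # the paired Pokémon image (PIL)
```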
@@ -140,7 +142,9 @@ Load the LoRA weights from your finetuned model *on top of the base model weights*

 Let's finetune [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) with DreamBooth and LoRA with some 🐶 [dog images](https://drive.google.com/drive/folders/1BO_dyz-p65qhBRRMRA4TbZ8qW4rB99JZ). Download and save these images to a directory.

-To start, make sure you have the `MODEL_NAME` and `INSTANCE_DIR` (path to directory containing images) environment variables set. The `OUTPUT_DIR` variables is optional and specifies where to save the model to on the Hub:
+To start, specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path`] argument. You'll also need to set `INSTANCE_DIR` to the path of the directory containing the images.
+
+The `OUTPUT_DIR` variable is optional and specifies where to save the model to on the Hub:
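Since the hunk header above mentions loading the LoRA weights on top of the base model weights, here is a minimal sketch of that step; the paths are placeholders, and the attention-processor loader shown is one of the APIs diffusers exposes for this, so details may differ by version:

```py
import torch
from diffusers import StableDiffusionPipeline

base_model = "runwayml/stable-diffusion-v1-5"   # base checkpoint
lora_model_path = "path_to_saved_model"         # placeholder: output directory of the LoRA run

pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)
# Attach the finetuned LoRA attention weights on top of the frozen base UNet weights.
pipe.unet.load_attn_procs(lora_model_path)
pipe.to("cuda")

image = pipe("A pokemon with blue eyes.").images[0]
```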
docs/source/en/training/text2image.mdx (5 additions, 1 deletion)
@@ -72,7 +72,9 @@ To load a checkpoint to resume training, pass the argument `--resume_from_checkpoint`

 <frameworkcontent>
 <pt>
-Launch the [PyTorch training script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) for a fine-tuning run on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset like this:
+Launch the [PyTorch training script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) for a fine-tuning run on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset like this.
+
+Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path`] argument.
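Once such a fine-tuning run finishes, the pipeline saved under `--output_dir` loads like any other checkpoint; a minimal sketch, where the path and prompt are placeholders:

```py
from diffusers import StableDiffusionPipeline

# Placeholder path: the --output_dir used for the fine-tuning run.
pipe = StableDiffusionPipeline.from_pretrained("path_to_saved_model").to("cuda")

image = pipe(prompt="yoda pokemon").images[0]
image.save("pokemon.png")
```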
@@ -141,6 +143,8 @@ Before running the script, make sure you have the requirements installed:
 pip install -U -r requirements_flax.txt
 ```

+Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path`] argument.
+
 Now you can launch the [Flax training script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_flax.py) like this:
@@ -81,9 +81,20 @@ To resume training from a saved checkpoint, pass the following argument to the training script:

 ## Finetuning

-For your training dataset, download these [images of a cat statue](https://drive.google.com/drive/folders/1fmJMs25nxS_rSNqS5hTcRdLem_YQXbq5) and store them in a directory.
+For your training dataset, download these [images of a cat toy](https://huggingface.co/datasets/diffusers/cat_toy_example) and store them in a directory:

-Set the `MODEL_NAME` environment variable to the model repository id, and the `DATA_DIR` environment variable to the path of the directory containing the images. Now you can launch the [training script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py):
+Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path`] argument. You'll also need to set the `DATA_DIR` environment variable to the path of the directory containing the images.
+
+Now you can launch the [training script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py):

 <Tip>
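A download step in the style of the dreambooth.mdx change above would look roughly like this sketch; the `./cat_toy` directory name is an assumption, and `DATA_DIR` would then point at it:

```py
from huggingface_hub import snapshot_download

local_dir = "./cat_toy"  # assumed target directory for the training images
snapshot_download(
    "diffusers/cat_toy_example",
    local_dir=local_dir,
    repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```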
@@ -95,7 +106,7 @@ Set the `MODEL_NAME` environment variable to the model repository id, and the `D
@@ -121,11 +132,13 @@ Before you begin, make sure you install the Flax specific dependencies:
 pip install -U -r requirements_flax.txt
 ```

+Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path`] argument.
+
 Then you can launch the [training script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion_flax.py):
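After training, the learned concept is used by prompting with the placeholder token. A minimal sketch for the PyTorch script's output; the saved-model path and the `<cat-toy>` token are assumptions about the training run:

```py
import torch
from diffusers import StableDiffusionPipeline

# Placeholder path: the output directory of the textual inversion run.
pipe = StableDiffusionPipeline.from_pretrained(
    "path_to_saved_model", torch_dtype=torch.float16
).to("cuda")

# "<cat-toy>" stands in for whatever placeholder token was passed during training.
image = pipe("A <cat-toy> backpack", num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```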