💡 If you want to increase the trainable capacity, you can associate your placeholder token, *e.g.* `<cat-toy>`, with multiple embedding vectors. This can help the model better capture the style of more complex images. To enable training multiple embedding vectors, simply pass:

```bash
--num_vectors=5
```
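
A minimal sketch of how this flag might fit into a full launch command is shown below; the checkpoint path, data directory, and hyperparameters are placeholders rather than values prescribed by this guide:

```bash
# Sketch only: replace the model path, data directory, and hyperparameters with your own.
accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path="path-or-id-of-a-stable-diffusion-checkpoint" \
  --train_data_dir="./cat_toy_images" \
  --learnable_property="object" \
  --placeholder_token="<cat-toy>" \
  --initializer_token="toy" \
  --num_vectors=5 \
  --resolution=512 \
  --train_batch_size=1 \
  --max_train_steps=3000 \
  --learning_rate=5.0e-04 \
  --output_dir="textual_inversion_cat"
```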
</Tip>
</pt>
<jax>
If you have access to TPUs, try out the [Flax training script](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion_flax.py) to train even faster (this'll also work for GPUs). With the same configuration settings, the Flax training script should be at least 70% faster than the PyTorch training script! ⚡️
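
A Flax launch can be sketched in a similar way; the flags below mirror the PyTorch script and are assumptions here, so check `python textual_inversion_flax.py --help` for the authoritative argument list:

```bash
# Sketch only: paths and hyperparameters are placeholders.
python textual_inversion_flax.py \
  --pretrained_model_name_or_path="path-or-id-of-a-flax-stable-diffusion-checkpoint" \
  --train_data_dir="./cat_toy_images" \
  --learnable_property="object" \
  --placeholder_token="<cat-toy>" \
  --initializer_token="toy" \
  --resolution=512 \
  --train_batch_size=1 \
  --max_train_steps=3000 \
  --learning_rate=5.0e-04 \
  --output_dir="textual_inversion_cat"
```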
`examples/research_projects/mulit_token_textual_inversion/README.md`
## [Deprecated] Multi Token Textual Inversion
**IMPORTANT: This research project is deprecated. Multi Token Textual Inversion is now supported natively in [the official textual inversion example](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion#running-locally-with-pytorch).**
The author of this project is [Isamu Isozaki](https://github.com/isamu-isozaki) - please make sure to tag the author for issues and PRs as well as @patrickvonplaten.
We add multi token support to textual inversion.

A full training run takes ~1 hour on one V100 GPU.
**Note**: As described in [the official paper](https://arxiv.org/abs/2208.01618), only one embedding vector is used for the placeholder token, *e.g.* `"<cat-toy>"`. However, one can also add multiple embedding vectors for the placeholder token to increase the number of fine-tunable parameters. This can help the model learn more complex details. To use multiple embedding vectors, set `--num_vectors` to a number larger than one, *e.g.*:

```
--num_vectors 5
```

The saved textual inversion vectors will then be larger in size compared to the default case.
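
As a quick sanity check, you can inspect the saved embeddings to confirm that multiple vectors were learned. The snippet below is only a sketch: it assumes the run saved a `learned_embeds.bin` file in the output directory containing a dictionary that maps the placeholder token to its embedding tensor, which may differ depending on the script version:

```python
import torch

# Hypothetical path: adjust to match your --output_dir.
learned_embeds = torch.load("textual_inversion_cat/learned_embeds.bin", map_location="cpu")

for token, embedding in learned_embeds.items():
    # With --num_vectors 5 the tensor is expected to have 5 rows,
    # one embedding vector per sub-token, instead of a single vector.
    print(token, tuple(embedding.shape))
```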
### Inference
Once you have trained a model using the above command, inference can be done simply with the `StableDiffusionPipeline`. Make sure to include the `placeholder_token` in your prompt.
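
A minimal inference sketch is shown below; the model path is a placeholder for the output directory of your own training run, and the prompt assumes the `<cat-toy>` placeholder token from the examples above:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder path: point this at the output directory of your training run.
model_id = "textual_inversion_cat"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# The prompt must contain the placeholder token used during training.
prompt = "A <cat-toy> backpack"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("cat-backpack.png")
```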