StableDiffusionPipeline.from_ckpt is not working on dev version #3450

Description

@takuma104

Describe the bug

It seems that from_ckpt is not working properly on the latest main, while release v0.16.1 is fine. I tried to trace the recent changes, but couldn't pin down which one caused this.

diff: 9b14ce3...main

Reproduction

from diffusers import StableDiffusionPipeline

# Fails on 0.17.0.dev0 (main); works on v0.16.1.
pipeline = StableDiffusionPipeline.from_ckpt(
    'https://huggingface.co/gsdf/Counterfeit-V3.0/blob/main/Counterfeit-V3.0_fp16.safetensors',
)

Logs

RuntimeError: Error(s) in loading state_dict for AutoencoderKL:
	Missing key(s) in state_dict: "encoder.mid_block.attentions.0.to_q.weight", "encoder.mid_block.attentions.0.to_q.bias", "encoder.mid_block.attentions.0.to_k.weight", "encoder.mid_block.attentions.0.to_k.bias", "encoder.mid_block.attentions.0.to_v.weight", "encoder.mid_block.attentions.0.to_v.bias", "encoder.mid_block.attentions.0.to_out.0.weight", "encoder.mid_block.attentions.0.to_out.0.bias", "decoder.mid_block.attentions.0.to_q.weight", "decoder.mid_block.attentions.0.to_q.bias", "decoder.mid_block.attentions.0.to_k.weight", "decoder.mid_block.attentions.0.to_k.bias", "decoder.mid_block.attentions.0.to_v.weight", "decoder.mid_block.attentions.0.to_v.bias", "decoder.mid_block.attentions.0.to_out.0.weight", "decoder.mid_block.attentions.0.to_out.0.bias". 
	Unexpected key(s) in state_dict: "encoder.mid_block.attentions.0.key.bias", "encoder.mid_block.attentions.0.key.weight", "encoder.mid_block.attentions.0.proj_attn.bias", "encoder.mid_block.attentions.0.proj_attn.weight", "encoder.mid_block.attentions.0.query.bias", "encoder.mid_block.attentions.0.query.weight", "encoder.mid_block.attentions.0.value.bias", "encoder.mid_block.attentions.0.value.weight", "decoder.mid_block.attentions.0.key.bias", "decoder.mid_block.attentions.0.key.weight", "decoder.mid_block.attentions.0.proj_attn.bias", "decoder.mid_block.attentions.0.proj_attn.weight", "decoder.mid_block.attentions.0.query.bias", "decoder.mid_block.attentions.0.query.weight", "decoder.mid_block.attentions.0.value.bias", "decoder.mid_block.attentions.0.value.weight". 
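
Possible workaround (untested)

As a stopgap, and purely inferred from the key names in the traceback above, renaming the legacy VAE attention keys before they are loaded into AutoencoderKL might get past the error. rename_vae_attention_keys below is a hypothetical helper sketch, not part of the diffusers API:

def rename_vae_attention_keys(state_dict):
    # Hypothetical helper: map the legacy mid-block attention key names
    # (query/key/value/proj_attn) onto the names the current AutoencoderKL
    # expects (to_q/to_k/to_v/to_out.0), as listed in the traceback above.
    mapping = {
        ".query.": ".to_q.",
        ".key.": ".to_k.",
        ".value.": ".to_v.",
        ".proj_attn.": ".to_out.0.",
    }
    renamed = {}
    for key, value in state_dict.items():
        new_key = key
        if "mid_block.attentions" in key:
            for old, new in mapping.items():
                new_key = new_key.replace(old, new)
            # Some checkpoints store these as 1x1 convs, while the new
            # attention layers are Linear; drop the trailing spatial dims.
            if new_key != key and value.ndim == 4:
                value = value.squeeze(-1).squeeze(-1)
        renamed[new_key] = value
    return renamed

This would have to be applied to the converted VAE state dict before load_state_dict is called, so it is only useful if you build the pipeline components manually rather than through from_ckpt.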


System Info

- `diffusers` version: 0.17.0.dev0
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.9
- PyTorch version (GPU RTX3090): 2.0.0+cu117 (True)
- Huggingface_hub version: 0.13.2
- Transformers version: 4.25.1
- Accelerate version: 0.19.0.dev0
- xFormers version: 0.0.17+c36468d.d20230318
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
