FlowMatchEulerDiscreteScheduler Timesteps have Floating Point Errors #11749

Open
@JStyborski

Description

Describe the bug

When using the index_for_timestep method of FlowMatchEulerDiscreteScheduler (scheduling_flow_match_euler_discrete.py), the self.timesteps tensor after init holds floating point timestep values (by default, 1000 down to 1). Some of these values carry floating point error, so an exact-equality lookup such as (self.timesteps == query_timestep).nonzero() returns an empty tensor. As a result, index_for_timestep, and any function that calls it, fails for the affected query timesteps, as sketched below.
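
For reference, index_for_timestep performs an exact-equality lookup; the sketch below paraphrases that logic (it is not verbatim library code) to show why an empty match turns into a hard error:

import torch

def index_for_timestep_sketch(schedule_timesteps: torch.Tensor, timestep: torch.Tensor) -> int:
    # Exact float equality: a query of 254 misses a stored value of
    # 254.00001525878906
    indices = (schedule_timesteps == timestep).nonzero()
    # On an empty match, this indexing raises IndexError, which then
    # surfaces in every caller of index_for_timestep
    return indices[0].item()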

As a monkeypatch, I applied the following after instantiating the scheduler:
noise_scheduler.timesteps = torch.round(noise_scheduler.timesteps).to(dtype=torch.long)
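
Applied end to end, the patch makes the exact-match lookup succeed (a minimal sketch; the expected index follows from the value shown in the reproduction below):

from diffusers import FlowMatchEulerDiscreteScheduler
import torch

noise_scheduler = FlowMatchEulerDiscreteScheduler()
# Round the float timesteps to the nearest integer and store them as longs
noise_scheduler.timesteps = torch.round(noise_scheduler.timesteps).to(dtype=torch.long)

query_timestep = torch.tensor([254], dtype=torch.long)
# Both sides are now integer tensors, so exact equality matches
print((noise_scheduler.timesteps == query_timestep).nonzero())
# should print tensor([[746]])

Note that rounding assumes the timesteps are meant to be integers, which holds for the default 1000-step schedule but not necessarily after calling set_timesteps with a smaller number of inference steps.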

Reproduction

from diffusers import FlowMatchEulerDiscreteScheduler
import torch

noise_scheduler = FlowMatchEulerDiscreteScheduler()
print(noise_scheduler.timesteps[746].item())
# prints 254.00001525878906

query_timestep = torch.tensor([254]).to(dtype=torch.long)
print((noise_scheduler.timesteps == query_timestep).nonzero())
# prints tensor([], size=(0, 1), dtype=torch.int64)
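
An approximate comparison also locates the timestep without mutating the scheduler; the tolerance below is my own choice, shown only to illustrate the failure mode, not a proposed fix:

import torch
from diffusers import FlowMatchEulerDiscreteScheduler

noise_scheduler = FlowMatchEulerDiscreteScheduler()
query_timestep = torch.tensor(254.0)

# Exact float equality misses 254.00001525878906 ...
print((noise_scheduler.timesteps == query_timestep).nonzero())
# prints tensor([], size=(0, 1), dtype=torch.int64)

# ... while a small absolute tolerance finds it
print(torch.isclose(noise_scheduler.timesteps, query_timestep, atol=1e-3).nonzero())
# should print tensor([[746]])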

System Info

  • Note: this bug occurs on both Windows 11 and Linux.

Windows:

  • 🤗 Diffusers version: 0.33.1
  • Platform: Windows-11-10.0.22000-SP0
  • Running on Google Colab?: No
  • Python version: 3.12.8
  • PyTorch version (GPU?): 2.6.0+cu124 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.33.0
  • Transformers version: 4.48.1
  • Accelerate version: 1.3.0
  • PEFT version: 0.14.0
  • Bitsandbytes version: not installed
  • Safetensors version: 0.5.3
  • xFormers version: not installed
  • Accelerator: NVIDIA GeForce GTX 1080 Ti, 11264 MiB
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Linux:

  • 🤗 Diffusers version: 0.33.1
  • Platform: Linux-5.15.0-133-generic-x86_64-with-glibc2.35
  • Running on Google Colab?: No
  • Python version: 3.12.8
  • PyTorch version (GPU?): 2.6.0+cu124 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.33.0
  • Transformers version: 4.48.1
  • Accelerate version: 1.3.0
  • PEFT version: 0.14.0
  • Bitsandbytes version: not installed
  • Safetensors version: 0.5.3
  • xFormers version: not installed
  • Accelerator: Quadro RTX 6000, 24576 MiB
    Quadro RTX 6000, 24576 MiB
    Quadro RTX 6000, 24576 MiB
    Quadro RTX 6000, 24576 MiB
    Quadro RTX 6000, 24576 MiB
    Quadro RTX 6000, 24576 MiB
    Quadro RTX 6000, 24576 MiB
    Quadro RTX 6000, 24576 MiB
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Who can help?

@yiyixuxu
