Description
I am trying to use the Lazy Tensor Core (LTC) backend for some of my tests. To use the LTC backend, torch-mlir must be built with the TORCH_MLIR_ENABLE_LTC flag turned ON. I am using the following CMake build flags:
```shell
cmake -GNinja -Bbuild \
  `# Enables "--debug" and "--debug-only" flags for the "torch-mlir-opt" tool` \
  -DCMAKE_BUILD_TYPE=RelWithDebInfo \
  -DLLVM_ENABLE_ASSERTIONS=ON \
  -DPython3_FIND_VIRTUALENV=ONLY \
  -DMLIR_ENABLE_BINDINGS_PYTHON=ON \
  -DLLVM_TARGETS_TO_BUILD=host \
  -DLLVM_ENABLE_PROJECTS=mlir \
  -DLLVM_EXTERNAL_PROJECTS="torch-mlir" \
  -DLLVM_EXTERNAL_TORCH_MLIR_SOURCE_DIR="$PWD" \
  -DLLVM_CCACHE_BUILD=ON \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DCMAKE_C_COMPILER_LAUNCHER=ccache \
  -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
  -DTORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS=ON \
  -DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=ON \
  -DTORCH_MLIR_ENABLE_LTC=ON \
  `# For building LLVM "in-tree"` \
  externals/llvm-project/llvm
```
This is the failure I get when building torch-mlir in-tree:
```
[967/5829] Generating Lazy Tensor Core IR Nodes
FAILED: tools/torch-mlir/generated_backend.hash tools/torch-mlir/projects/ltc/csrc/base_lazy_backend/generated/LazyNativeFunctions.cpp tools/torch-mlir/projects/ltc/csrc/base_lazy_backend/generated/RegisterLazy.cpp tools/torch-mlir/projects/ltc/csrc/base_lazy_backend/generated/shape_inference.cpp /home/zahidw/torch-mlir/build/tools/torch-mlir/generated_backend.hash /home/zahidw/torch-mlir/build/tools/torch-mlir/projects/ltc/csrc/base_lazy_backend/generated/LazyNativeFunctions.cpp /home/zahidw/torch-mlir/build/tools/torch-mlir/projects/ltc/csrc/base_lazy_backend/generated/RegisterLazy.cpp /home/zahidw/torch-mlir/build/tools/torch-mlir/projects/ltc/csrc/base_lazy_backend/generated/shape_inference.cpp
cd /home/zahidw/torch-mlir/build/tools/torch-mlir/projects/ltc/csrc/base_lazy_backend && /home/zahidw/Envs/mlir_torch/bin/python3.12 /home/zahidw/torch-mlir/build_tools/autogen_ltc_backend.py -b /home/zahidw/torch-mlir/build/tools/torch-mlir
WARNING:root:Could not find `ts_native_functions.yaml` at /home/zahidw/Envs/mlir_torch/lib/python3.12/site-packages/aten/src/ATen/native/ts_native_functions.yaml
Traceback (most recent call last):
  File "/home/zahidw/torch-mlir/build_tools/autogen_ltc_backend.py", line 556, in <module>
    main(args)
  File "/home/zahidw/torch-mlir/build_tools/autogen_ltc_backend.py", line 517, in main
    generator()
  File "/home/zahidw/torch-mlir/build_tools/autogen_ltc_backend.py", line 502, in __call__
    self.generate_backend()
  File "/home/zahidw/torch-mlir/build_tools/autogen_ltc_backend.py", line 483, in generate_backend
    torchgen.gen_lazy_tensor.run_gen_lazy_tensor(
  File "/home/zahidw/Envs/mlir_torch/lib/python3.12/site-packages/torchgen/gen_lazy_tensor.py", line 468, in run_gen_lazy_tensor
    fm.write_with_template(
  File "/home/zahidw/Envs/mlir_torch/lib/python3.12/site-packages/torchgen/utils.py", line 189, in write_with_template
    substitute_out = self.substitute_with_template(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zahidw/Envs/mlir_torch/lib/python3.12/site-packages/torchgen/utils.py", line 150, in substitute_with_template
    env = env_callable()
    ^^^^^^^^^^^^^^
  File "/home/zahidw/Envs/mlir_torch/lib/python3.12/site-packages/torchgen/gen_lazy_tensor.py", line 501, in <lambda>
    "native_function_definitions": list(
    ^^^^^
  File "/home/zahidw/Envs/mlir_torch/lib/python3.12/site-packages/torchgen/gen_lazy_tensor.py", line 387, in concat_map_codegen
    yield from func(f)
    ^^^^^^^
  File "/home/zahidw/Envs/mlir_torch/lib/python3.12/site-packages/torchgen/context.py", line 96, in wrapper
    return func(slf, f)
    ^^^^^^^^^^^^
  File "/home/zahidw/Envs/mlir_torch/lib/python3.12/site-packages/torchgen/dest/lazy_ir.py", line 630, in __call__
    {self.get_device(func, schema)}
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zahidw/Envs/mlir_torch/lib/python3.12/site-packages/torchgen/dest/lazy_ir.py", line 484, in get_device
    assert len(value_types_names) > 0 or len(optional_devices) > 0, (
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Expected at least one Value or Device type
in native_functions.yaml line /home/zahidw/Envs/mlir_torch/lib/python3.12/site-packages/torchgen/packaged/ATen/native/native_functions.yaml:178:
_assert_scalar(Scalar self, str assert_msg) -> ()
[980/5829] Building X86GenInstrInfo.inc...
ninja: build stopped: subcommand failed.
```
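For context, the assertion fires in torchgen's lazy IR codegen: the schema `_assert_scalar(Scalar self, str assert_msg) -> ()` has no Tensor arguments and no Device argument, so `get_device` cannot infer a device for the generated lazy node. One possible workaround (an assumption on my part, not verified) is to skip this op in the generator's blacklist, which `build_tools/autogen_ltc_backend.py` reads from `build_tools/autogen_ltc_backend.yaml`:

```yaml
# build_tools/autogen_ltc_backend.yaml (hypothetical addition, untested)
# Ops listed under "blacklist" are excluded from LTC codegen.
blacklist:
  - _assert_scalar
```

If that is accepted, the failing step can be re-run in isolation with the same command Ninja used above to confirm the generator completes.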
- Branch: main
- OS: Ubuntu 24.04
- Environment packages installed:
```
cfgv==3.4.0
cmake==4.0.2
dill==0.4.0
distlib==0.3.9
filelock==3.18.0
fsspec==2025.3.2
identify==2.6.10
Jinja2==3.1.6
MarkupSafe==3.0.2
mpmath==1.3.0
multiprocess==0.70.18
nanobind==2.7.0
networkx==3.5rc0
ninja==1.11.1.4
nodeenv==1.9.1
numpy==2.2.6
onnx==1.16.1
packaging==25.0
pillow==11.2.1
platformdirs==4.3.8
pre_commit==4.2.0
protobuf==6.31.0
pybind11==2.13.6
PyYAML==6.0.2
setuptools==80.7.1
sympy==1.14.0
torch==2.8.0.dev20250512+cpu
torchvision==0.22.0.dev20250512+cpu
typing_extensions==4.13.2
virtualenv==20.31.2
wheel==0.45.1
```