🐛 Describe the bug
The following model fails to load at runtime after lowering to Core ML, presumably because the model takes no inputs. When the model is changed to return x + torch.randn(...) (i.e., it takes a tensor input; see the sketch after the failure output below), it loads successfully. This isn't a particularly high-priority case, but it does appear to be a bug.
Repro:
import torch
from executorch.backends.apple.coreml.partition import CoreMLPartitioner
from executorch.exir import to_edge_transform_and_lower, EdgeCompileConfig, to_edge
from executorch.extension.pybindings.portable_lib import _load_for_executorch_from_buffer

# Model takes no inputs and returns a random tensor.
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self):
        return torch.randn(5, 5)

model = Model()
inputs = ()
print(inputs)

eager_outputs = model(*inputs)
print(f"Eager: {eager_outputs.shape} {eager_outputs}")

ep = torch.export.export(model.eval(), inputs)
print(ep)
print(f"EP: {ep.module()(*inputs)}")

# Lower to Core ML and serialize to an ExecuTorch program.
lowered = to_edge_transform_and_lower(
    ep,
    partitioner=[CoreMLPartitioner()],
    compile_config=EdgeCompileConfig(_check_ir_validity=False),
).to_executorch()
print(lowered.exported_program())

# Loading the serialized program fails here.
et_model = _load_for_executorch_from_buffer(lowered.buffer)
et_outputs = et_model([*inputs])[0]
print(et_outputs)
et_outputs - eager_outputs
Outputs:
[ETCoreMLModelManager.mm:566] [Core ML] Metadata is invalid or missing.
[backend_delegate.mm:288] [Core ML] Model init failed Metadata is invalid or missing.
[coreml_backend_delegate.mm:193] CoreMLBackend: Failed to init the model.
[method.cpp:113] Init failed for backend CoreMLBackend: 0x23
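For contrast, a minimal sketch of the working variant mentioned above, where the model takes a tensor input. The class name ModelWithInput and the example input shape are illustrative; the export and lowering steps mirror the repro and, per the description, this version loads without the metadata error.

import torch
from executorch.backends.apple.coreml.partition import CoreMLPartitioner
from executorch.exir import to_edge_transform_and_lower, EdgeCompileConfig
from executorch.extension.pybindings.portable_lib import _load_for_executorch_from_buffer

class ModelWithInput(torch.nn.Module):
    def forward(self, x):
        # Same computation as the failing repro, but with a real tensor input.
        return x + torch.randn(5, 5)

model = ModelWithInput()
inputs = (torch.randn(5, 5),)  # example input for export

ep = torch.export.export(model.eval(), inputs)
lowered = to_edge_transform_and_lower(
    ep,
    partitioner=[CoreMLPartitioner()],
    compile_config=EdgeCompileConfig(_check_ir_validity=False),
).to_executorch()

# This version loads and runs.
et_model = _load_for_executorch_from_buffer(lowered.buffer)
et_outputs = et_model([*inputs])[0]
print(et_outputs)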
Versions
coremltools version 8.3
executorch commit 67b6009 (Jun 14)