Closed
per_channel_quantized_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
If I understand correctly, one also needs to set

torch.backends.quantized.engine = "fbgemm"

I tried to quantize a model without this step and got strange errors saying certain operations are not supported on the FBGEMM backend. They go away once the engine is set.
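A minimal sketch of the full flow with both steps together, using a small hypothetical model as a stand-in (the tutorial's model is not reproduced here):

```python
import torch
import torch.nn as nn

# Hypothetical toy model used only to illustrate the two steps.
model = nn.Sequential(
    torch.quantization.QuantStub(),
    nn.Linear(4, 4),
    nn.ReLU(),
    torch.quantization.DeQuantStub(),
)
model.eval()

# Step from the tutorial: attach the FBGEMM qconfig.
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")

# The step this issue says is missing: select the matching backend engine
# before preparing/converting, so quantized ops dispatch to FBGEMM kernels.
torch.backends.quantized.engine = "fbgemm"

prepared = torch.quantization.prepare(model)
prepared(torch.randn(1, 4))  # one calibration pass to populate observers
quantized = torch.quantization.convert(prepared)
out = quantized(torch.randn(1, 4))
```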
cc @jerryzh168 @jianyuh @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen