
Do not generate C code for BatchedDot when BLAS flags are missing #550


Merged

Conversation

ricardoV94
Member

@ricardoV94 ricardoV94 commented Dec 13, 2023

Related Issue

Checklist

Type of change

  • New feature / enhancement
  • Bug fix
  • Documentation
  • Maintenance
  • Other (please specify):

@ricardoV94 ricardoV94 added bug Something isn't working C-backend graph rewriting labels Dec 13, 2023
@ricardoV94 ricardoV94 requested a review from twiecki December 13, 2023 08:58
@ricardoV94 ricardoV94 force-pushed the fix_batched_dot_missing_blas_flags branch 3 times, most recently from 5ab17a3 to 8a1644b, on December 13, 2023 10:59
@codecov-commenter

codecov-commenter commented Dec 13, 2023

Codecov Report

Merging #550 (3ac9efc) into main (c38eea0) will increase coverage by 0.00%.
Report is 2 commits behind head on main.
The diff coverage is 94.35%.

Additional details and impacted files

@@           Coverage Diff           @@
##             main     #550   +/-   ##
=======================================
  Coverage   80.90%   80.90%           
=======================================
  Files         162      162           
  Lines       46416    46418    +2     
  Branches    11354    11355    +1     
=======================================
+ Hits        37554    37556    +2     
  Misses       6639     6639           
  Partials     2223     2223           
Files Coverage Δ
pytensor/breakpoint.py 55.00% <ø> (ø)
pytensor/compile/builders.py 77.17% <100.00%> (ø)
pytensor/graph/basic.py 89.08% <ø> (ø)
pytensor/graph/rewriting/kanren.py 100.00% <ø> (ø)
pytensor/ifelse.py 51.70% <100.00%> (ø)
pytensor/link/jax/dispatch/random.py 96.05% <100.00%> (ø)
pytensor/link/jax/dispatch/shape.py 87.09% <ø> (ø)
pytensor/link/jax/dispatch/subtensor.py 88.13% <ø> (ø)
pytensor/link/jax/dispatch/tensor_basic.py 91.96% <ø> (ø)
pytensor/link/numba/dispatch/random.py 98.83% <100.00%> (ø)
... and 36 more

@jessegrabowski
Member

So if BLAS isn't installed and you try to compile a graph with a BatchedDot to C, it just errors out?

@ricardoV94
Member Author

ricardoV94 commented Dec 15, 2023

No, when the C code of an Op raises NotImplementedError, PyTensor switches to the Python implementation. That's the API contract.

I added a test as well, and you can see the thunk is no longer a cthunk (it will be a pythunk), but it still runs.
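
For context, a minimal sketch of that fallback contract (the DoubleNoC Op below is invented for illustration and is not part of this PR): if an Op's c_code raises NotImplementedError, the C linking path gives up on C code for that node and the compiled function uses the Python perform method instead.

```python
import numpy as np

import pytensor
import pytensor.tensor as pt
from pytensor.graph.basic import Apply
from pytensor.link.c.op import COp


class DoubleNoC(COp):
    """Hypothetical Op that doubles its input and has no C implementation."""

    def make_node(self, x):
        x = pt.as_tensor_variable(x)
        return Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        # Python implementation used by the fallback (py)thunk
        (x,) = inputs
        output_storage[0][0] = np.asarray(2 * x)

    def c_code(self, node, name, inputs, outputs, sub):
        # Signal "no C implementation available": PyTensor then falls back
        # to perform() instead of failing the whole compilation
        raise NotImplementedError()


x = pt.vector("x")
f = pytensor.function([x], DoubleNoC()(x))
print(f(np.array([1.0, 2.0], dtype=x.dtype)))  # [2. 4.]
```

With the default mode, the thunk compiled for such a node ends up being a Python thunk rather than a cthunk, which mirrors what the new test checks for BatchedDot when the BLAS flags are missing.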

@jessegrabowski
Copy link
Member

Ah yeah I see that now!

@ricardoV94 ricardoV94 merged commit d607e23 into pymc-devs:main Dec 15, 2023
@ricardoV94 ricardoV94 deleted the fix_batched_dot_missing_blas_flags branch December 15, 2023 08:27
Development

Successfully merging this pull request may close these issues.

New optimization Matmul -> BatchedDot causing problems in systems not properly linked to BLAS
3 participants