Recognize cast data in InferenceData #5646

Merged: 1 commit into main from extract_obs_data-cast_fix, Mar 29, 2022

Conversation

@zaxtax (Contributor) commented Mar 24, 2022

This adds a cast check to extract_obs_data, fixing #5586.
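For context, here is a minimal sketch of the kind of cast check this adds. The helper name extract_obs_data_sketch and the exact structure are illustrative, not the actual diff; the real extract_obs_data in pymc/aesaraf.py also handles other cases (e.g. masked data) that are omitted here.

```python
import numpy as np
from aesara.compile.sharedvalue import SharedVariable
from aesara.graph.basic import Constant
from aesara.scalar.basic import Cast
from aesara.tensor.elemwise import Elemwise


def extract_obs_data_sketch(x) -> np.ndarray:
    """Return the observed data behind a variable, looking through casts."""
    if isinstance(x, Constant):
        return x.data
    if isinstance(x, SharedVariable):
        return x.get_value()
    # The new case: data wrapped in an elementwise Cast (e.g. int64 -> float64)
    # was previously not recognized as data at all.
    if (
        x.owner is not None
        and isinstance(x.owner.op, Elemwise)
        and isinstance(x.owner.op.scalar_op, Cast)
    ):
        data = extract_obs_data_sketch(x.owner.inputs[0])
        return data.astype(x.type.dtype)
    raise TypeError(f"Data cannot be extracted from {x}")
```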

@codecov (bot) commented Mar 24, 2022

Codecov Report

Merging #5646 (b17ba41) into main (d322b81) will decrease coverage by 2.75%.
The diff coverage is 100.00%.

❗ Current head b17ba41 differs from pull request most recent head 202d8a0. Consider uploading reports for the commit 202d8a0 to get more accurate results

@@            Coverage Diff             @@
##             main    #5646      +/-   ##
==========================================
- Coverage   88.57%   85.81%   -2.76%     
==========================================
  Files          75       75              
  Lines       13677    13719      +42     
==========================================
- Hits        12114    11773     -341     
- Misses       1563     1946     +383     
Impacted Files                       Coverage Δ
pymc/aesaraf.py                      87.56% <100.00%> (-3.65%)  ⬇️
pymc/sampling_jax.py                  0.00% <0.00%>   (-96.97%) ⬇️
pymc/distributions/transforms.py     59.63% <0.00%>   (-40.37%) ⬇️
pymc/distributions/dist_math.py      57.71% <0.00%>   (-29.72%) ⬇️
pymc/distributions/logprob.py        69.72% <0.00%>   (-26.61%) ⬇️
pymc/data.py                         63.02% <0.00%>   (-20.59%) ⬇️
pymc/parallel_sampling.py            86.04% <0.00%>   (-0.67%)  ⬇️
pymc/model.py                        85.89% <0.00%>   (-0.14%)  ⬇️
pymc/distributions/multivariate.py   92.15% <0.00%>   (-0.03%)  ⬇️
... and 3 more

@zaxtax (Contributor, Author) commented Mar 24, 2022

I think some of these failing tests might have bugs. For example, should observed variables go in "prior", as some tests assume, or in "observed_data", as happens after this change?

@ricardoV94 (Member) commented:

> I think some of these failing tests might have bugs. For example, should observed variables go in "prior", as some tests assume, or in "observed_data", as happens after this change?

Can you link to the specific tests that are failing?

@zaxtax (Contributor, Author) commented Mar 25, 2022

> I think some of these failing tests might have bugs. For example, should observed variables go in "prior", as some tests assume, or in "observed_data", as happens after this change?

> Can you link to the specific tests that are failing?

pymc/tests/test_sampling.py::TestSamplePriorPredictive::test_shared
https://github.com/pymc-devs/pymc/blob/main/pymc/tests/test_sampling.py#L1070

pymc/tests/test_step.py::TestStepMethods::test_step_continuous
https://github.com/pymc-devs/pymc/blob/main/pymc/tests/test_step.py#L91

I have no idea why the second test fails. The first one fails because gen1.prior doesn't have the observation y after this commit; it is instead in observed_data or prior_predictive. The second test fails even if all discrete variables in the model are made continuous.
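To make the grouping question concrete, here is a hypothetical illustration of the expected behavior after this change; the model and variable names are made up, and the exact group contents depend on the pymc version. Integer observations are cast to the float dtype of the likelihood, and the cast check lets them be recognized as data:

```python
import numpy as np
import pymc as pm

# Integer observations get cast to the float dtype of the Normal internally;
# that cast is what extract_obs_data now looks through.
y_obs = np.array([0, 1, 2], dtype="int64")

with pm.Model():
    mu = pm.Normal("mu", 0.0, 1.0)
    pm.Normal("y", mu, 1.0, observed=y_obs)
    idata = pm.sample_prior_predictive(samples=100)

assert "y" in idata.observed_data     # recognized as data after the fix
assert "y" in idata.prior_predictive  # draws of the observed variable
assert "y" not in idata.prior         # no longer grouped with the prior draws
```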

@ricardoV94 (Member) commented Mar 28, 2022

Thanks @zaxtax

It seems that the first test just needs to be tweaked for the new (correct) expectation.

For the second test, are you sure it is not just flaky? If not, can you narrow down which of the steppers is failing?

@zaxtax (Contributor, Author) commented Mar 28, 2022

> Thanks @zaxtax
> 
> It seems that the first test just needs to be tweaked for the new (correct) expectation.
> 
> For the second test, are you sure it is not just flaky? If not, can you narrow down which of the steppers is failing?

I think it's just flaky. I can make it fail less often if I use more tuning steps. I don't know what it's supposed to test, other than that all the step methods work.

@ricardoV94 (Member) commented Mar 28, 2022

> I think it's just flaky. I can make it fail less often if I use more tuning steps. I don't know what it's supposed to test, other than that all the step methods work.

We can just open an issue about the newly observed flakiness and ignore it for this PR. Either it was already happening before, or something subtle changed; either way, it should be addressed in a separate PR. There is no reason these changes should affect sampling, as sampling happens before any InferenceData conversion.

By the way, do you see it failing with specific step methods? And do you mind opening that issue?

@zaxtax (Contributor, Author) commented Mar 28, 2022

> I think it's just flaky. I can make it fail less often if I use more tuning steps. I don't know what it's supposed to test, other than that all the step methods work.
> 
> We can just open an issue about the newly observed flakiness and ignore it for this PR. Either it was already happening before, or something subtle changed; either way, it should be addressed in a separate PR. There is no reason these changes should affect sampling, as sampling happens before any InferenceData conversion.
> 
> By the way, do you see it failing with specific step methods? And do you mind opening that issue?

I'm going to make the test a bit more robust and open another PR for the observed flakiness.

@zaxtax force-pushed the extract_obs_data-cast_fix branch from 01f061b to b17ba41 on March 28, 2022, 17:01
@ricardoV94 changed the title from "Fix casting bug" to "Recognize cast data in InferenceData" on Mar 29, 2022
@@ -117,11 +117,11 @@ def test_step_continuous(self):
     for step in steps:
         idata = sample(
             0,
-            tune=8000,
+            tune=12000,
@ricardoV94 (Member) commented on the diff:

Let's not include this change in this PR. This test is quite lengthy already, and it's better to let it fail now and then to motivate someone to properly fix it, instead of monkey-patching it every time it fails in unrelated PRs.

Otherwise your PR is ready to merge

@zaxtax (Contributor, Author) replied:

Alright, I dropped those commits for now.

@zaxtax force-pushed the extract_obs_data-cast_fix branch from b17ba41 to 202d8a0 on March 29, 2022, 09:49
@zaxtax (Contributor, Author) commented Mar 29, 2022

@ricardoV94 ooo and it passes all the checks (for now)

@ricardoV94 (Member) left a comment:

Awesome, thanks for the help @zaxtax!
