Recognize cast data in InferenceData #5646
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main    #5646      +/-   ##
==========================================
- Coverage   88.57%   85.81%   -2.76%
==========================================
  Files          75       75
  Lines       13677    13719      +42
==========================================
- Hits        12114    11773     -341
- Misses       1563     1946     +383
I think some of these failing tests might have bugs. For example, should observed variables be in "prior", as some tests assume, or in "observed_data", as happens after this change?
Can you link to the specific tests that are failing?
pymc/tests/test_sampling.py::TestSamplePriorPredictive::test_shared
pymc/tests/test_step.py::TestStepMethods::test_step_continuous
I have no idea why the second test fails. The first one fails because gen1.prior no longer has the observation y after this commit; it is in observed_data or prior_predictive instead. This test fails even if all discrete variables in the model are made continuous.
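For reference, a minimal sketch of the grouping being discussed (the model and variable names are illustrative, not taken from the failing test): after this change an observed variable should show up in prior_predictive and observed_data rather than in prior.

import numpy as np
import pymc as pm

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 1.0)
    # "y" is observed, so it is not a free variable of the prior
    pm.Normal("y", mu=mu, sigma=1.0, observed=np.array([0.1, -0.3, 0.7]))
    idata = pm.sample_prior_predictive(samples=100)

print(list(idata.prior.data_vars))             # expected: only "mu"
print(list(idata.prior_predictive.data_vars))  # expected: "y"
print(list(idata.observed_data.data_vars))     # expected: "y"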
Thanks @zaxtax! It seems that the first test just needs to be tweaked for the new (correct) expectation. For the second test, are you sure it is not just flaky? If not, can you narrow down which of the steppers is failing?
I think it's just flaky. I can make it fail less often if I use more tuning steps. I don't know what it's supposed to test other than that all the step methods work.
We can just open an issue about the newly observed flakiness and ignore it for this PR. Either it was already happening or something subtle changed, and either way it should be addressed in a separate PR. There is no reason these changes should affect sampling, as sampling happens before any InferenceData conversion. By the way, do you see it failing with specific step methods? And do you mind opening that issue?
I'm going to make the test a bit more robust and open another PR for the observed flakiness. |
Force-pushed from 01f061b to b17ba41
pymc/tests/test_step.py (Outdated)
@@ -117,11 +117,11 @@ def test_step_continuous(self):
         for step in steps:
             idata = sample(
                 0,
-                tune=8000,
+                tune=12000,
Let's not include this change in this PR. This test is quite lengthy already, and it's better to let it fail now and then to motivate someone to properly fix it, instead of monkey-patching it every time it fails in unrelated PRs.
Otherwise, your PR is ready to merge.
Alright, I dropped those commits for now.
Force-pushed from b17ba41 to 202d8a0
@ricardoV94 ooo and it passes all the checks (for now)
Awesome, thanks for the help @zaxtax!
This adds a cast check to extract_obs_data, fixing #5586.
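A rough sketch of the kind of setup this targets (assumed for illustration, not taken from the linked issue; names are made up): integer data held in a shared variable and used as observations for a continuous distribution gets wrapped in a Cast node, and before this change the converter could not trace the observed values back into the observed_data group.

import numpy as np
import pymc as pm

counts = np.array([0, 1, 3, 2, 4])

with pm.Model() as model:
    y_data = pm.Data("y_data", counts)  # integer-typed shared data
    mu = pm.Normal("mu", 0.0, 1.0)
    # Normal expects floats, so the observed values pass through a Cast(int64 -> float64)
    pm.Normal("y", mu=mu, sigma=1.0, observed=y_data)
    idata = pm.sample_prior_predictive(samples=10)

# With the cast recognized, the original integer values should appear here:
print(idata.observed_data["y"].values)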