As of version 0.21.0, Ava's built-in assertions do not interrupt execution of the
test body. Specifically, when an assertion fails, the test continues to run.
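To illustrate, here is a minimal (contrived) example of what I mean; the `console.log` call is only there to demonstrate that the body keeps running past the failed assertion:

```js
import test from 'ava';

test('built-in assertion failure does not stop the test body', t => {
  t.is(1 + 1, 3); // this assertion fails...

  // ...but the statement below still executes, so the test keeps driving
  // the application past the point of failure.
  console.log('still running after the failed assertion');
});
```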
This behavior was previously reported as a bug (via gh-220). It was considered
"fixed" (via gh-259) with a patch that modified Ava's output.
This is a feature request for a behavior change: when a built-in assertion fails,
forcibly interrupt test execution by throwing a runtime exception.
My use case is taking screenshots to assist in debugging integration test
failures. I have been able to react to failing tests programmatically through a
combination of the `afterEach` and `afterEach.always` methods. When a test
fails, I would like to capture an image of the rendered application, as this
can be very useful in identifying the cause of the failure (especially when the
tests run remotely on a continuous integration server).
Because the test body continues to execute following the failure, by the time
the `afterEach.always` method is invoked, the rendered output may no longer
reflect the state of the application at the moment of failure.
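For reference, my current workaround looks roughly like the sketch below. `takeScreenshot` is a placeholder for my actual capture helper, and the flag-in-context trick simply relies on the fact that (as I understand it) `afterEach` hooks only run for passing tests, while `afterEach.always` hooks run unconditionally:

```js
import test from 'ava';

// `afterEach` only runs when the test passed, so it can record success...
test.afterEach(t => {
  t.context.passed = true;
});

// ...and `afterEach.always` runs unconditionally, so a missing flag means
// the test failed and a screenshot should be captured.
test.afterEach.always(async t => {
  if (!t.context.passed) {
    // `takeScreenshot` is a placeholder for the real capture mechanism.
    await takeScreenshot(`${t.title}.png`);
  }
});
```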
For unit tests, this might be addressed by making test bodies shorter and more
direct: reducing each test to a single meaningful interaction would avoid the
effect described above. But because integration tests have high setup costs and
are typically structured to model complete usage scenarios, this is not an
appropriate solution for my use case.
Ava supports the use of general-purpose assertion libraries (e.g. Node.js's
built-in `assert` module), and I've found that because these libraries operate
via JavaScript exceptions, they produce the intended result. In the short term,
I am considering switching to one of them.
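As a rough sketch of what that switch looks like (the compared values here are contrived stand-ins for real application state):

```js
import test from 'ava';
import assert from 'assert';

test('a thrown assertion stops the test body immediately', t => {
  assert.strictEqual(1 + 1, 3); // throws, so execution stops here

  // This line is never reached, so the application state observed in
  // `afterEach.always` still reflects the moment of failure.
  console.log('unreachable');

  t.pass();
});
```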
However, Ava's built-in assertions have a number of advantages over generic
alternatives. In addition, restricting the use of Ava's API in my test suite
will be difficult moving forward: even with documentation in place,
contributors may not recognize that certain parts of the API are considered
"off limits", especially since using them does not directly affect test
correctness.
I haven't been able to think of a use case that would be broken by the change I
am requesting. But if there is such a use case, then this behavior could be
made "opt-in' via a command-line flag.
Thanks for your time, and thanks for the great framework!