
considered to be successful. In general, xUnit does not do anything special for successful tests—there should be no need to examine any output when a Self-Checking Test (page 26) passes.
A test is considered to have failed when an assertion fails. That is, the test asserts that something should be true by calling an Assertion Method, but that assertion turns out not to be the case. When it fails, an Assertion Method throws an assertion failure exception (or whatever facsimile the programming language supports). The Test Automation Framework increments a counter for each failure and adds the failure details to a list of failures; this list can be examined more closely later, after the test run is complete. The failure of a single test, while significant, does not prevent the remaining tests from being run; this is in keeping with the principle Keep Tests Independent (see page 42).
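For example, a failing test in JUnit might look like the following sketch; the Calculator class is hypothetical.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {
    @Test
    public void addShouldSumItsArguments() {
        Calculator calculator = new Calculator(); // hypothetical SUT
        // If add() returns anything other than 5, assertEquals throws
        // an AssertionError; the Test Runner records this as a test
        // failure and carries on with the remaining tests.
        assertEquals(5, calculator.add(2, 3));
    }
}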
A test is considered to have an error when either the SUT or the test itself fails in an unexpected way. Depending on the language being used, this problem could consist of an uncaught exception, a raised error, or something else. As with assertion failures, the Test Automation Framework increments a counter for each error and adds the error details to a list of errors, which can then be examined after the test run is complete.
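By way of contrast, here is a sketch of a test that results in a test error rather than a test failure (the Calculator class is again hypothetical):

import org.junit.Test;

public class CalculatorErrorTest {
    @Test
    public void divideByZero() {
        Calculator calculator = new Calculator(); // hypothetical SUT
        // If divide() throws an unexpected ArithmeticException, no
        // assertion ever runs; the Test Runner catches the exception
        // and records it as a test error rather than a test failure.
        calculator.divide(1, 0);
    }
}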
For each test error or test failure, xUnit records information that can be examined to help understand exactly what went wrong. As a minimum, the name of the Test Method and Testcase Class are recorded, along with the nature of the problem (whether it was a failed assertion or a software error). In most Graphical Test Runners that are integrated with an IDE, one merely has to (double-)click on the appropriate line in the traceback to see the source code that emitted the failure or caused the error.
Because the name test error sounds more drastic than a test failure, some test automaters try to catch all errors raised by the SUT and turn them into test failures. This is simply unnecessary. Ironically, in most cases it is easier to determine the cause of a test error than the cause of a test failure: The stack trace for a test error will typically pinpoint the problem code within the SUT, whereas the stack trace for a test failure merely shows the location in the test where the failed assertion was made. It is, however, worthwhile using Guard Assertions (page 490) to avoid executing code within the Test Method that would result in a test error being raised from within the Test Method⁴ itself; this is just a normal part of verifying the expected outcome of exercising the SUT and does not remove useful diagnostic tracebacks.
⁴ For example, before executing an assertion on the contents of a field of an object returned by the SUT, it is worthwhile to assertNotNull on the object reference so as to avoid a "null reference" error.
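A minimal sketch of such a Guard Assertion in JUnit, assuming a hypothetical CustomerRepository as the SUT:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import org.junit.Test;

public class CustomerLookupTest {
    @Test
    public void foundCustomerHasExpectedName() {
        CustomerRepository sut = new CustomerRepository(); // hypothetical SUT
        Customer customer = sut.findCustomer("12345");
        // Guard Assertion: fail with a clear diagnostic message rather
        // than erroring out with a NullPointerException on the next line.
        assertNotNull("customer should have been found", customer);
        assertEquals("John Doe", customer.getName());
    }
}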