
Once the subordinate classes have been built, we could remove the Test Doubles
from many of the tests. Keeping them provides better Defect Localization at the
cost of potentially higher test maintenance.
State or Behavior Verification?
From writing code outside-in, it is but a small step to verifying behavior rather
than just state. The "statist" view suggests that it is sufficient to put the SUT
into a specific state, exercise it, and verify that the SUT is in the expected state
at the end of the test. The "behaviorist" view says that we should specify not
only the start and end states of the SUT, but also the calls the SUT makes to its
dependencies. That is, we should specify the details of the calls to the "outgoing
interfaces" of the SUT. These indirect outputs of the SUT are outputs just like
the values returned by functions, except that we must use special measures to
trap them because they do not come directly back to the client or test.
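
A minimal sketch of the statist style in JUnit may make the contrast concrete. The Account class and its methods here are hypothetical, invented for illustration rather than taken from this book's sample code; the test asserts only on the SUT's end state.

import junit.framework.TestCase;

// Hypothetical minimal SUT, for illustration only.
class Account {
   private int balance;
   Account(int openingBalance) { balance = openingBalance; }
   void withdraw(int amount)   { balance -= amount; }
   int getBalance()            { return balance; }
}

public class AccountStateTest extends TestCase {
   public void testWithdraw_reducesBalance() {
      Account account = new Account(100);       // put the SUT into a specific state
      account.withdraw(30);                     // exercise the SUT
      assertEquals(70, account.getBalance());   // verify the end state only
   }
}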
The behaviorist school of thought is sometimes called behavior-driven 
development. It is evidenced by the copious use of Mock Objects or Test 
Spies (page 538) throughout the tests. Behavior verification does a better 
job of testing each unit of software in isolation, albeit at a possible cost of 
more difficult refactoring. Martin Fowler provides a detailed discussion of 
the statist and behaviorist approaches in [MAS]. 
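
A behaviorist version of the same hypothetical scenario might trap the indirect output with a hand-rolled Test Spy. The AuditLog interface and the other names below are assumptions made for this sketch; a Mock Object would differ mainly in verifying the expected calls itself rather than leaving the assertions to the test.

import junit.framework.TestCase;

// Hypothetical outgoing interface of the SUT.
interface AuditLog {
   void recordWithdrawal(String accountId, int amount);
}

// The SUT makes an indirect output call on its dependency.
class AuditedAccount {
   private int balance;
   private final AuditLog log;
   private final String id;
   AuditedAccount(String id, int openingBalance, AuditLog log) {
      this.id = id; this.balance = openingBalance; this.log = log;
   }
   void withdraw(int amount) {
      balance -= amount;
      log.recordWithdrawal(id, amount);   // indirect output
   }
}

// A hand-rolled Test Spy that traps the indirect output for later verification.
class AuditLogSpy implements AuditLog {
   String lastAccountId; int lastAmount; int callCount;
   public void recordWithdrawal(String accountId, int amount) {
      lastAccountId = accountId; lastAmount = amount; callCount++;
   }
}

public class AccountBehaviorTest extends TestCase {
   public void testWithdraw_recordsAuditEntry() {
      AuditLogSpy spy = new AuditLogSpy();
      AuditedAccount account = new AuditedAccount("A-1", 100, spy);
      account.withdraw(30);
      // Behaviorist style: verify the calls to the outgoing interface.
      assertEquals(1, spy.callCount);
      assertEquals("A-1", spy.lastAccountId);
      assertEquals(30, spy.lastAmount);
   }
}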
Fixture Design Upfront or Test-by-Test? 
In the traditional test community, a popular approach is to define a "test bed"
consisting of the application and a database already populated with a variety of
test data. The content of the database is carefully designed to allow many differ-
ent test scenarios to be exercised.
When the fixture for xUnit tests is approached in a similar manner, the test
automater may define a Standard Fixture (page 305) that is then used for all
the Test Methods of one or more Testcase Classes (page 373). This fixture may
be set up as a Fresh Fixture (page 311) in each Test Method using Delegated
Setup (page 411) or in the setUp method using Implicit Setup (page 424). Alter-
natively, it can be set up as a Shared Fixture (page 317) that is reused by many
tests. Either way, the test reader may find it difficult to determine which parts of
the fixture are truly preconditions for a particular Test Method.
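
The following sketch, built around hypothetical Customer and Order classes, suggests where the difficulty comes from: the Standard Fixture is built via Implicit Setup, so reading any one Test Method does not reveal which parts of the fixture that test actually depends on.

import junit.framework.TestCase;

// Hypothetical domain classes, for illustration only.
class Customer {
   private final boolean premium;
   Customer(String name, boolean premium) { this.premium = premium; }
   boolean isEligibleForDiscount()        { return premium; }
}

class Order {
   Order(Customer customer) { }
}

public class OrderProcessingTest extends TestCase {
   private Customer regularCustomer;
   private Customer premiumCustomer;
   private Order pendingOrder;

   protected void setUp() {
      // One Standard Fixture designed to support many scenarios ...
      regularCustomer = new Customer("Ann", false);
      premiumCustomer = new Customer("Bob", true);
      pendingOrder    = new Order(regularCustomer);
   }

   public void testPremiumCustomerGetsDiscount() {
      // ... but this test uses only premiumCustomer; the Test Method
      // alone does not show which parts of the fixture it depends on.
      assertTrue(premiumCustomer.isEligibleForDiscount());
   }
}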
The more agile approach is to custom design a Minimal Fixture (page 302)
for each Test Method. With this perspective, there is no "big fixture design up-
front" activity. This approach is most consistent with using a Fresh Fixture.
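
Rewritten in that style, and assuming the same hypothetical Customer class as in the previous sketch, each Test Method builds only the objects its own scenario needs, so its preconditions are visible inline.

import junit.framework.TestCase;

public class DiscountTest extends TestCase {
   public void testPremiumCustomerGetsDiscount() {
      // Minimal Fresh Fixture: only what this test needs, built in place.
      Customer premium = new Customer("Bob", true);
      assertTrue(premium.isEligibleForDiscount());
   }

   public void testRegularCustomerGetsNoDiscount() {
      Customer regular = new Customer("Ann", false);
      assertFalse(regular.isEligibleForDiscount());
   }
}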