
Why Do We Need 100 Customers?
A software development coworker of mine was working on a project
as an analyst. One day, the manager she was working for came into her
office and asked, “Why have you requested 100 unique customers be cre-
ated in the test database instance?”
As a systems analyst, my coworker was responsible for helping the busi-
ness analysts define the requirements and the acceptance tests for a large,
complex project. She wanted to automate the tests but had to overcome
several hurdles. One of the biggest hurdles was the fact that the SUT got
much of its data from an upstream system—it was too complex to try to
generate this data manually.
The systems analyst came up with a way to generate XML from tests
captured in spreadsheets. For the fixture setup part of the tests, she trans-
formed the XML into QaRun (a Record and Playback Test tool—see
Recorded Test on page 278) scripts that would load the data into the
upstream system via the user interface. Because it took a while to run
these scripts and for the data to make its way downstream to the SUT, the
systems analyst had to run these scripts ahead of time. This meant that
a Fresh Fixture (page 311) strategy was unachievable; a Prebuilt Fix-
ture (page 429) was the best she could do. In an attempt to avoid the
Interacting Tests (see Erratic Test on page 228) that were sure to result
from a Shared Fixture (page 317), the systems analyst decided to imple-
ment a virtual Database Sandbox (page 650) using a Database Partition-
ing Scheme based on a unique customer number for each test. This way,
any side effects of one test couldn’t affect any other tests.
Given that she had about 100 tests to automate, the systems analyst
needed about 100 test customers defined in the database. And that’s
what she told her manager.
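The partitioning idea is straightforward to express in code. What follows is a minimal sketch of how such a per-customer Database Partitioning Scheme might look in a JUnit test; the class, method, and table names (CustomerPartitionedTestCase, ordersForThisTestsCustomer, ORDERS) are illustrative assumptions rather than details taken from the project described above.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Each concrete test class owns exactly one pre-created customer number;
// every read and verification is filtered by that number, so side effects
// of one test cannot leak into the data any other test depends on.
public abstract class CustomerPartitionedTestCase extends junit.framework.TestCase {

   // The unique customer number reserved for this test in the Prebuilt Fixture.
   protected abstract String testCustomerNumber();

   // Subclasses use this helper to look at "their" slice of the shared database.
   protected ResultSet ordersForThisTestsCustomer(Connection connection)
         throws Exception {
      PreparedStatement statement = connection.prepareStatement(
            "SELECT * FROM ORDERS WHERE CUSTOMER_NUMBER = ?");
      statement.setString(1, testCustomerNumber());
      return statement.executeQuery();
   }
}

A concrete test class would return its own reserved number, say "TEST-0042", from testCustomerNumber() and route all of its fixture lookups and assertions through helpers like this one.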
The failure can show up in the result verification logic even if the problem is that
the inputs of the SUT refer to nonexistent or modified data. This may require ex-
amining the “after” state of the SUT (which differs from the expected post-test
state) and tracing it back to discover why it does not match our expectations.
This should expose the mismatch between SUT inputs and the data that existed
before the test started executing.
The best solution to
Data Sensitivity is to make the tests independent of
the existing contents of the database—that is, to use a Fresh Fixture. If this
is not possible, we can try using some sort of Database Partitioning Scheme.
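To make the Fresh Fixture alternative concrete, here is a minimal sketch in the same spirit; the in-memory H2 database, the CUSTOMERS table, and the names used below are assumptions made purely for illustration. The point is that the test builds the one customer it needs immediately before exercising the SUT, so it never depends on whatever data already happens to be in the database.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FreshCustomerFixtureTest extends junit.framework.TestCase {
   private Connection connection;
   private String customerNumber;

   protected void setUp() throws Exception {
      // Assumed stand-in for the real database: an in-memory H2 instance.
      connection = DriverManager.getConnection("jdbc:h2:mem:crm", "sa", "");
      connection.createStatement().execute(
            "CREATE TABLE IF NOT EXISTS CUSTOMERS"
            + " (CUSTOMER_NUMBER VARCHAR(32), NAME VARCHAR(100))");
      // Fresh Fixture: create the exact customer this test needs, with a
      // number generated here so it cannot collide with pre-existing data.
      customerNumber = "CUST-" + System.nanoTime();
      PreparedStatement insert = connection.prepareStatement(
            "INSERT INTO CUSTOMERS (CUSTOMER_NUMBER, NAME) VALUES (?, ?)");
      insert.setString(1, customerNumber);
      insert.setString(2, "Fresh Fixture Customer");
      insert.executeUpdate();
   }

   public void testCustomerIsVisibleToTheSut() throws Exception {
      // Exercise and verify only against the row created in setUp().
      PreparedStatement query = connection.prepareStatement(
            "SELECT COUNT(*) FROM CUSTOMERS WHERE CUSTOMER_NUMBER = ?");
      query.setString(1, customerNumber);
      ResultSet result = query.executeQuery();
      result.next();
      assertEquals(1, result.getInt(1));
   }

   protected void tearDown() throws Exception {
      connection.close();
   }
}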