
the “playback” part of the tool and point it at the recorded session. It starts up the SUT and feeds it our recorded inputs in response to the SUT’s outputs. It may also compare the SUT’s current outputs with the responses captured during the recording session. A mismatch may be cause for failing the test.
Some Recorded Test tools allow us to adjust the sensitivity of the comparisons that the tool makes between what the SUT said during the recording session and what it said during the playback. Most Recorded Test tools interact with the SUT through the user interface.
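As a rough illustration of that playback loop, the sketch below replays recorded input/output pairs against the SUT and applies a crude sensitivity setting when comparing outputs. The names (PlaybackRunner, Step, the tolerance knob) are hypothetical and not taken from any particular Recorded Test tool.

import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch of the "playback" half of a Recorded Test tool.
public class PlaybackRunner {

   // One recorded step: the input we fed the SUT and the output it produced at record time.
   public record Step(String recordedInput, String recordedOutput) {}

   private final UnaryOperator<String> sut;  // drives the SUT: input in, output out
   private final double tolerance;           // stands in for the comparison "sensitivity"

   public PlaybackRunner(UnaryOperator<String> sut, double tolerance) {
      this.sut = sut;
      this.tolerance = tolerance;
   }

   // Feeds each recorded input to the SUT and compares its output with the recorded one.
   public boolean play(List<Step> session) {
      for (Step step : session) {
         String actual = sut.apply(step.recordedInput());
         if (!closeEnough(step.recordedOutput(), actual)) {
            return false;   // a mismatch may be cause for failing the test
         }
      }
      return true;
   }

   // A crude sensitivity knob: exact match at tolerance 0, otherwise ignore case and whitespace.
   private boolean closeEnough(String expected, String actual) {
      if (tolerance == 0.0) {
         return expected.equals(actual);
      }
      return expected.strip().equalsIgnoreCase(actual.strip());
   }
}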
When to Use It
Once an application is up and running and we don’t expect a lot of changes
to it, we can use Recorded Tests to do regression testing. We could also use
Recorded Tests when an existing application needs to be refactored (in anticipation of modifying the functionality) and we do not have Scripted Tests (page 285)
available to use as regression tests. It is typically much quicker to produce a set
of Recorded Tests than to prepare Scripted Tests for the same functionality. In
theory, the test recording can be done by anyone who knows how to operate
the application; very little technical expertise should be required. In practice,
many of the commercial tools have a steep learning curve. Also, some technical
expertise may be required to add “checkpoints,” to adjust the sensitivity of the
playback tool, or to adjust the test script if the recording tool became confused
and recorded the wrong information.
Because most Recorded Test tools interact with the SUT through the user interface, they are particularly prone to fragility when the user interface of the SUT is evolving (Interface Sensitivity; see Fragile Test on page 239). Even small changes such as changing the internal name of a button or field may be enough to cause the playback tool to stumble. The tools also tend to record information at a very low and detailed level, making the tests hard to understand (Obscure Test; page 186); as a result, they are also difficult to repair by hand if they are broken by changes to the SUT. For these reasons, we should plan on rerecording the tests fairly regularly if the SUT will continue to evolve.
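To see why such tests become fragile and obscure, consider the kind of script a recording tool typically emits. The robot-style API and widget names below are hypothetical, but the shape is representative: every step refers to internal widget names or raw screen coordinates, so renaming a field or moving a button breaks the test, and the business intent is nowhere to be seen.

// Hypothetical example of the low-level script a recording tool might generate;
// the UiRobot API and widget names are illustrative only.
public class RecordedCheckoutTest {

   public void testRecordedSession(UiRobot robot) {
      robot.click("mainWin.toolbar.btn_27");         // internal button name, not its label
      robot.type("frmOrder.txtCustNm", "J. Smith");  // renaming this field breaks the test
      robot.click(412, 287);                         // a raw screen coordinate
      robot.assertWindowText("frmOrder.lblTotal", "$ 107.50");
   }

   // Minimal stand-in so the example is self-contained; a real tool supplies its own driver.
   public interface UiRobot {
      void click(String widgetPath);
      void click(int x, int y);
      void type(String widgetPath, String text);
      void assertWindowText(String widgetPath, String expected);
   }
}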
If we want to use the Tests as Documentation (see page 23) or if we want to use the tests to drive new development, we should consider using Scripted Tests. These goals are difficult to address with commercial Recorded Test tools because most do not let us define a Higher-Level Language (see page 41) for the test recording. This issue can be addressed by building the Recorded Test capability into the application itself or by using Refactored Recorded Test.
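One way to picture building the Recorded Test capability into the application itself is to have the application log each domain-level action it performs so that the recording reads more like a Higher-Level Language than a stream of widget events. The order-entry names in this sketch are hypothetical; the point is only that the recording hook sits behind the user interface rather than in front of it.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of recording at the domain level rather than the widget level.
public class OrderEntryRecorder {

   // A domain-level step such as ("addItem", "SKU-42 x3") rather than ("click", "btn_27").
   public record DomainStep(String action, String detail) {}

   private final List<DomainStep> recording = new ArrayList<>();

   // The application calls this from its business logic, not from the UI layer.
   public void record(String action, String detail) {
      recording.add(new DomainStep(action, detail));
   }

   public List<DomainStep> recordedSteps() {
      return List.copyOf(recording);
   }
}

A matching playback component can then drive the application through the same domain-level interface, which is far less sensitive to user-interface changes than a widget-level recording.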