
techniques can roughly be characterized as symbolic or analogic. We start here
with symbolic sonification techniques.
Auditory icons:
Auditory icons, as mentioned earlier, represent specific messages
via an acoustic event that should enable the quick and effortless identification
and interpretation of the signal with respect to the underlying information.
These sounds need to be selected from a database of recordings, or synthesized
according to the data features, which in practice is achieved by adapting appropriate sound synthesis algorithms (see Section 5.2.3).
Earcons:
Different from auditory icons, earcons use musical motifs to represent
messages and require the user to learn the meaning of each earcon. As a
benefit, earcons can inherit structural properties from language as a more
abstract and highly symbolic form of communication (Blattner, Papp, & Glinert,
1994). These sounds can be built using concatenation, which allows the designer
to compose more complex messages from simple building blocks.
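As a sketch of this concatenation idea, short sine-tone motifs can serve as the building blocks of an earcon. The motif vocabulary, frequencies, and file name below are illustrative assumptions, not taken from the text:

```python
import math
import struct
import wave

RATE = 22050  # sample rate in Hz

def motif(freqs, dur=0.15, amp=0.4):
    """Render a musical motif: one short pure tone per frequency."""
    samples = []
    for f in freqs:
        n = int(RATE * dur)
        for i in range(n):
            # linear fade-out per note to avoid clicks at note boundaries
            env = amp * (1.0 - i / n)
            samples.append(env * math.sin(2 * math.pi * f * i / RATE))
    return samples

# Simple building blocks (a hypothetical vocabulary):
CREATE = motif([440, 554, 659])  # rising triad for "create"
FILE = motif([330, 330])         # repeated low note for "file"
ERROR = motif([659, 554, 440])   # falling triad for "error"

# Concatenation composes a more complex message from the blocks:
earcon = CREATE + FILE + ERROR   # e.g., "error while creating a file"

with wave.open("earcon.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(32767 * s)) for s in earcon))
```

Because each motif is a reusable unit, a family of related messages can share structure (e.g., all "error" earcons ending with the same falling triad), which is exactly the language-like property noted above.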
Audification:
In audification, the data “speak for themselves” by using every data
value as a sound sample in a sound signal s(t). Since only variations above 50 Hz
are acoustically perceptible (see Section 5.1), audifications often consume
thousands of samples per second. Thus, the technique is suitable only if (1) enough
data are available, (2) data can be organized in a canonical fashion (e.g., time-
indexed measurements), and (3) data values exhibit variations in the selected
feature variable. Mathematically, audification can be formalized as the creation
of a smooth interpolation function going through a sample of (time, value) pairs
(t_a, x_a) for all data items a. The simplest implementation of audification, however,
is just to use the measurements directly as values in the digital sound signal by
setting s[n] = x_n. Some sound examples of audifications demonstrate the typical
acoustic result (S14, S15). S14 is an audification of EEG measurements: one electrode
measures the brain activity at the onset of an epileptic seizure (roughly in the
middle of the sound example). S15 plays the same data at lower time compression.
Clearly, the pitch drops below the well-audible frequency range, and the epileptic
rhythm is perceived as an audible rhythm of events.
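A minimal sketch of the direct implementation s[n] = x_n follows. Since the EEG data of sound examples S14/S15 are not reproduced here, a synthetic time-indexed series stands in for the measurements; the playback rate and file name are likewise assumptions:

```python
import math
import struct
import wave

# Synthetic stand-in for canonical, time-indexed measurements
# (a fast oscillation on top of a slow drift).
data = [math.sin(0.9 * n) + 0.3 * math.sin(0.05 * n) for n in range(20000)]

# Audification: use each measurement directly as a sample, s[n] = x_n.
peak = max(abs(x) for x in data) or 1.0
signal = [x / peak for x in data]  # normalize to [-1, 1]

# Playback rate: a higher rate consumes more data per second and thus
# yields stronger time compression; a lower rate lets slow rhythms
# become audible as discrete events, as in sound example S15.
RATE = 8000

with wave.open("audification.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(32000 * x)) for x in signal))
```

At 8000 samples per second, the 20,000 data values above are consumed in 2.5 seconds, illustrating why audification requires large data sets.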
Parameter mapping sonification (PMS):
This is the most widely used sonification
technique for generating an auditory representation of data. Conceptually, the
technique is related to scatter plotting, where features of a data set determine
graphical features of symbols (such as x-position, y-position, color, size, etc.)
and the overall display is a result of the superposition of these graphical
elements. For example, imagine a data set of measurements for 150 irises. For each
flower, measurements of the petal length, sepal length, petal width, and sepal
width are listed. A parameter mapping sonification (S16) could, for instance,
map the petal length to the onset time of sonic events, the sepal length to the
pitch of sonic events, the petal width to brilliance, and the sepal width to dura-
tion. The resulting sonification would allow the listener to perceive how the data