
compression, and plenty of compression methods
have been put to practical use, but we mention here
only a few.
The MRA for orthogonal wavelets gives a
successive procedure to decompose a subspace of
L²(R) into a direct sum of two subspaces corresponding
to higher- and lower-frequency parts, of which
only the latter is decomposed again into its
higher- and lower-frequency parts. Algebraically,
this procedure was already known, before the
discovery of MRA, in the filter theory of electrical
engineering, where a discretely sampled signal is
convolved with a filter series to give, for example, a
high-pass-filtered or low-pass-filtered series. An
appropriately designed pair of high-pass and
low-pass filters, followed by downsampling,
yields two new series corresponding to the higher-
and lower-frequency parts, respectively; the decomposition
is then inverted by another two reconstruction filters
together with upsampling. These four filters, which are
often employed in the widely used technique of ‘‘subband
coding,’’ constitute a perfect reconstruction
filter bank. Under some conditions, successive
applications of this decomposition process to the
series of lower-frequency parts, which is equivalent
to the nesting structure of MRA, have been used for
data compression (quadrature mirror filters). A
famous example is the FBI data compression system
for fingerprints, consisting of wavelet coding
with scalar quantization.
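The two-channel analysis/synthesis scheme described above can be sketched with the orthonormal Haar filters, the simplest perfect reconstruction pair. This is a minimal illustration only; practical subband coders use longer filters (e.g., Daubechies or biorthogonal families), and the function names here are illustrative rather than a standard API.

```python
# Two-channel perfect-reconstruction filter bank, Haar case.
import math

S = 1.0 / math.sqrt(2.0)  # normalization for the orthonormal Haar filters

def analyze(x):
    """Convolve with the low/high-pass pair and downsample by 2
    (length of x must be even)."""
    lo = [S * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    hi = [S * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return lo, hi

def synthesize(lo, hi):
    """Upsample and apply the reconstruction filters; for Haar this
    inverts analyze() exactly (perfect reconstruction)."""
    x = []
    for l, h in zip(lo, hi):
        x.append(S * (l + h))
        x.append(S * (l - h))
    return x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
lo, hi = analyze(signal)       # two half-length series
rec = synthesize(lo, hi)       # recovers the original signal
```

Each output series is half the input length, so the total amount of data is conserved, and the synthesis step recovers the input to machine precision.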
In MRA, however, it is only the lower-frequency
parts that are successively decomposed. If both the
lower- and the higher-frequency parts are repeatedly
decomposed by the decomposition filters, then the
successive convolution processes correspond to a
decomposition of the data function by a set of wavelet-
like functions, called a ‘‘wavelet packet,’’ where at
each step one may choose whether to decompose the
higher- and/or the lower-frequency parts. The best wavelet packet, in
the sense of the entropy, for example, within a
specified number of decompositions, often provides
a powerful tool for data compression in several
areas, including speech analysis and image analysis.
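A minimal sketch of the full wavelet-packet tree, again with Haar filters for brevity: unlike plain MRA, both the low- and the high-frequency outputs are split at every level, giving 2^levels sub-bands. The helper names are hypothetical, not a standard library API.

```python
# Full wavelet-packet decomposition (Haar filters), splitting every band.
import math

S = 1.0 / math.sqrt(2.0)

def haar_step(x):
    """One analysis step: low- and high-pass outputs, each half length."""
    lo = [S * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    hi = [S * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return lo, hi

def wavelet_packet(x, levels):
    """Unlike plain MRA, BOTH the low- and high-frequency bands are
    split again at every level, yielding 2**levels sub-bands."""
    bands = [x]
    for _ in range(levels):
        nxt = []
        for b in bands:
            lo, hi = haar_step(b)
            nxt.extend([lo, hi])
        bands = nxt
    return bands

signal = [float(i % 4) for i in range(16)]
bands = wavelet_packet(signal, 2)   # 4 sub-bands of length 4
```

Because each Haar step is orthonormal, the total energy (sum of squared coefficients) is preserved across all sub-bands; a best-basis search then prunes this full tree by comparing, e.g., the entropies of parent and children bands.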
We also note that from the viewpoint of the best basis
which minimizes the statistical mean square error of
the thresholded coefficients, an orthonormal wavelet
basis gives a good concentration of the energy if the
original signal is a piecewise smooth function with
superimposed white noise, which is then efficiently
removed by thresholding the coefficients. The effi-
ciency of a wavelet expansion of a signal is sometimes
evaluated with the entropy of the ‘‘probability’’ defined as
p_{j,k} = |c_{j,k}|²/||f||², where c_{j,k} are the expansion
coefficients. A better wavelet can be selected by
reducing this entropy, in practice from among some
set of wavelets, and its restricted expansion coefficients
give a compressed signal. One of the systematic
methods to generate such a suitable basis is also to
employ the wavelet packets.
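The entropy criterion can be illustrated as follows, with hypothetical coefficient lists standing in for two candidate expansions of the same signal. Lower entropy of the probabilities p_{j,k} = |c_{j,k}|²/||f||² indicates better energy concentration, i.e., fewer coefficients needed for a given accuracy.

```python
# Shannon entropy of the normalized squared expansion coefficients.
import math

def expansion_entropy(coeffs):
    """Entropy of p = |c|^2 / ||f||^2; lower means the energy is
    concentrated in fewer coefficients (a better basis for compression)."""
    total = sum(c * c for c in coeffs)
    ent = 0.0
    for c in coeffs:
        p = (c * c) / total
        if p > 0.0:
            ent -= p * math.log(p)
    return ent

# Two hypothetical expansions of the same signal:
concentrated = [3.0, 0.1, 0.05, 0.02]   # most energy in one coefficient
spread = [1.5, 1.5, 1.5, 1.5]           # energy spread evenly

e_c = expansion_entropy(concentrated)
e_s = expansion_entropy(spread)         # equals log(4), the maximum for 4 terms
```

The concentrated expansion has much lower entropy, so under this criterion it would be the preferred basis; the same comparison drives the best-basis search over a wavelet-packet tree.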
Numerical Calculation
Application of wavelet transform, especially of the
DWT, to numerical solvers for differential equations
(DEs) has long been studied. At first sight, the
wavelets appear to give a good DE solver because
the wavelet expansion is generally quite efficient
compared to Fourier series due to its spatial
localization. But its implementation in an efficient
computer code is not so straightforward; research on
concrete problems is still continuing. Application
of the CWT to spectral methods for partial differ-
ential equations (PDEs) has been studied extensively.
There is no wavelet which diagonalizes the differential
operator ∂/∂x; therefore, an efficient numerical
method is necessary for derivatives of wavelets.
Products of wavelets also yield another numerical
problem. MRA brings about mesh points which are
adaptive to some extent, but the finite element
method still gives more flexible mesh points.
For some scaling-invariant differential or integral
operators, including ∂²/∂x², Abel transformations,
and the Riesz potential, adaptive biorthogonal wavelets
can be provided with block-diagonal Galerkin
representations, which have been applied to data
processing. Generally, simultaneous localization of
wavelets, both in space and in scale, leads to a
sparse Galerkin representation for many pseudodif-
ferential operators and their inverses. A threshold-
ing technique with DWT has been introduced to
coherent vortex simulation of the 2D Navier–Stokes
equations, to reduce the number of relevant wavelet
coefficients. Another promising application of wavelets
is as a preprocessor for an iterative Poisson
solver, where a wavelet-based preconditioning leads
to a matrix with a bounded condition number.
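The thresholding idea can be sketched with a multilevel Haar DWT: detail coefficients below a threshold are zeroed, and the signal is reconstructed from the surviving sparse set. This is a toy example under the assumption of a piecewise-constant signal with a small perturbation, not the actual coherent-vortex-simulation algorithm.

```python
# Hard thresholding of multilevel Haar DWT detail coefficients.
import math

S = 1.0 / math.sqrt(2.0)

def dwt(x, levels):
    """Multilevel Haar DWT: only the low band is split again (plain MRA)."""
    details = []
    lo = list(x)
    for _ in range(levels):
        nlo = [S * (lo[i] + lo[i + 1]) for i in range(0, len(lo), 2)]
        hi = [S * (lo[i] - lo[i + 1]) for i in range(0, len(lo), 2)]
        details.append(hi)
        lo = nlo
    return lo, details

def idwt(lo, details):
    """Invert dwt() by upsampling and recombining, finest level last."""
    for hi in reversed(details):
        nxt = []
        for l, h in zip(lo, hi):
            nxt.append(S * (l + h))
            nxt.append(S * (l - h))
        lo = nxt
    return lo

def hard_threshold(coeffs, eps):
    """Zero every coefficient whose magnitude is below eps."""
    return [c if abs(c) >= eps else 0.0 for c in coeffs]

# Piecewise-constant signal with a small perturbation (stand-in for noise):
x = [1.0] * 8 + [5.0] * 8
x[3] += 0.01
lo, details = dwt(x, 3)
details = [hard_threshold(hi, 0.1) for hi in details]
y = idwt(lo, details)   # close to the clean piecewise-constant signal
```

For this signal every detail coefficient falls below the threshold, so the entire representation collapses to the two coarse-scale coefficients, while the reconstruction stays close to the unperturbed signal: sparsity in the wavelet basis is exactly what such thresholding exploits.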
Other Wavelets and Generalizations
Several new types of wavelets have been proposed:
the ‘‘coiflet,’’ whose scaling function has vanishing
moments so that the expansion coefficients are approxi-
mately equal to sampled values of the data function, and
the ‘‘symlet,’’ which is an orthonormal wavelet with a
nearly symmetric profile. Multiwavelets are wavelets
which together give a complete orthonormal system in the
L² space. In 2D or multidimensional applications of the
DWT, separable orthonormal wavelets consisting of
tensor products of 1D orthonormal wavelets are
frequently used, while nonseparable orthonormal
wavelets are also available. Another generalization
Wavelets: Applications 425