
matrix-valued function $m(\lambda)$ with the following
properties:

  $m(\lambda)$ is analytic in $\mathbb{C}\setminus\Gamma$   [1a]

  $m_+(\lambda) = m_-(\lambda)\,v(\lambda)$ for $\lambda \in \Gamma$   [1b]
  where $m_+(\lambda)$ ($m_-(\lambda)$) is the limit
  of $m$ from the $+$ ($-$) side of $\Gamma$

  $m(\lambda) \to I$ (identity matrix) as $\lambda \to \infty$   [1c]

The precise sense in which the limit at $\infty$ and the
boundary values $m_\pm$ are attained is a technical
matter that should be specified for each given RH
problem $(\Gamma, v)$.
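As a minimal illustration of conditions [1a]–[1c] (our own example, not part of the original text): take $N = 1$, $\Gamma$ the positively oriented unit circle, and a constant jump $v(\lambda) \equiv c \neq 0$; the solution is piecewise constant.

```latex
% Minimal scalar RH problem (illustrative choice): \Gamma = unit circle,
% counterclockwise, so the + side is the inside; jump v \equiv c.
m(\lambda) =
\begin{cases}
  c, & |\lambda| < 1,\\
  1, & |\lambda| > 1,
\end{cases}
\qquad
m_+ = m_-\, v \ \text{on}\ \Gamma,
\qquad
m(\lambda) \to 1 \ \text{as}\ \lambda \to \infty .
```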
Concerning the name RH problem, we note that
in the literature (particularly in the theory of boundary
values of analytic functions), the problem of
reconstructing a function from its jump across a
curve is often called the Hilbert boundary-value
problem. The closely related problem of analytic
matrix factorization (given $\Gamma$ and $v$, find $G(\lambda)$
analytic and nondegenerate in $\mathbb{C}\setminus\Gamma$ such that
$G_+ G_- = v$ on $\Gamma$) is sometimes called the Riemann
problem. The name ‘‘RH problem’’ is also
attributed to the reconstruction of a Fuchsian
system with given poles and a given monodromy
group.
In applications, the jump matrix $v$ also depends
on certain parameters in which the original problem
at hand is naturally formulated (e.g., $v = v(\lambda; x, t)$ in
applications to the integrable nonlinear differential
equations in dimension $1 + 1$, with $x$ being the space
variable and $t$ the time variable), and the main
concern is the behavior of the solution of the RH
problem, $m(\lambda; x, t)$, as a function of $x$ and $t$.
Of particular interest is the behavior of $m(\lambda; x, t)$ as
$x$ and $t$ become large.
In the scalar case, $N = 1$, rewriting the original
multiplicative jump condition in the additive form
$$\log m_+(\lambda) = \log m_-(\lambda) + \log v(\lambda)$$
and using the Cauchy–Plemelj–Sokhotskii formula
give an explicit integral representation for the
solution
$$m(\lambda) = \exp\left(\frac{1}{2\pi i}
  \int_\Gamma \frac{\log v(\mu)}{\mu - \lambda}\, d\mu\right) \qquad [2]$$
(in the case of nonzero index, $\Delta \log v|_\Gamma \neq 0$, formula
[2] admits a suitable modification).
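Formula [2] can be checked numerically. The sketch below is our own example, not from the text: $\Gamma$ is the unit circle and the jump is $v(e^{i\theta}) = e^{\cos\theta}$, which has zero index; the Cauchy integral is evaluated by the trapezoid rule and compared with the closed-form values obtained by residues.

```python
import numpy as np

# Scalar RH problem on the unit circle with jump v(e^{iθ}) = exp(cos θ)
# (zero index, since v > 0); formula [2] then reads
#   m(λ) = exp( (1/2πi) ∮ log v(μ) / (μ - λ) dμ ).
def m(z, n=4000):
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    mu = np.exp(1j * theta)               # points on the contour
    log_v = np.cos(theta)                 # log v on the circle
    dmu = 1j * mu * (2.0 * np.pi / n)     # dμ, trapezoid rule
    return np.exp(np.sum(log_v / (mu - z) * dmu) / (2j * np.pi))

# By residues: m(λ) = exp(λ/2) for |λ| < 1 and m(λ) = exp(-1/(2λ))
# for |λ| > 1, so m(λ) → 1 as λ → ∞ and m_+ = m_- v on the circle.
print(m(0.3), np.exp(0.15))       # inside: both ≈ 1.1618
print(m(3.0), np.exp(-1.0 / 6))   # outside: both ≈ 0.8465
```

Since the integrand is smooth and periodic, the trapezoid rule converges geometrically, so even moderate `n` reproduces the residue answer to high accuracy away from the contour.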
A generic (nonabelian) matrix RH problem
cannot be solved explicitly in terms of contour
integrals; however, it can always be reduced to a
system of linear singular-integral equations, thus
linearizing an originally nonlinear system.
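One standard form of this reduction is the Beals–Coifman scheme in its simplest variant (a sketch; the notation $\mu$ for the unknown and $C_\pm$ for the boundary values of the Cauchy operator is ours, not spelled out in the text):

```latex
% Linear singular-integral equation equivalent to m_+ = m_- v:
% seek \mu = m_- on \Gamma from
\mu = I + C_-\!\left[\mu\,(v - I)\right],
\qquad
(C_\pm f)(\lambda) = \lim_{\lambda' \to \lambda^\pm}
  \frac{1}{2\pi i} \int_\Gamma \frac{f(s)}{s - \lambda'}\, ds ,
% and then recover the solution of the RH problem as
m(\lambda) = I + \frac{1}{2\pi i}
  \int_\Gamma \frac{\mu(s)\,\bigl(v(s) - I\bigr)}{s - \lambda}\, ds .
```

The Plemelj relation $C_+ - C_- = \mathrm{id}$ then gives $m_+ = \mu + \mu(v - I) = m_- v$ directly.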
The main benefit of reducing an originally non-
linear problem to the analytic factorization of a
given matrix function arises in asymptotic analysis.
Typically, the dependence of the jump matrix on the
external parameters (say, $x$ and $t$) is oscillatory. By
analogy with the asymptotic evaluation of oscillatory
contour integrals via the classical method of steepest
descent, in the asymptotic evaluation of the solution
$m(\lambda; x, t)$ of the matrix RH problem as $x, t \to \infty$, the
nonlinear steepest-descent method examines the
analytic structure of the jump matrix $v(\lambda; x, t)$ in
order to deform the contour $\Gamma$ to contours where
the oscillatory factors become exponentially small as
$x, t \to \infty$; hence the original RH problem
reduces to a collection of local RH problems
associated with the relevant points of stationary
phase. Although the method has (in the matrix case)
noncommutative and nonlinear elements, the final
result of the analysis is as efficient as the asymptotic
evaluation of oscillatory integrals.
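The classical linear analogy can be made concrete. The sketch below uses our own model integral, chosen because it has a closed form: the stationary point at $\lambda = 0$ of the phase $t\lambda^2$ yields the leading term, with relative error decaying like $O(1/t)$.

```python
import numpy as np

# Linear model for the steepest-descent analogy (our own example):
#   I(t) = ∫_{-∞}^{∞} e^{i t λ²} e^{-λ²} dλ = sqrt(π / (1 - i t)),
# a Gaussian integral with a single stationary-phase point λ = 0.
def exact(t):
    return np.sqrt(np.pi / (1.0 - 1j * t))

# Leading stationary-phase term: e^{iπ/4} sqrt(π/t) f(0), f(λ) = e^{-λ²}.
def stat_phase(t):
    return np.exp(1j * np.pi / 4) * np.sqrt(np.pi / t)

for t in (10.0, 100.0, 1000.0):
    rel = abs(exact(t) - stat_phase(t)) / abs(exact(t))
    print(f"t = {t:7.1f}   relative error = {rel:.2e}")
```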
Dressing Method
The RH method allows one to describe the solution of a
differential system independently of the theory of
differential equations. The solution might be expli-
cit, that is, given in terms of elementary or elliptic or
abelian functions and contour integrals of such
functions. In the general (transcendental) case, the
solution can be represented in terms of the solution
of certain linear singular-integral equations.
In the modern theory of integrable systems, a
system of nonlinear differential equations is often
called integrable if it can be represented as the
compatibility condition of an auxiliary overdeter-
mined linear system of differential equations called a
Lax pair of the given nonlinear system (actually, it
might involve more than two linear equations). In
order that the compatibility condition represent a
nontrivial nonlinear system of equations, the Lax
pair is required to depend rationally on an auxiliary
parameter $\lambda$ (called a spectral parameter). The RH
problem, formulated in the complex plane of the
spectral parameter, makes it possible, given a particular
solution of the compatibility equations, to construct
new solutions of the compatibility system directly by
‘‘dressing’’ the initial one.
For example, let $D(x, \lambda)$, $x \in \mathbb{R}^n$, $\lambda \in \mathbb{C}$, be an $N \times N$
diagonal matrix-valued function, polynomial in $\lambda$ with smooth
coefficients, such that $a_j := \partial D/\partial x_j$ are polynomials in $\lambda$
of degree $d_j$. Then $\Psi_0 := \exp D(x, \lambda)$ solves the
system of linear equations $\partial \Psi_0/\partial x_j = a_j \Psi_0$, whose
compatibility conditions
$\partial^2 \Psi_0/\partial x_j \partial x_k = \partial^2 \Psi_0/\partial x_k \partial x_j$
are trivially satisfied. Given a contour $\Gamma$ and a smooth
function $v$, consider the matrix RH problem [1]
430 Riemann–Hilbert Methods in Integrable Systems
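The overdetermined system $\partial \Psi_0/\partial x_j = a_j \Psi_0$ with $\Psi_0 = \exp D$ can be verified symbolically on a concrete instance. The choice of $D$ below (an NLS-type phase, with $x_1 = x$, $x_2 = t$) is our own illustration, not fixed by the text.

```python
import sympy as sp

# Concrete instance of the dressing setup (our choice, NLS-type):
#   D(x, t, λ) = i(λx + λ²t) σ₃, diagonal and polynomial in λ,
# so a₁ = ∂D/∂x = iλσ₃ (degree 1) and a₂ = ∂D/∂t = iλ²σ₃ (degree 2).
x, t, lam = sp.symbols('x t lambda', real=True)
sigma3 = sp.diag(1, -1)
D = sp.I * (lam * x + lam**2 * t) * sigma3
a1, a2 = D.diff(x), D.diff(t)

# exp of a diagonal matrix is entrywise, so Ψ₀ = exp D is:
f = sp.I * (lam * x + lam**2 * t)
Psi0 = sp.diag(sp.exp(f), sp.exp(-f))

# Ψ₀ solves the overdetermined linear system ...
ok1 = sp.simplify(Psi0.diff(x) - a1 * Psi0) == sp.zeros(2, 2)
ok2 = sp.simplify(Psi0.diff(t) - a2 * Psi0) == sp.zeros(2, 2)
# ... and the equality of mixed partials holds trivially.
ok3 = sp.simplify(Psi0.diff(x, t) - Psi0.diff(t, x)) == sp.zeros(2, 2)
print(ok1, ok2, ok3)   # True True True
```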