
Control Problems in Mathematical Physics
B Piccoli, Istituto per le Applicazioni del Calcolo,
Rome, Italy
© 2006 Elsevier Ltd. All rights reserved.
Introduction
Control Theory is an interdisciplinary research area, bridging mathematics and engineering, dealing with physical systems which can be "controlled," that is, whose evolution can be influenced by some external agent. A general model can be written as

  y(t) = A(t, y(0), u(·))     [1]

where y describes the state variables, y(0) the initial condition, and u(·) the control function. Thus, eqn [1] means that the state at time t depends on the initial condition but also on some parameters u which can be chosen as functions of time. To be precise, there are some control problems which are not of evolutionary type; however, in this presentation we restrict ourselves to this case.
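To make the abstract evolution map [1] concrete, the following sketch (an illustrative example, not from the text) realizes A(t, y(0), u(·)) for a control system of the form dy/ds = f(y, u(s)) by forward-Euler integration; the dynamics f and the control u below are hypothetical choices.

```python
def evolve(f, y0, u, t, steps=1000):
    """Approximate A(t, y(0), u(.)) of eqn [1]: integrate
    dy/ds = f(y, u(s)) from s = 0 to s = t by forward Euler."""
    y, ds = y0, t / steps
    for k in range(steps):
        y = y + ds * f(y, u(k * ds))
    return y

# Illustrative scalar system dy/ds = -y + u, with constant control
# u(s) = 1; the exact solution y(t) = 1 - exp(-t) tends to 1.
f = lambda y, u: -y + u
u = lambda s: 1.0
y_final = evolve(f, y0=0.0, u=u, t=10.0)
print(y_final)  # close to 1.0
```

Changing the control function u(·) changes the state reached at time t, which is precisely the dependence that the controllability and optimal-control problems below exploit.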
One has to distinguish between the control set U, in which the control function takes its values: u(t) ∈ U, and the space of control functions, 𝒰, to which each control function should belong: u(·) ∈ 𝒰. Thus, for example, we may have U = R^m and 𝒰 = L^1([0, T], R^m).
There are various problems one can formulate
regarding systems of type [1], among which:
Controllability  Given any two states y_0 and y_1, determine a control function u(·) such that for some time t > 0 we have y_1 = A(t, y_0, u(·)).
Optimal control  Consider a cost function J(y(·), u(·)) depending both on the evolutions of y and u, and determine a control function ũ(·) and a trajectory ỹ(t) = A(t, y_0, ũ(·)) such that ỹ(·) steers the system from y_0 to y_1, as before, and the cost J is minimized (or maximized).
Stabilization  We say that ȳ is an equilibrium if there exists ū ∈ U such that A(t, ȳ, ū) = ȳ for every t > 0 (here ū also indicates the constant-in-time control function). Determine the control u as a function of the state y so that ȳ is a (Lyapunov) stable equilibrium for the resulting closed-loop dynamical system y(t) = A(t, y(0), u(y(·))).
Observability  Assume that we can observe not the state y itself, but a function φ(y) of the state. Determine conditions on φ so that the state y can be reconstructed from the evolution of φ(y) by choosing u(·) suitably.
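As a minimal numerical sketch of the stabilization problem (an illustrative example, not from the text), consider the hypothetical linear system dy/dt = y + u: the origin is unstable without control, but the feedback u(y) = -2y turns the closed-loop system into dy/dt = -y, for which ȳ = 0 is a stable equilibrium.

```python
def closed_loop_step(y, dt):
    """One Euler step of dy/dt = y + u with the feedback law
    u(y) = -2*y, i.e. of the closed-loop system dy/dt = -y."""
    u = -2.0 * y            # state feedback, chosen as a function of y
    return y + dt * (y + u)

def simulate(y0, t, steps=10000):
    """Integrate the closed-loop system from y(0) = y0 up to time t."""
    y, dt = y0, t / steps
    for _ in range(steps):
        y = closed_loop_step(y, dt)
    return y

# From any initial state the trajectory decays toward ybar = 0,
# roughly like exp(-t); here t = 5, so |y| is about |y0| * e**-5.
y = simulate(y0=3.0, t=5.0)
print(abs(y) < 0.1)  # True: the equilibrium 0 attracts the trajectory
```

With u fixed as a function of the state, no free control remains: the system has become an ordinary dynamical system whose stability can be studied by classical Lyapunov methods.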
For the sake of simplicity, we restrict ourselves
mainly to the first two problems and just mention