16.10 The Minimax Theorem 611
Since $p_* \le p^*$ is always true, we obtain equality. Choose a point $\bar y \in Y$ so that $p(\bar x, \bar y) = p^*$. Then $p(\bar x, y) \le p(\bar x, \bar y) \le p(x, \bar y)$ for all $x \in X$ and $y \in Y$. That is, $(\bar x, \bar y)$ is a saddlepoint for $p$.
By Exercise 16.9.D, the set of saddlepoints is a rectangle $A \times B$. Moreover, the argument required in Exercise 16.9.E shows that this rectangle is closed and convex. $\blacksquare$
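To make the rectangle structure concrete, here is a small numerical check using an example of our own choosing (not from the text): for $p(x, y) = \max(x, 0)^2 - \max(y, 0)^2$ on $[-1, 1]^2$, which is convex in $x$ and concave in $y$, the saddlepoints are exactly the points of $[-1, 0] \times [-1, 0]$, a closed convex rectangle.

```python
# Numerical sketch (toy example, not from the text): the saddlepoints of
# a convex-concave function form a rectangle A x B (Exercise 16.9.D).
# Here p(x, y) = max(x,0)^2 - max(y,0)^2 on [-1,1]^2 has saddle set
# [-1,0] x [-1,0].
import numpy as np

def p(x, y):
    return max(x, 0.0) ** 2 - max(y, 0.0) ** 2

grid = np.linspace(-1.0, 1.0, 41)

def is_saddle(xb, yb, tol=1e-12):
    # (xb, yb) is a saddlepoint iff p(xb, y) <= p(xb, yb) <= p(x, yb)
    # for all x and y in the (gridded) domain.
    c = p(xb, yb)
    return (all(p(xb, y) <= c + tol for y in grid) and
            all(c <= p(x, yb) + tol for x in grid))

saddles = [(x, y) for x in grid for y in grid if is_saddle(x, y)]
A = sorted({x for x, _ in saddles})   # projection onto the x-axis
B = sorted({y for _, y in saddles})   # projection onto the y-axis
```

On this grid the saddle set is precisely the product of the two projections, consistent with the rectangle $A \times B$ asserted by the exercise.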
Slippery things can happen at infinity if precautions are not taken. However,
the requirements of the next theorem are often satisfied.
16.10.4. MINIMAX THEOREM.
Suppose that $X$ is a closed convex subset of $\mathbb{R}^n$ and $Y$ is a closed convex subset of $\mathbb{R}^m$. Assume that $p$ is convex–concave on $X \times Y$, and in addition assume that
(1) if $X$ is unbounded, then there is a $y_0 \in Y$ so that $p(x, y_0) \to +\infty$ as $\|x\| \to \infty$ for $x \in X$.
(2) if $Y$ is unbounded, then there is an $x_0 \in X$ so that $p(x_0, y) \to -\infty$ as $\|y\| \to \infty$ for $y \in Y$.
Then $p$ has a nonempty compact convex set of saddlepoints.
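As a sanity check on the hypotheses, consider a toy example of our own (not from the text): $p(x, y) = x^2 - y^2$ is convex–concave on $\mathbb{R} \times \mathbb{R}$, condition (1) holds with $y_0 = 0$ since $p(x, 0) = x^2 \to +\infty$, condition (2) holds with $x_0 = 0$ since $p(0, y) = -y^2 \to -\infty$, and $(0, 0)$ is a saddlepoint.

```python
# Hedged sketch (toy example, not from the text): verify numerically
# that (0, 0) is a saddlepoint of p(x, y) = x^2 - y^2, i.e. that
# p(0, y) <= p(0, 0) <= p(x, 0) for all x, y.
import numpy as np

def p(x, y):
    return x * x - y * y

pts = np.linspace(-100.0, 100.0, 2001)
c = p(0.0, 0.0)                            # critical value c = 0
assert all(p(0.0, y) <= c for y in pts)    # y -> p(0, y) is maximized at y = 0
assert all(c <= p(x, 0.0) for x in pts)    # x -> p(x, 0) is minimized at x = 0
```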
PROOF. We deal only with the case in which both X and Y are unbounded. The
reader can find a modification that works when only one is unbounded.
By the hypotheses, $\max_{y \in Y} p(x_0, y) = \alpha < \infty$ and $\min_{x \in X} p(x, y_0) = \beta > -\infty$. Clearly, $\beta \le p(x_0, y_0) \le \alpha$. Set
$$X_0 = \{x \in X : p(x, y_0) \le \alpha + 1\} \quad\text{and}\quad Y_0 = \{y \in Y : p(x_0, y) \ge \beta - 1\}.$$
Note that $x_0 \in X_0$ and $y_0 \in Y_0$.
Conditions (1) and (2) guarantee that $X_0$ and $Y_0$ are bounded, and thus they are compact and convex. Let $A \times B$ be the set of saddlepoints for the restriction of $p$ to $X_0 \times Y_0$ provided by the compact case.
In particular, let $(\bar x, \bar y)$ be one saddlepoint, and let $c = p(\bar x, \bar y)$ be the critical value. Then
$$\beta \le p(\bar x, y_0) \le p(\bar x, \bar y) = c \le p(x_0, \bar y) \le \alpha.$$
Let $x \in X \setminus X_0$, so that $p(x, y_0) > \alpha + 1$. Now $p(\cdot, y_0)$ is continuous, and hence there is a point $x_1$ in $[x, \bar x]$ with $p(x_1, y_0) = \alpha + 1$. So $x_1 \in X_0$ and $x_1 \ne \bar x$. Thus $x_1 = \lambda x + (1 - \lambda)\bar x$ for some $0 < \lambda < 1$. As $p(\cdot, \bar y)$ is convex,
$$c \le p(x_1, \bar y) \le \lambda p(x, \bar y) + (1 - \lambda) p(\bar x, \bar y) = \lambda p(x, \bar y) + (1 - \lambda)c.$$
Hence $p(\bar x, \bar y) = c \le p(x, \bar y)$. Similarly, for every $y \in Y \setminus Y_0$, we may show that $p(\bar x, y) \le c = p(\bar x, \bar y)$. Therefore, $(\bar x, \bar y)$ is a saddlepoint in $X \times Y$. $\blacksquare$
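The truncation step of the proof can be visualized numerically. In the toy example $p(x, y) = x^2 - y^2$ with $x_0 = y_0 = 0$ (our own illustration, not from the text), $\alpha = \max_y p(0, y) = 0$ and $\beta = \min_x p(x, 0) = 0$, and the truncated sets $X_0$ and $Y_0$ are both the compact interval $[-1, 1]$, so the compact case applies on $[-1, 1] \times [-1, 1]$.

```python
# Hedged sketch (toy example, not from the text): compute alpha, beta,
# and the truncated sets X0, Y0 from the proof for p(x, y) = x^2 - y^2
# with x0 = y0 = 0. Coercivity makes both sets bounded.
import numpy as np

def p(x, y):
    return x * x - y * y

xs = np.linspace(-5.0, 5.0, 10001)
alpha = max(p(0.0, y) for y in xs)   # max_y p(x0, y) = 0, attained at y = 0
beta = min(p(x, 0.0) for x in xs)    # min_x p(x, y0) = 0, attained at x = 0
X0 = [x for x in xs if p(x, 0.0) <= alpha + 1]   # {x : x^2 <= 1} = [-1, 1]
Y0 = [y for y in xs if p(0.0, y) >= beta - 1]    # {y : -y^2 >= -1} = [-1, 1]
```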
Now let us see how this applies to the problem of constrained optimization. Consider the convex programming problem: minimize a convex function $f(x)$ over the closed convex set $X = \{x : g_j(x) \le 0,\ 1 \le j \le r\}$. Suppose that it satisfies Slater's condition. The Lagrangian $L(x, y) = f(x) + y_1 g_1(x) + \cdots + y_r g_r(x)$ is a convex function of $x$ for each fixed $y \in \mathbb{R}^r_+$, and is a linear function of $y$ for each $x \in \mathbb{R}^n$. So, in particular, $L$ is convex–concave on $\mathbb{R}^n \times \mathbb{R}^r_+$.