602 Convexity and Optimization
By Theorem 16.7.3, since $a$ is in the interior of $\mathbb{R}^n$, the subdifferential $\partial f(a)$ is a nonempty compact convex set. Thus the sum $\partial f(a) + N_A(a)$ is closed and convex by Exercise 16.2.G. Suppose that (3) fails: $0 \notin \partial f(a) + N_A(a)$. Then we may apply the Separation Theorem 16.3.3 to produce a vector $d$ and scalar $\alpha$ so that
$$\sup\{\langle s + n, d\rangle : s \in \partial f(a),\ n \in N_A(a)\} \le \alpha < \langle 0, d\rangle = 0.$$
It must be the case that $\langle n, d\rangle \le 0$ for $n \in N_A(a)$, for if $\langle n, d\rangle > 0$, then
$$\langle s + \lambda n, d\rangle = \langle s, d\rangle + \lambda \langle n, d\rangle > 0$$
for very large $\lambda$. Therefore, $d$ belongs to $N_A(a)^\circ = T_A(a)$ by Corollary 16.8.5.
Now take $n = 0$ and apply Corollary 16.7.9 to compute
$$f'(a; d) = \sup\{\langle s, d\rangle : s \in \partial f(a)\} \le \alpha < 0.$$
Thus (2) fails. Contrapositively, (2) implies (3). ■
Theorem 16.9.1 is a fundamental and very useful result. In particular, notice that condition (3) depends on where $a$ sits in the set $A$. For example, if $a$ is an interior
point of $A$, then $N_A(a) = \{0\}$ and this theorem reduces to Proposition 16.7.2.
Given that all we know about the constraint set is that it is convex, this theorem is
the best we can do. However, when the constraints are described in other terms,
such as the sublevel sets of convex functions, then we can find more detailed char-
acterizations of the optimal solutions.
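To see how condition (3) works in a simple case, consider the following illustrative example (not from the text). Take $f(x) = x$ on $\mathbb{R}$ and $A = [0, 1]$. At $a = 0$ we have $\partial f(0) = \{1\}$ and $N_A(0) = (-\infty, 0]$, so
$$\partial f(0) + N_A(0) = (-\infty, 1] \ni 0;$$
condition (3) holds, and indeed $a = 0$ minimizes $f$ over $A$. At an interior point such as $a = \tfrac{1}{2}$, however, $N_A(a) = \{0\}$ and $\partial f(a) + N_A(a) = \{1\}$ does not contain $0$, so $a$ is not optimal.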
16.9.2. DEFINITION. By a convex program, we mean the ingredients of a
minimization problem involving convex functions. Precisely, we have a convex
function $f$ on $\mathbb{R}^n$ to be minimized. The set over which $f$ is to be minimized is
not given explicitly but instead is determined by constraint conditions of the form
$g_i(x) \le 0$, where $g_1, \dots, g_r$ are convex functions. The associated problem is
$$\text{Minimize } f(x) \quad \text{subject to constraints } g_1(x) \le 0, \dots, g_r(x) \le 0.$$
We call $a \in \mathbb{R}^n$ a feasible vector for the convex program if $a$ satisfies the constraints, that is, $g_i(a) \le 0$ for $i = 1, \dots, r$. A solution $a \in \mathbb{R}^n$ of this problem is
called an optimal solution for the convex program, and $f(a)$ is the optimal value.
The set over which $f$ is minimized is the convex set $A = \bigcap_{1 \le i \le r} \{x : g_i(x) \le 0\}$.
The $r$ functional constraints may be combined to obtain $A = \{x : g(x) \le 0\}$,
where $g(x) = \max\{g_i(x) : 1 \le i \le r\}$. The function $g$ is also convex. This is
useful for technical reasons, but in practice the conditions $g_i$ may be superior (for
example, they may be differentiable). It is better to be able to express optimality
conditions in terms of the $g_i$ themselves.
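The definitions above can be sketched numerically. The following is a minimal illustration with an invented problem instance; the functions and names are hypothetical, not from the text.

```python
# Hypothetical convex program (for illustration only):
#   minimize   f(x, y) = (x - 2)^2 + y^2
#   subject to g1(x, y) = x - 1 <= 0  and  g2(x, y) = -y <= 0.

def f(x, y):
    return (x - 2) ** 2 + y ** 2

# The individual convex constraints g_i.
constraints = [
    lambda x, y: x - 1,   # g1: x <= 1
    lambda x, y: -y,      # g2: y >= 0
]

def g(x, y):
    """Combined constraint g = max_i g_i; g is again convex, and
    {g <= 0} equals the intersection of the sublevel sets {g_i <= 0}."""
    return max(gi(x, y) for gi in constraints)

def feasible(x, y):
    # a is feasible iff g_i(a) <= 0 for every i, equivalently g(a) <= 0.
    return g(x, y) <= 0

# (1, 0) is feasible, and by inspection it is the optimal solution:
# increasing x past 1 violates g1, and any other feasible point raises f.
assert feasible(1, 0) and not feasible(1.5, 0)
print(f(1, 0))  # optimal value: 1
```

Note how the single aggregated constraint `g` and the list of individual `g_i` describe the same feasible set; the code checks feasibility through `g` exactly as in the identity $A = \{x : g(x) \le 0\}$.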
In order to solve this problem, we need to impose some sort of regularity con-
dition on the constraints that allows us to use our results about sublevel sets.