352 8 Evaluating and Approximating Expectations
$(0,0)$ to $(1,0)$ is the minimum of $Q(\bar{x},(1,0)^T) - (Q(\bar{x},(0,0)^T) + (-1,-1)(1,0)^T) = 1-(0.6-1)=1.4$ and $Q(\bar{x},(0,0)^T) - (Q(\bar{x},(1,0)^T) + (1,-1)(-1,0)^T) = 0.6-(1-1)=0.6$. Hence, the minimum error on $(0,0)$ to $(1,0)$ is $0.6$. Similarly, for $(0,0)$ to $(0,1)$, the error is $0.6$. From $(1,0)$ to $(1,1)$, the minimum error is $0.3$ if the $(0,1)$ subgradient is used at $(1,1)$; however, the minimum error on $(0,1)$ to $(1,1)$ is then $\min\{1-(0.7-1),\, 0.7-(1-1)\} = 0.7$. Thus, the maximum of these errors over each edge of the region is $0.7$, attained on the edge $(0,1)$ to $(1,1)$.
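The two edge-error evaluations above can be checked with a short script. This is only an illustrative sketch, not the book's procedure: the values $Q(\bar{x},(0,0)^T)=0.6$ and $Q(\bar{x},(1,0)^T)=1$ and the subgradients $(-1,-1)$ at $(0,0)$ and $(1,-1)$ at $(1,0)$ are read off the arithmetic in the text, and the helper name `edge_error` is introduced here for convenience.

```python
# Gap between the true value at one endpoint of an edge and the
# supporting-plane prediction built from the other endpoint's subgradient.
def edge_error(Q_at, Q_from, subgrad, step):
    """Q_at - (Q_from + subgrad . step)."""
    return Q_at - (Q_from + sum(g * s for g, s in zip(subgrad, step)))

# Edge (0,0) -> (1,0): approximate at (1,0) from (0,0), and vice versa.
# Q values and subgradients below are taken from the worked example.
e1 = edge_error(1.0, 0.6, (-1, -1), (1, 0))   # 1 - (0.6 - 1) = 1.4
e2 = edge_error(0.6, 1.0, (1, -1), (-1, 0))   # 0.6 - (1 - 1) = 0.6
min_error = min(e1, e2)                        # 0.6, as in the text
```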
To find the value of $c_1^*$ to split the interval $[a_1 = 0, b_1 = 1]$, we need to find where $Q(\bar{x},(0,1)^T) - c_1^* = Q(\bar{x},(1,1)^T) + (c_1^* - 1)$, or where $1 - c_1^* = 0.7 - 1 + c_1^*$, i.e., where $c_1^* = 0.65$. We obtain two regions, $S_1 = [0, 0.65] \times [0,1]$ and $S_2 = [0.65, 1] \times [0,1]$, with $p_1 = 0.65$ and $p_2 = 0.35$.
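The split point can be solved for directly. In this sketch the values $Q(\bar{x},(0,1)^T)=1$ and $Q(\bar{x},(1,1)^T)=0.7$ are those quoted in the example:

```python
# Solve Q(xbar,(0,1)) - c = Q(xbar,(1,1)) + (c - 1), i.e. 1 - c = 0.7 - 1 + c.
Q01, Q11 = 1.0, 0.7               # values from the worked example
c1 = (Q01 - Q11 + 1.0) / 2.0      # rearranged: 2c = Q01 - Q11 + 1, so c1 = 0.65
p1, p2 = c1, 1.0 - c1             # probabilities of S1 and S2: 0.65 and 0.35
```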
The Jensen lower bound is now $LB_2 = 0.65\,Q(\bar{x},(0.325, 0.5)^T) + 0.35\,Q(\bar{x},(0.825, 0.5)^T) = 0.65(0.2) + 0.35(0.525) = 0.31375$. The upper bounds are $UB_2^{max}(\bar{x}) = 0.65(1) + 0.35(1) = 1$, $UB_2^{mean}(\bar{x}) = 0.65(0.5)(1 + 0.65) + 0.35(0.5)(1 + 0.7) = 0.83375$, and $UB_2^{EM}(\bar{x}) = 0.65(0.25)(1 + 0.7 + 0.65 + 0.6) + 0.35(0.25)(0.7 + 0.7 + 1 + 0.65) = 0.74625$. (The simplicial bound is not given because we have split the region into rectangular parts.) Exercise 3 asks for these computations to be continued until the lower and upper bounds are within 10% of each other.
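The bound arithmetic after the split can be verified numerically. The $Q$ values at the conditional means, the region extremes, and the rectangle corners are taken verbatim from the example above; the equal corner weights of $1/4$ in the Edmundson-Madansky term reflect the uniform distribution centered on each rectangle, as used in the text:

```python
# Bounds after splitting [0,1]x[0,1] into S1 = [0,0.65]x[0,1], S2 = [0.65,1]x[0,1],
# with probabilities p1 = 0.65 and p2 = 0.35. Q values quoted from the example.

# Jensen: Q at each region's conditional mean, weighted by region probability.
LB2 = 0.65 * 0.2 + 0.35 * 0.525                              # = 0.31375

# Max bound: worst Q value over each region.
UB_max = 0.65 * 1 + 0.35 * 1                                 # = 1

# Mean bound: average of the two extreme values per region.
UB_mean = 0.65 * 0.5 * (1 + 0.65) + 0.35 * 0.5 * (1 + 0.7)   # = 0.83375

# Edmundson-Madansky: weight 1/4 on each corner of each rectangle.
UB_EM = (0.65 * 0.25 * (1 + 0.7 + 0.65 + 0.6)
         + 0.35 * 0.25 * (0.7 + 0.7 + 1 + 0.65))             # = 0.74625
```

Note how the split already tightens the gap: $UB_2^{EM} - LB_2 = 0.4325$, compared with the looser $UB_2^{max}$.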
Exercises

1. For Example 1 with $\bar{x} = (0.1, 0.7)^T$, compute $Q(\bar{x})$, the Jensen lower bound, and the upper bounds $UB^{mean}$, $UB^{max}$, $UB^{EM}$, and $UB^{\Sigma}$.
2. Prove Jensen's inequality, $E(g(\xi)) \geq g(E(\xi))$ for convex $g$, by taking the expectation of the points on a supporting hyperplane to $g(\xi)$ at $g(E(\xi))$.
3. Follow the splitting rules for Example 1 until the Edmundson-Madansky upper and Jensen lower bounds are within 10% of each other. Compare $UB^{EM}$ to $UB^{max}$ at each step.
8.3 Using Bounds in Algorithms
The bounds in Section 8.2 can be used in algorithms in a variety of ways. We describe three basic procedures in this section: (1) uses of lower bounds in the L-shaped method with stopping criteria provided by upper bounds; (2) uses of upper bounds in generalized programming with stopping rules given by lower bounds; and (3) uses of the dual formulation in the separable convex hull function. The first two approaches are described in Birge [1983] while the last is taken from Birge and Wets [1989].
The L-shaped method as described in Chapter 5 is based on iteratively providing a lower bound on the recourse objective, $Q(x)$. If a lower bound, $Q_L(x)$, is used