The optimal value and solution in terms of $\mathrm{prod}_1$ can be read from Figure 1 as each cut is added. With only Cut 1, the lowest value of $z_1$ occurs when $\mathrm{prod}_1 = 2$. With Cuts 1 and 2, the minimum is also achieved at $\mathrm{prod}_1 = 2$. Note that the first cut is not, however, a facet of the objective function’s graph. The cuts meet the objective at $\mathrm{prod}_1 = 1$ and $\mathrm{prod}_1 = 2$, respectively, but they need not even do this, as we mentioned earlier (see Exercise 3). The other parts of the Period 1 cuts are generated from bounds on $Q_2^2$.
This example illustrates some of the features of the nested L-shaped method. Besides the lack of a guarantee that we obtain a support of the function at each step, another possible source of delay in the algorithm’s convergence is degeneracy. As the example illustrates, the solutions at each step occur at the breakpoints where the piecewise linear pieces generated by the method meet (Exercise 5). At these places, many bases may be optimal, so that several bases may be repeated. Some remedies are possible, as in Birge [1980] and, for deterministic problems, Abrahamson [1983].
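To make these two observations concrete, the following sketch solves a small Period 1 master of the form $\min\{c\,\mathrm{prod}_1 + \theta : \theta \ge e_k - E_k\,\mathrm{prod}_1\}$ and reports which cuts are tight at the optimum. The cost, the cut coefficients $(e_k, E_k)$, and the bounds are illustrative placeholders, not the values behind Figure 1; only the structure mirrors the discussion above.

```python
# A minimal sketch of a Period 1 master problem with two optimality cuts.
# The numbers below are illustrative placeholders, not the data of Figure 1.
import numpy as np
from scipy.optimize import linprog

# Variables: x = prod_1 and theta, the outer approximation of the
# expected future cost passed back from Period 2.
c = np.array([1.0, 1.0])          # minimize (unit cost)*prod_1 + theta

# Each optimality cut reads  theta >= e_k - E_k * prod_1,
# i.e.  -E_k * prod_1 - theta <= -e_k  in <= form for linprog.
cuts = [(4.0, 2.0),               # hypothetical Cut 1: (e_1, E_1)
        (1.0, 0.5)]               # hypothetical Cut 2: (e_2, E_2)
A_ub = np.array([[-E, -1.0] for (e, E) in cuts])
b_ub = np.array([-e for (e, E) in cuts])

bounds = [(0.0, 3.0),             # placeholder bounds on prod_1
          (None, None)]           # theta is free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_opt, theta_opt = res.x
print(f"prod_1 = {x_opt:.2f}, theta = {theta_opt:.2f}, z_1 = {res.fun:.2f}")

# Cuts that hold with equality at the optimum: when more than one is tight,
# the minimizer sits at a breakpoint and several bases may be optimal.
tight = [k for k, (e, E) in enumerate(cuts, start=1)
         if abs(e - E * x_opt - theta_opt) < 1e-8]
print("cuts tight at the optimum:", tight)
```

With these placeholder cuts, both are tight at the minimizer, which is exactly the situation in which degenerate optimal bases can recur.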
As with the standard two-stage L-shaped method, the nested L-shaped method achieves its greatest gains by combining the solutions of many subproblems through bunching (or sifting). In addition, multicuts are valuable in multistage as well as two-stage problems. Infanger [1991, 1994] has also suggested generating many cuts simultaneously when future scenarios all have similar structure. This procedure may make bunching efficient for periods other than $H$ by making every constraint matrix identical for all scenarios in a given period. In this way, only objective and right-hand-side coefficients vary among the different scenarios.
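As a rough illustration of the single-cut versus multicut choice, the sketch below forms both an aggregated cut and per-scenario cuts from dual multipliers of already-solved subproblems. The names and data layout ($p_k$, $\pi_k$, $h_k$, $T_k$) are assumptions made for the illustration, not any particular implementation.

```python
# A minimal sketch of forming optimality cuts from per-scenario duals pi_k.
# Assumed data shapes: p is a list of probabilities, pis a list of dual
# vectors, hs a list of right-hand sides, Ts a list of technology matrices.
import numpy as np

def aggregated_cut(p, pis, hs, Ts):
    """Single cut  theta >= e - E @ x  with the scenarios averaged out:
    e = sum_k p_k pi_k' h_k  and  E = sum_k p_k pi_k' T_k."""
    e = sum(pk * (pik @ hk) for pk, pik, hk in zip(p, pis, hs))
    E = sum(pk * (pik @ Tk) for pk, pik, Tk in zip(p, pis, Ts))
    return e, E

def multicuts(pis, hs, Ts):
    """Multicut variant: one cut  theta_k >= e_k - E_k @ x  per scenario,
    so the master carries a separate theta_k for each scenario."""
    return [(pik @ hk, pik @ Tk) for pik, hk, Tk in zip(pis, hs, Ts)]
```

The multicut master retains more information per iteration at the price of a larger master problem; the aggregated form is the standard single-cut update.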
In terms of primal decomposition, we mentioned the work of Noël and Smeers at the outset of this chapter. They apply nested Dantzig-Wolfe decomposition to the dual of the original problem. As we saw in Chapter 5, this is equivalent to applying outer linearization to the primal problem. The only difference is that they allow for some nonlinear terms in their constraints, which would correspond to a nonlinear objective in the primal model. Because the problems are still convex, the nonlinearity does not really alter the algorithm. The only concern may be the finiteness of convergence.
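To recall the Chapter 5 correspondence in symbols, here is a schematic two-stage statement (not Noël and Smeers’ multistage formulation, and ignoring feasibility cuts): with recourse function $Q(x) = \max\{\pi^\top(h - Tx) : W^\top \pi \le q\}$, the outer-linearized primal master
\begin{align*}
\min_{x,\theta}\ & c^\top x + \theta \\
\text{s.t. } & Ax = b,\ x \ge 0, \\
& \theta \ \ge\ \pi_j^\top (h - Tx), \qquad j = 1,\dots,J,
\end{align*}
built from extreme points $\pi_1,\dots,\pi_J$ of the dual feasible set, is the linear programming dual of the Dantzig-Wolfe restricted master that represents the dual variables as convex combinations $\pi = \sum_j \lambda_j \pi_j$ with $\sum_j \lambda_j = 1$, $\lambda \ge 0$.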
The advantage of a primal or dual implementation generally rests in the problem structure, although either primal or dual simplex may be used within either method, making the two approaches largely indistinguishable. Gassmann [1990] presents some indication that dual iterations may be preferred in bunching. In general, many primal columns and few rows would tend to favor a primal approach (outer linearization as in the L-shaped method), while few columns and many rows would tend to favor a dual approach. In any case, the form of the algorithm and all proofs of convergence apply to either form.
While nested decomposition and other linearization methods are particularly well-suited to linear problems, the general methods apply equally well to convex nonlinear problems (i.e., problems with convex, time-separable objectives and convex constraints; see Exercise 8). Birge and Rosa [1996] describe a nested decomposition of this form applied to global energy-economy-environment interaction