begins to answer our question about local optima, it is not always an operational result.
Unfortunately, the mathematics of identifying convexity lies beyond the scope of this
book. For most practical purposes, we have a convex feasible region if we have no
constraints at all or if we have a feasible set of linear constraints. In addition, we have a
convex or concave objective function if it is linear or if it is made up of quadratic
terms or terms involving such smooth functions as e^(ax), log(ax), and the like, with
coefficients that are all positive or all negative.
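For a quadratic objective, the convexity condition above can be checked directly: the function is convex exactly when its Hessian matrix is positive semidefinite, which we can test by inspecting eigenvalues. The sketch below uses a made-up matrix Q for illustration; it is not a model from the text.

```python
import numpy as np

# A quadratic objective f(x) = x^T Q x + c^T x (Q symmetric) is convex
# exactly when its Hessian 2Q is positive semidefinite, i.e., when all
# eigenvalues of Q are nonnegative. Q here is hypothetical example data.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])

eigenvalues = np.linalg.eigvalsh(Q)  # eigenvalues of a symmetric matrix
is_convex = bool(np.all(eigenvalues >= 0))
print(is_convex)
```

With this Q both eigenvalues are positive, so the quadratic is (strictly) convex; flipping the sign of Q would make it concave instead.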
In some problems, we cannot tell whether the requisite conditions are satisfied, so
we cannot be sure whether the solution generated by the GRG algorithm is a global
optimum. In such cases, we may have to tolerate some ambiguity about the optimal
solution. As the previous discussion has suggested, there are two ways we can try
to build some evidence that Solver has located the global optimum. First, we can
plot a graph of the objective function and see whether it provides a confirming picture.
However, this approach is essentially limited to one- or two-variable models. A second
response is to re-run Solver starting from several different initial points to see whether
the same optimal solution occurs. However, there is no theory to tell us how many such
runs to make, how to select the initial points advantageously, or how to guarantee that
a global optimum has been found.
In a nonlinear problem, if we run Solver only once, we may have to be lucky to
obtain the global optimum, unless we know something about the problem’s special
features. To help convince ourselves that Solver has found the global optimum, we
might try rerunning from different starting points to see whether the same solution
occurs. Even then, we can’t be too casual about selecting those starting points; in
the quantity discount example, starting points between 216 and 499 lead to the
same suboptimal result. In a complicated, multidimensional problem, we might
have to test many starting points to give us a reasonable chance of finding a global
optimum. However, it was just this type of effort that we were trying to avoid by using
Solver in the first place.
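A multistart loop of this kind can be sketched in a few lines of Python. The objective below is a hypothetical nonconvex function with two local minima, standing in for the quantity discount model (it is not the book's actual formulation), and BFGS plays the role of a local hill-climbing method like GRG.

```python
from scipy.optimize import minimize

# Hypothetical nonconvex objective with two local minima (near x = 1 and
# x = 3); this is NOT the book's quantity discount model, only a stand-in
# that exhibits the same difficulty for a local solver.
def f(x):
    x = x[0]
    return (x - 1.0) ** 2 * (x - 3.0) ** 2 + 0.5 * x

# Re-run a local solver from several starting points and keep the best
# local optimum found. A starting point near 3 gets trapped at the
# inferior local minimum, just as Solver can.
starts = [0.0, 1.5, 2.5, 4.0]
results = [minimize(f, x0=[s], method="BFGS") for s in starts]
best = min(results, key=lambda r: r.fun)
print(best.x[0], best.fun)
```

Runs started near x = 3 stop at the worse local minimum; only the starts in the left basin reach the global one, which illustrates why the choice and number of starting points matter.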
Solver contains a MultiStart option that automates the process of re-running with
different starting values of the decision variables. This option can be found on the
Engine tab of the task pane, in the Global Optimization section. (The default value
for this option is False.) When using this option, it is necessary to specify upper
and lower bounds for all decision variables. The nonnegativity requirement, assuming
it applies, can serve as a lower bound, but the user must supply an upper bound if none
is contained in the original formulation.
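The mechanics of a MultiStart-style search can be imitated outside Excel: spread starting points across the bounded range, run a bounded local search from each, and keep the best result. The objective and the bounds of [0, 600] below are hypothetical, and the starts are evenly spaced for reproducibility, whereas Solver's MultiStart samples them randomly within the bounds.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical multimodal objective; the ripples create several local
# minima inside the bounds, so a single local search can get trapped.
def cost(x):
    q = x[0]
    return np.cos(q / 30.0) + (q / 600.0) ** 2

# As with Solver's MultiStart option, every variable needs explicit
# lower and upper bounds before starting points can be generated.
lower, upper = 0.0, 600.0
starts = np.linspace(lower, upper, 8)

runs = [minimize(cost, x0=[s], method="L-BFGS-B",
                 bounds=[(lower, upper)]) for s in starts]
best = min(runs, key=lambda r: r.fun)
print(best.x[0], best.fun)
```

Each run stays inside the bounds, and the best of the eight local solutions is reported, which is exactly the logic the MultiStart option automates.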
Thus, when we use Solver, we must consider whether the algorithm we use,
applied to the optimization problem we have posed, is a reliable algorithm. By that,
we mean the algorithm is guaranteed to find a global optimum whenever one exists.
The simplex method, applied to linear programs, is always reliable. The GRG
algorithm, applied to nonlinear programs, is reliable only in special circumstances.
Because the GRG algorithm is the default choice in Solver, it is tempting to use it
when solving linear programming problems as well as nonlinear programming
problems. A problem containing a linear objective function and linear constraints
qualifies as one of the problem types in which the nonlinear solver is reliable.
However, two possible shortcomings exist in solving linear problems. First, the GRG algorithm may
Chapter 8 Nonlinear Programming