optimization: Negative coefficients signal the potential for improvement in a maximization problem, and positive coefficients signal the potential for improvement in a minimization problem. The selection of the leaving variable relates only to feasibility: The minimum ratio identifies the equation containing the leaving variable, thereby ensuring that the new set of basic variables remains nonnegative. Therefore, the selection criterion is the same for minimization and maximization. One additional feature is worth noting. When no positive coefficients appear in the column for the entering variable (that is, when there are no ratios to be formed), this pattern indicates that the solution is unbounded. In other words, by making the entering variable positive, and as large as we wish, we generate a value of the objective function as large as we wish. Therefore, the objective is unbounded. If Solver encounters no positive coefficients in the column for the entering variable, it reports that the objective function is unbounded.
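To make the two selection rules concrete, here is a minimal sketch of one pivot-selection step for a maximization problem, including the unboundedness check just described. The function name and tableau layout are our own illustrative assumptions, not from the text: each constraint occupies one row ending with its right-hand side, and the last row holds the objective coefficients, where a negative entry signals potential improvement.

```python
def choose_pivot(tableau):
    """One pivot selection for a maximization tableau.

    Returns (pivot_row, pivot_col), or "optimal" when no objective-row
    coefficient is negative, or "unbounded" when no ratio can be formed.
    """
    obj = tableau[-1][:-1]              # objective-row coefficients
    # Entering variable: most negative coefficient signals improvement.
    col = min(range(len(obj)), key=lambda j: obj[j])
    if obj[col] >= 0:
        return "optimal"
    # Leaving variable: minimum ratio RHS / entry, over positive entries only,
    # which keeps the new basic variables nonnegative.
    ratios = [(row[-1] / row[col], i)
              for i, row in enumerate(tableau[:-1]) if row[col] > 0]
    if not ratios:                      # no positive coefficients in the column
        return "unbounded"
    _, row_i = min(ratios)
    return (row_i, col)
```

For example, maximizing 3x + 2y subject to x + y ≤ 4 and x + 3y ≤ 6 (with slacks already added) gives the tableau rows [1, 1, 1, 0, 4], [1, 3, 0, 1, 6], and objective row [-3, -2, 0, 0, 0]; the sketch selects column 0 as entering and row 0 as leaving, since 4/1 < 6/1.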
As described in Box A3.1, the simplex method produces an optimal solution, provided that we can initiate it with a basic feasible solution. This is not a difficult task when all the constraints are LT constraints and their right-hand sides are positive, as demonstrated in the example. A slack variable is added to each constraint in order to convert the inequality to an equation, and then all variables other than the slack variables are set equal to zero. Each slack variable appears in exactly one constraint, with a coefficient of 1, so the slack variables form a natural starting basic feasible solution. But what happens when the problem does not come with LT constraints?
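The construction just described can be sketched in a few lines. This helper is our own illustration, not part of the text: it appends one slack variable per LT constraint, turning Ax ≤ b into Ax + s = b, and reads off the starting basic feasible solution in which every original variable is zero and each slack equals its right-hand side.

```python
def add_slacks(A, b):
    """Append one slack column per constraint of A x <= b.

    Returns the augmented rows [A | I] and the starting basic solution
    (original variables zero, slacks equal to b), which is feasible
    whenever every right-hand side is nonnegative.
    """
    m = len(A)
    rows = [a + [1 if j == i else 0 for j in range(m)]
            for i, a in enumerate(A)]
    basic_solution = [0] * len(A[0]) + list(b)
    return rows, basic_solution
```

For the constraints x + y ≤ 4 and x + 3y ≤ 6, this yields rows [1, 1, 1, 0] and [1, 3, 0, 1] with starting solution (0, 0, 4, 6).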
Suppose instead we have a linear programming problem in which all constraints are equations in the original model, and in which all constraint constants are nonnegative. Now we add one variable to each equation with a coefficient of 1. These are called artificial variables. They resemble slack variables in that they allow us to form an initial basic feasible solution. They differ from slack variables in one important way: Whereas slack variables may remain positive throughout the various iterations of the simplex algorithm, and even in the optimal solution, artificial variables must all be driven to zero to feasibly satisfy the original constraints. This feature allows the simplex method to be implemented in two phases. In phase I, as it is called, the objective is to minimize the sum of the artificial variables. If this minimization reaches zero, then at the end of the phase the solution on hand is a basic feasible solution for the original problem. From that solution, we enter phase II, returning to the original objective function and following the steps outlined in Box A3.1 in order to reach an optimum.
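As a sketch of the phase I setup (the function names and layout here are illustrative assumptions, not from the text): we append one artificial variable per equation, assign the artificials a cost of 1 and the original variables a cost of 0, and start from the point where the artificials carry the right-hand sides.

```python
def phase_one_setup(A, b):
    """Set up phase I for the equality system A x = b with b >= 0.

    Returns the augmented rows [A | I], the phase I cost vector
    (0 for original variables, 1 for each artificial), and the
    starting basic feasible solution of the augmented system.
    """
    m, n = len(A), len(A[0])
    rows = [a + [1 if j == i else 0 for j in range(m)]
            for i, a in enumerate(A)]
    cost = [0] * n + [1] * m
    start = [0] * n + list(b)       # artificials carry the right-hand sides
    return rows, cost, start

def phase_one_value(cost, x):
    """Sum of the artificial variables; zero means feasibility was reached."""
    return sum(c * v for c, v in zip(cost, x))
```

Minimizing `phase_one_value` over the augmented system is exactly the phase I objective: driving it to zero forces every artificial variable out of the solution.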
The so-called two-phase simplex method has another feature. If phase I cannot reach an objective function value of zero (that is, if it is impossible to drive all artificial variables to zero), then we know that no feasible solution exists. Thus, a failure of phase I prompts Solver to display the message that it could not find a feasible solution; phase I serves as a systematic procedure for detecting inconsistency among the constraints of a model.
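A compact, self-contained sketch of phase I (our own implementation, under simplifying assumptions: a dense tableau, equality constraints, nonnegative right-hand sides, and no anti-cycling rule) shows how a positive phase I minimum exposes inconsistent constraints.

```python
def phase_one(A, b):
    """Minimize the sum of artificial variables for A x = b with b >= 0.

    Returns the optimal phase I value; zero means the equations admit a
    nonnegative solution, while a positive value signals infeasibility.
    """
    m, n = len(A), len(A[0])
    # Tableau [A | I | b]: one artificial variable per equation starts basic.
    T = [list(map(float, A[i]))
         + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    # Reduced-cost row: start from minus the phase I costs, then price out
    # the artificial basis by adding every constraint row.
    z = [0.0] * n + [-1.0] * m + [0.0]
    for row in T:
        z = [zi + ri for zi, ri in zip(z, row)]
    while True:
        # Entering variable: a positive reduced cost can still lower the
        # phase I objective; stop when none remains.
        col = max(range(n + m), key=lambda j: z[j])
        if z[col] <= 1e-9:
            return z[-1]                 # optimal sum of artificials
        # Leaving variable: minimum ratio test over positive column entries.
        # Phase I is bounded below by zero, so a ratio always exists here.
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(m) if T[i][col] > 1e-9]
        _, r = min(ratios)
        piv = T[r][col]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[r])]
        f = z[col]
        z = [v - f * w for v, w in zip(z, T[r])]
```

For the inconsistent pair x1 + x2 = 1 and x1 + x2 = 3, the phase I minimum is positive, so no feasible solution exists; this is the condition that leads Solver to report it could not find a feasible solution.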
Having addressed EQ constraints, suppose now we have a problem in which all constraints are GT constraints with RHS constants that are nonnegative. We handle GT constraints by converting them to equations and inserting two variables, an artificial variable and a surplus variable. Just as a slack variable converts an LT constraint to an equality by measuring the amount by which the right-hand side exceeds the left-hand side,