
For large matrices, the number of multiplication/division operations and addition/subtraction operations required to carry out the elimination routine is rather formidable. For large sparse matrices, i.e. matrices that contain many zeros, traditional Gaussian elimination results in many wasteful floating-point operations, since operations such as adding or subtracting zero, or multiplying by zero, are unnecessary steps. Tridiagonal matrices are sparse matrices that have non-zero elements only along the main diagonal and the diagonals immediately above and below it, and are often encountered in engineering problems (see Problems 2.7–2.9). Matrices that have non-zero elements only along the main diagonal and one or more other smaller diagonals to the left/right of the main diagonal are called banded matrices. Special elimination methods, such as the Thomas method, which are faster than O(n^3), are employed to reduce tridiagonal (or banded) matrices to the appropriate triangular form for performing back substitution.
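To illustrate how the banded structure is exploited, the following sketch (written here in Python with NumPy purely for illustration; the function name thomas_solve and the convention of passing the sub-, main, and super-diagonals as separate vectors are choices made for this example, not notation from the text) solves a tridiagonal system with a single forward elimination sweep and a single back substitution sweep, i.e. in O(n) operations rather than O(n^3):

import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, main diagonal b,
    super-diagonal c, and right-hand side d. a[0] and c[-1] are unused."""
    n = len(d)
    cp = np.zeros(n)              # modified super-diagonal
    dp = np.zeros(n)              # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination sweep: eliminate the sub-diagonal entries
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    # Back substitution sweep
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Small illustrative system; check against the full dense solve.
a = np.array([0.0, -1.0, -1.0, -1.0])   # sub-diagonal (a[0] unused)
b = np.array([4.0, 4.0, 4.0, 4.0])      # main diagonal
c = np.array([-1.0, -1.0, -1.0, 0.0])   # super-diagonal (c[-1] unused)
d = np.array([5.0, 5.0, 5.0, 5.0])      # right-hand side
x = thomas_solve(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(x, np.allclose(A @ x, d))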
2.5 LU factorization
Analysis of the number of floating-point operations (flops) carried out by a numerical
procedure allows one to compare the efficiency of different computational methods
in terms of the computational time and effort expended to achieve the same goal. As
mentioned earlier, Gaussian elimination requires O(n^3/3) multiplication/division operations and O(n^3/3) addition/subtraction operations to reduce an augmented matrix that represents n equations in n unknowns to an upper triangular matrix (see Section 1.6.1 for a discussion on “order of magnitude”). Back substitution is an O(n^2) process and is thus achieved much more quickly than the elimination method for large matrices. If we wish to solve systems of equations whose coefficient matrix A remains the same but whose b vectors differ using the Gaussian elimination method, we would be forced to carry out the same operations to reduce the coefficients of A in the augmented matrix every time. This is a wasteful process, considering that most of the computational work is performed during the elimination process.
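To put these orders of magnitude in perspective (the numbers below are illustrative, not from the text): for n = 1000 equations, elimination requires on the order of n^3/3 ≈ 3 × 10^8 multiplication/division operations, whereas back substitution requires only about n^2 ≈ 10^6 operations, so repeating the elimination for every new b vector discards nearly all of the computational effort already expended.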
This problem can be addressed by decoupling the elimination steps carried out on
A from those carried out on b. A can be converted into a product of two matrices: a lower
triangular matrix L and an upper triangular matrix U, such that
A = LU.
The matrix equation Ax = b now becomes
LUx = b.    (2.18)
Once A has been factored into a product of two triangular matrices, the solution is
straightforward. First, we substitute
y = Ux    (2.19)
into Equation (2.18) to obtain
Ly = b.    (2.20)
In Equation (2.20), L is a lower triangular matrix, and y can easily be solved for by the
forward substitution method, which is similar to the backward substitution process
except that the sequence of equations used to obtain the values of the unknowns starts
with the first equation and proceeds sequentially downwards through the remaining equations.
Once y is known, Equation (2.19) can easily be solved for x using the backward
substitution method. This procedure used to solve linear systems of equations is called
the LU factorization (or LU decomposition) method.
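A minimal sketch of this procedure is given below (in Python with NumPy; the function names lu_factor, forward_sub, and back_sub, the Doolittle convention of placing ones on the diagonal of L, the omission of pivoting, and the small test matrix are all assumptions made for this illustration rather than details from the text). It factors A once and then reuses L and U for two different b vectors, so each additional right-hand side costs only the O(n^2) substitution steps:

import numpy as np

def lu_factor(A):
    """Doolittle LU factorization without pivoting: A = L @ U,
    with ones on the diagonal of L. Assumes no zero pivots arise."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]        # store the elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]     # eliminate the entry below the pivot
    return L, U

def forward_sub(L, b):
    """Solve L y = b, starting from the first equation and working down."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y, starting from the last equation and working up."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Factor once, then reuse L and U for several right-hand sides.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
L, U = lu_factor(A)
for b in (np.array([1.0, 2.0, 3.0]), np.array([0.0, 5.0, -1.0])):
    x = back_sub(U, forward_sub(L, b))
    print(x, np.allclose(A @ x, b))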