
10.2. Solving sparse linear systems 473

5. Explain how to use the information stored in the data structure suggested in the text to solve an inhomogeneous Dirichlet problem.

6. Consider the data structure suggested in the text. What information is lacking that is needed to solve an inhomogeneous Neumann problem?
10.2 Solving sparse linear systems
As we have mentioned several times, the finite element method is a practical numerical method, even for large two- and three-dimensional problems, because the matrices that it produces have mostly zero entries. A matrix is called sparse if a large percentage of its entries are known to be zero. In this section, we wish to briefly survey the subject of solving sparse linear systems.
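To make the storage saving concrete, here is a minimal Python sketch of one common sparse format, compressed sparse row (CSR), which records only the nonzero entries together with index arrays locating them. The helper names are ours, and this particular format is not prescribed by the text:

```python
def to_csr(A):
    """Convert a dense matrix (list of lists) to CSR arrays (data, col, rowptr)."""
    data, col, rowptr = [], [], [0]
    for row in A:
        for j, a in enumerate(row):
            if a != 0.0:
                data.append(a)   # nonzero value
                col.append(j)    # its column index
        rowptr.append(len(data)) # rowptr[i]:rowptr[i+1] spans row i
    return data, col, rowptr

def csr_matvec(data, col, rowptr, x):
    """Compute y = A x touching only the stored nonzeros."""
    y = [0.0] * (len(rowptr) - 1)
    for i in range(len(y)):
        for k in range(rowptr[i], rowptr[i + 1]):
            y[i] += data[k] * x[col[k]]
    return y

A = [[4.0, 0.0, -1.0],
     [0.0, 4.0, 0.0],
     [-1.0, 0.0, 4.0]]
data, col, rowptr = to_csr(A)
print(csr_matvec(data, col, rowptr, [1.0, 1.0, 1.0]))  # [3.0, 4.0, 3.0]
```

For a finite element matrix with a bounded number of nonzeros per row, the storage and the matrix-vector product cost grow linearly in n rather than as n².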
Methods for solving linear systems can be divided into two categories. A method that will produce the exact solution in a finite number of steps is called a direct method. (Actually, when implemented on a computer, even a direct method will not compute the exact solution because of unavoidable round-off error. To be precise, a direct method will compute the exact solution in a finite number of steps, provided it is implemented in exact arithmetic.) The most common direct methods are variants of Gaussian elimination. Below, we discuss modifications of Gaussian elimination for sparse matrices.
An algorithm that computes the exact solution to a linear system only as the limit of an infinite sequence of approximations is called an iterative method. There are many iterative methods in use; we will discuss the most popular: the conjugate gradient algorithm. We also touch on the topic of preconditioning, the art of transforming a linear system so as to obtain faster convergence with an iterative method.
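As a preview of the iterative approach, the conjugate gradient iteration for a symmetric positive definite system can be sketched in Python as follows. This is the standard textbook form of the algorithm; the function name, tolerance, and iteration cap are our illustrative choices, not the text's:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive definite A by conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)       # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # residual small enough: stop
            break
        p = r + (rs_new / rs) * p   # new direction, conjugate to the old ones
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```

Each iteration needs only one matrix-vector product, which is exactly the operation a sparse format makes cheap; this is why iterative methods pair so naturally with finite element matrices.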
10.2.1 Gaussian elimination for dense systems
In order to have some standard of comparison, we first briefly discuss the standard Gaussian elimination algorithm for the solution of dense systems. Our interest is in the operation count: the number of arithmetic operations necessary to solve an n × n dense system. The basic algorithm is simple to write down. In the following description, we assume that no row interchanges are required, as these do not use any arithmetic operations anyway.^79 The following pseudo-code solves Ax = b, where A ∈ R^{n×n} and b ∈ R^{n}, overwriting the values of A and b:
for i = 1, 2, ..., n - 1
    for j = i + 1, i + 2, ..., n
        m ← A_ji / A_ii
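The excerpt breaks off after the multiplier line, so here is a runnable Python sketch of the full procedure: the forward elimination the pseudo-code begins, completed with the row updates and a back substitution step. The function name and the completed loops are ours, not the text's:

```python
def gaussian_elimination(A, b):
    """Solve Ax = b by Gaussian elimination without row interchanges,
    overwriting A and b as in the pseudo-code. A is a list of lists."""
    n = len(b)
    # Forward elimination: zero out column i below the pivot A[i][i].
    for i in range(n - 1):
        for j in range(i + 1, n):
            m = A[j][i] / A[i][i]      # the multiplier from the pseudo-code
            for k in range(i, n):
                A[j][k] -= m * A[i][k] # subtract m times row i from row j
            b[j] -= m * b[i]
    # Back substitution on the resulting upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
print(gaussian_elimination(A, b))  # [0.8, 1.4]
```

Counting the triple loop gives the familiar operation count of roughly 2n³/3 arithmetic operations for the elimination phase, plus O(n²) for back substitution.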
^79 In general, Gaussian elimination is numerically unstable unless partial pivoting is used. Partial pivoting is the technique of interchanging rows to get the largest possible pivot entry. This ensures that all of the multipliers appearing in the algorithm are bounded above by one, and, in virtually every case, that roundoff error does not increase unduly. There are special classes of matrices, most notably the class of symmetric positive definite matrices, for which Gaussian elimination is provably stable with no row interchanges.
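The row interchange described in the footnote amounts to a small change in the elimination loop: before eliminating column i, swap in the row with the largest entry in magnitude. An illustrative Python sketch (names are ours):

```python
def eliminate_with_partial_pivoting(A, b):
    """Forward elimination with partial pivoting, overwriting A and b."""
    n = len(b)
    for i in range(n - 1):
        # Pick the pivot row: largest |A[j][i]| for j >= i.
        p = max(range(i, n), key=lambda j: abs(A[j][i]))
        if p != i:
            A[i], A[p] = A[p], A[i]   # interchange rows i and p
            b[i], b[p] = b[p], b[i]
        for j in range(i + 1, n):
            m = A[j][i] / A[i][i]     # |m| <= 1 by the choice of pivot
            for k in range(i, n):
                A[j][k] -= m * A[i][k]
            b[j] -= m * b[i]
    return A, b

# Without pivoting, the tiny pivot 1e-20 would produce a huge multiplier;
# with pivoting, the rows are swapped first and every multiplier is tiny.
A = [[1e-20, 1.0], [1.0, 1.0]]
b = [1.0, 2.0]
eliminate_with_partial_pivoting(A, b)
```

The bound |m| ≤ 1 on the multipliers is exactly the property the footnote credits with keeping roundoff error from growing unduly.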