
110 8. Linear Multistep Methods—V
This equation always has a solution when $h = 0$ ($u = g_n$) and we shall assume that this continues to be so when $h$ is sufficiently small. Since this is a nonlinear equation, it may have zero, one or more solutions;$^1$ in the latter case it makes sense to choose the solution closest to $x_{n+k-1}$, the value at the previous step.
The approach taken to solve Equation (8.3) depends on the nature of the
problem: if stiffness (see Section 7.2) is not an issue then we shall use either
a fixed-point iteration or pairs of LMMs called predictor-corrector pairs, oth-
erwise the Newton–Raphson method will be used. These are described in the
following sections.
8.2 Fixed-Point Iteration
Fixed-point iteration, also known as Picard iteration or the method of successive substitutions, involves making an initial guess, $u^{[0]}$ say, and substituting this into the right-hand side of (8.3), thereby producing the next approximation to the root. Generally, the next approximation, $u^{[\ell+1]}$, is computed from $u^{[\ell]}$ using
$$u^{[\ell+1]} = h\beta_k f(t_{n+k},\, u^{[\ell]}) + g_n, \qquad \ell = 0, 1, 2, \dots. \tag{8.4}$$
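As an illustrative sketch (not from the text), the iteration (8.4) can be coded for the simplest implicit LMM, the backward Euler method, where $k = 1$, $\beta_k = 1$ and $g_n = x_n$; the function and step names below are assumptions chosen for this example.

```python
# Fixed-point iteration (8.4) for one step of backward Euler:
#     u <- h*f(t_{n+1}, u) + x_n     (k = 1, beta_k = 1, g_n = x_n).
# Illustrative sketch; names and tolerances are assumptions, not the book's code.

def fixed_point_step(f, t_next, g_n, h, u0, tol=1e-12, max_iter=50):
    """Iterate u <- h*f(t_next, u) + g_n until successive iterates agree."""
    u = u0
    for _ in range(max_iter):
        u_new = h * f(t_next, u) + g_n
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    raise RuntimeError("fixed-point iteration failed to converge")

# Example: x'(t) = -2x, one backward Euler step from x_n = 1 with h = 0.1.
# The implicit equation u = -2hu + 1 has the exact root u = 1/(1 + 2h).
f = lambda t, x: -2.0 * x
h, x_n = 0.1, 1.0
u = fixed_point_step(f, t_next=0.1, g_n=x_n, h=h, u0=x_n)
```

Here the initial guess $u^{[0]} = x_n$ is the value from the previous step, as suggested below.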
There are a number of immediate issues:
1. Choice of initial guess $u^{[0]}$. Typically, the closer we can choose this to the (unknown) value of $x_{n+k}$, the fewer iterations will be required to obtain an accurate approximation. An obvious choice is to use the solution $x_{n+k-1}$ from the previous time step. In the next section we describe an improvement on this by making use of a "predictor".
2. Does the sequence $u^{[\ell]}$ converge? To analyse this, we suppose that $u^{[\ell]} = x_{n+k} + E^{[\ell]}$ and then, using the vector form of the Taylor expansion (C.3) in Appendix C, we find
$$f(t_{n+k},\, u^{[\ell]}) = f(t_{n+k},\, x_{n+k} + E^{[\ell]}) \approx f(t_{n+k},\, x_{n+k}) + \frac{\partial f}{\partial x}(t_{n+k},\, x_{n+k})\, E^{[\ell]}. \tag{8.5}$$
Substituting this into the right-hand side of (8.4) and subtracting Equation
(8.3) from the result gives
$$E^{[\ell+1]} \approx h\beta_k B E^{[\ell]}, \tag{8.6}$$
where we have used
$$B = \frac{\partial f}{\partial x}(t_{n+k},\, x_{n+k}).$$
$^1$ This issue was addressed for some scalar problems in Exercises 4.2–4.4.
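Relation (8.6) predicts that each iteration shrinks the error by roughly a factor $h\beta_k B$. A small numerical experiment (an illustrative sketch, again using backward Euler so that $\beta_k = 1$, with an assumed test problem $x' = -2x$ for which $B = \partial f/\partial x = -2$) bears this out: for a linear $f$ the contraction is exact.

```python
# Check the contraction predicted by (8.6): E^{[l+1]} ≈ h * beta_k * B * E^{[l]}.
# Backward Euler (beta_k = 1) on x' = -2x, so B = df/dx = -2.  Illustrative sketch.
f = lambda t, x: -2.0 * x
h, g_n, t_next = 0.1, 1.0, 0.1

x_star = g_n / (1.0 + 2.0 * h)   # exact root of u = h*f(t_next, u) + g_n
u = g_n                          # initial guess u^{[0]} = x_n
ratios = []
for _ in range(5):
    u_new = h * f(t_next, u) + g_n
    ratios.append((u_new - x_star) / (u - x_star))  # E^{[l+1]} / E^{[l]}
    u = u_new
# Each ratio should be (close to) h * beta_k * B = 0.1 * 1 * (-2) = -0.2,
# so the iteration converges since |h * beta_k * B| < 1.
```

With $|h\beta_k B| = 0.2 < 1$ the errors decay geometrically; halving $h$ would halve the contraction factor, which is why fixed-point iteration favours small step sizes.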