sweep”. After this smoothing sweep the solution can be prolonged to the next finer grid,
where the procedure is repeated. Once we have smoothed the solution on the finest grid, we
start ascending back to coarser grids. The interpolation from a finer grid to a coarser grid
is called a “restriction”. The coarser grids now “learn” from the finer grids by comparing
their last solution with the one that comes back from a finer grid. This comparison provides
an estimate for the local truncation error, which can be accounted for with the help of an
artificial source term. On each grid we again perform smoothing sweeps, now improving
the solution because we inherit the smaller truncation error from the finer grids. On the
coarsest grid we again perform a direct solve, now, as on the finer grids, with an artificial
source term that corrects for the truncation error, to improve the global features of the solution.
These sweeps through the grid hierarchy can be repeated until the solution has converged
to a predetermined accuracy.
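As a concrete illustration, the following minimal sketch implements one V-cycle of this idea for the one-dimensional model problem u″ = f with homogeneous Dirichlet boundaries. For simplicity it uses the plain linear correction scheme, restricting only residuals rather than carrying the truncation-error source term of the full approximation scheme described above; the smoother, the transfer operators, and the grid sizes are all illustrative assumptions.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """A few weighted-Jacobi smoothing sweeps for u'' = f."""
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1] - 2.0 * u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction: finer grid -> next coarser grid."""
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    return rc

def prolong(uc):
    """Linear-interpolation prolongation: coarser grid -> next finer grid."""
    u = np.zeros(2 * (len(uc) - 1) + 1)
    u[::2] = uc
    u[1::2] = 0.5 * (uc[:-1] + uc[1:])
    return u

def v_cycle(u, f, h):
    if len(u) <= 3:                                # coarsest grid: direct solve
        u[1] = -0.5 * h * h * f[1]                 # single interior unknown
        return u
    u = smooth(u, f, h)                            # pre-smoothing sweep
    rc = restrict(residual(u, f, h))               # residual to the coarser grid
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)   # recursive coarse-grid solve
    u += prolong(ec)                               # coarse-grid correction
    return smooth(u, f, h)                         # post-smoothing sweep

# Usage: solve u'' = f on [0, 1] with u(0) = u(1) = 0.
n = 128
x = np.linspace(0.0, 1.0, n + 1)
f = np.sin(np.pi * x)
u = np.zeros_like(x)
for _ in range(10):                                # repeat cycles until converged
    u = v_cycle(u, f, 1.0 / n)
print(np.max(np.abs(u + np.sin(np.pi * x) / np.pi**2)))   # ~ truncation error
```

The characteristic virtue of such cycles is that the error shrinks by a roughly constant factor per cycle, independent of the grid size. The full approximation scheme described above additionally restricts the solution itself, not just the residual, which is what yields the truncation-error estimate.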
Another popular method is the conjugate gradient method, which is also designed
for large, sparse matrices. Implementations for applications in numerical relativity are
discussed in the literature.¹⁵
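As a brief illustration (ours, not drawn from these references), the sketch below applies SciPy's conjugate gradient routine to the one-dimensional model problem above; since conjugate gradients require a symmetric positive-definite matrix, the problem is posed as −u″ = f:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 127                                    # interior grid points
h = 1.0 / (n + 1)
# Tridiagonal matrix of -u''; symmetric positive definite, as CG requires.
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
x = np.linspace(h, 1.0 - h, n)
b = np.sin(np.pi * x)                      # source term
u, info = cg(A, b)                         # info == 0 signals convergence
print(np.max(np.abs(u - b / np.pi**2)))    # exact solution is sin(pi x)/pi^2
```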
The good news is that several software packages that provide many of the most efficient
and robust elliptic solvers are publicly available. These routines are fully implemented,
including versions for parallel environments.¹⁶ With the help of these packages the user
“only” needs to specify the elements of the matrix A (preferably in some band-diagonal
structure, so that only the nonvanishing bands need to be stored) together with the source
term s on the right-hand side. The user can then choose between a number of different
methods to solve the elliptic equations.
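To give a flavor of what this looks like in practice, the sketch below stores only the three nonvanishing bands of the matrix A for the one-dimensional model problem and hands them, together with the source term s, to a direct banded solver (here SciPy's solve_banded, a wrapper around LAPACK; the model problem itself is our assumption):

```python
import numpy as np
from scipy.linalg import solve_banded

n = 127                               # interior grid points
h = 1.0 / (n + 1)
# Band-diagonal storage: one row per nonvanishing band of A = -d^2/dx^2.
ab = np.zeros((3, n))
ab[0, 1:] = -1.0 / h**2               # superdiagonal
ab[1, :] = 2.0 / h**2                 # main diagonal
ab[2, :-1] = -1.0 / h**2              # subdiagonal
x = np.linspace(h, 1.0 - h, n)
s = np.sin(np.pi * x)                 # source term on the right-hand side
u = solve_banded((1, 1), ab, s)       # direct solve of A u = s
```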
Before closing this section we briefly discuss nonlinear elliptic equations. Consider an
equation of the form
    ∇²f = fⁿg,    (6.54)
where g is a given function and n is some number. Equation (6.54) reduces to the linear
equation (6.40) when n = 0. The equation is still linear for n = 1 and, upon finite
differencing, again gives rise to coupled, linear equations that are straightforward to solve
by the same matrix techniques discussed above. We note that equation (4.9) for the lapse
function α in maximal slicing contains a linear source term of this form, with n = 1. But
what about the situation for other values of n, resulting in a nonlinear equation for f? For
example, the Hamiltonian constraint (3.37) for the conformal factor ψ contains several
source terms with n = 1, n = 5, and n = −7.¹⁷ As we shall now sketch, we can solve such
a nonlinear
15. See, e.g., Oohara et al. (1997).
16. Examples include PETSc, available at acts.nersc.gov/petsc/, and LAPACK, available at
www.netlib.org/lapack/. Some of the algorithms discussed above are also implemented as modules
(or “thorns”) within Cactus (available at www.cactuscode.org), a parallel interface for performing large-scale
computations on multiple platforms.
17. We remind the reader that, depending on the sign of the exponent n, the solution to equation (6.54) may not be unique;
see exercise 3.8 for an example.
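Finally, to anticipate the nonlinear case in concrete terms: one common strategy, shown in the one-dimensional sketch below, is to linearize equation (6.54) about the current iterate and solve a sequence of linear problems, i.e., a Newton iteration. The boundary condition f → 1, the Gaussian source profile g, and the exponent n = 5 are illustrative assumptions only, not necessarily the scheme the text goes on to sketch.

```python
import numpy as np
from scipy.linalg import solve_banded

npts, nexp = 129, 5                   # grid points; exponent n in (6.54)
h = 1.0 / (npts - 1)
x = np.linspace(0.0, 1.0, npts)
g = -np.exp(-((x - 0.5) / 0.1)**2)    # hypothetical source profile
f = np.ones(npts)                     # initial guess obeying f = 1 at boundaries

for _ in range(20):                   # Newton iterations
    lap = (f[:-2] - 2.0 * f[1:-1] + f[2:]) / h**2
    res = lap - f[1:-1]**nexp * g[1:-1]            # residual of f'' = f^n g
    # Jacobian: tridiagonal Laplacian minus n f^(n-1) g on the diagonal.
    m = npts - 2
    ab = np.zeros((3, m))
    ab[0, 1:] = 1.0 / h**2                         # superdiagonal
    ab[1, :] = -2.0 / h**2 - nexp * f[1:-1]**(nexp - 1) * g[1:-1]
    ab[2, :-1] = 1.0 / h**2                        # subdiagonal
    df = solve_banded((1, 1), ab, -res)            # solve J df = -res
    f[1:-1] += df
    if np.max(np.abs(df)) < 1e-12:                 # converged
        break
```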