An iterative method for solving an N-by-N (or non-square) linear system
Ax = b involves a sequence of matrix-vector multiplications.
Starting with an initial guess of the solution, each iteration returns a new
estimate of the solution. It is hoped that the estimates converge
to a satisfactory solution (within a tolerance) after
k iterations.
For a dense matrix
A, each iteration takes
O(N²)
operations. An iterative method takes
O(kN²) operations to
converge. For a sparse system, matrix-vector multiplication takes only
O(nNonZeros) where
nNonZeros is the number of non-zeros in the
sparse matrix. Therefore, an iterative method can be much faster than a
traditional direct method of solving a linear system, such as computing the matrix inverse.
An iterative method using sparse matrices is much faster than one using dense
matrices.
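To illustrate the O(nNonZeros) cost, here is a minimal sketch in Python/NumPy of a matrix-vector product in compressed sparse row (CSR) format; the function name and the example matrix are illustrative, not part of any particular library:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a CSR-format sparse matrix; cost is O(nNonZeros)."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        # each stored entry of row i is visited exactly once
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# the 3x3 sparse matrix [[4, 0, 1], [0, 3, 0], [1, 0, 2]] in CSR form
data = np.array([4.0, 1.0, 3.0, 1.0, 2.0])
indices = np.array([0, 2, 1, 0, 2])
indptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
print(csr_matvec(data, indices, indptr, x))  # [5. 3. 3.]
```

The two nested loops together touch only the stored non-zeros, which is where the O(nNonZeros) per-iteration cost comes from.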
Here are some guidelines for choosing an iterative solver of a sparse system.
For Hermitian problems, if the system is positive definite,
use
CG or
MINRES; otherwise, use only MINRES.
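For reference, the Conjugate Gradient method for a symmetric positive definite system can be sketched in a few lines of Python/NumPy (an illustrative implementation, not the library's own):

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate Gradient for a symmetric positive definite A (minimal sketch)."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                   # the one matrix-vector product per iteration
        alpha = rs / (p @ Ap)        # inner products: p.Ap and r.r
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # new A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])
x = cg(A, b)
print(np.allclose(A @ x, b))  # True
```

Note the inner products in each iteration; the stationary methods discussed next avoid them.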
To avoid doing inner products in CG or MINRES, we may choose the stationary
methods such as Jacobi,
Gauss-Seidel,
SOR, or
SSOR.
These methods save computation cost in each iteration, but the number of
iterations may increase unless there is a good preconditioner.
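The simplest of the stationary methods, Jacobi iteration, can be sketched as follows (an illustrative Python/NumPy version; note there are no inner products, only a matrix-vector product and a diagonal scaling per iteration):

```python
import numpy as np

def jacobi(A, b, num_iter=100):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k), where D = diag(A), R = A - D."""
    d = np.diag(A)         # diagonal of A as a vector
    R = A - np.diag(d)     # off-diagonal part
    x = np.zeros_like(b)
    for _ in range(num_iter):
        x = (b - R @ x) / d    # no inner products needed
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # strictly diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0])
x = jacobi(A, b)
print(np.allclose(A @ x, b, atol=1e-8))  # True
```

Convergence is guaranteed here because the example matrix is strictly diagonally dominant; for a general matrix, Jacobi iteration may diverge.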
For non-Hermitian problems, the choice is not so easy.
If matrix-vector multiplication is very expensive,
GMRES is probably the best
choice because it performs the fewest multiplications.
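A bare-bones GMRES (without restarts) can be sketched as below in Python/NumPy; this is an illustrative implementation assuming a zero initial guess, not the library's own:

```python
import numpy as np

def gmres(A, b, max_iter=50):
    """GMRES without restarts: one matrix-vector product per iteration (a sketch)."""
    n = len(b)
    m = min(max_iter, n)
    Q = np.zeros((n, m + 1))       # orthonormal Krylov basis
    H = np.zeros((m + 1, m))       # Hessenberg matrix from Arnoldi
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]            # the only matrix-vector product this iteration
        for i in range(j + 1):     # Arnoldi (modified Gram-Schmidt)
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:    # happy breakdown: Krylov space is exhausted
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    # minimize the residual over the Krylov subspace via a small least-squares solve
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

A = np.array([[3.0, 2.0], [1.0, 4.0]])  # non-symmetric
b = np.array([5.0, 6.0])
x = gmres(A, b)
print(np.allclose(A @ x, b))  # True
```

The trade-off is memory: GMRES stores the whole Krylov basis Q, which is why restarted variants are used in practice.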
The second best alternatives are
QMR
or
BiCG.
QMR is numerically more stable than BiCG.
When the transpose of a matrix is not available, there are transpose-free
methods such as
CGS or
BiCGSTAB.
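BiCGSTAB can be sketched as follows; notice that every operation uses A itself, never its transpose (an illustrative Python/NumPy version assuming a zero initial guess):

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=1000):
    """BiCGSTAB: a transpose-free solver for non-Hermitian systems (a sketch)."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat = r.copy()                  # fixed shadow residual
    rho = alpha = omega = 1.0
    v = p = np.zeros_like(b)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p                     # first matvec: only A, never A^T
        alpha = rho_new / (r_hat @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:   # lucky breakdown: already converged
            x += alpha * p
            break
        t = A @ s                     # second matvec: again only A
        omega = (t @ s) / (t @ t)     # local residual-minimizing step
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
        rho = rho_new
    return x

A = np.array([[3.0, 2.0], [1.0, 4.0]])  # non-symmetric
b = np.array([5.0, 6.0])
x = bicgstab(A, b)
print(np.allclose(A @ x, b))  # True
```

Each iteration performs two matrix-vector products with A, so BiCGSTAB covers two Krylov dimensions per iteration without ever forming or applying the transpose.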
For non-square systems, there are CG-type methods:
CGNR solves over-determined systems, and
CGNE solves under-determined systems.
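CGNR amounts to applying CG to the normal equations AᵀAx = Aᵀb without ever forming AᵀA explicitly. A minimal sketch in Python/NumPy (illustrative, not the library's implementation):

```python
import numpy as np

def cgnr(A, b, tol=1e-12, max_iter=1000):
    """CG on the normal equations A^T A x = A^T b: the least-squares
    solution of an over-determined system (a sketch)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x            # residual in the range of A
    z = A.T @ r              # residual of the normal equations
    p = z.copy()
    zz = z @ z
    for _ in range(max_iter):
        w = A @ p                    # A^T A is applied implicitly ...
        alpha = zz / (w @ w)         # ... since p.(A^T A)p = ||Ap||^2
        x += alpha * p
        r -= alpha * w
        z = A.T @ r                  # one matvec with A and one with A^T per iteration
        zz_new = z @ z
        if np.sqrt(zz_new) < tol:
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return x

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # 3x2 over-determined system
b = np.array([1.0, 2.0, 2.0])
x = cgnr(A, b)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```

A caveat: the condition number of AᵀA is the square of that of A, so convergence can be slow for ill-conditioned problems.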
The use of
Preconditioner
can improve the rate of convergence of an
iterative method. A preconditioner transforms a linear system into one that
is equivalent in the sense that it has the same solution. The transformed
system has more favorable spectral properties, which improve the convergence rate.
In particular, a preconditioner
M approximates the coefficient
matrix
A, and the transformed system is easier to solve. For example,
M⁻¹Ax = M⁻¹b
has the same solution as the original system
Ax = b. The spectral
properties of its coefficient matrix
M⁻¹A may be more
favorable.
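In practice, the preconditioned system is never formed explicitly; the solver only needs a routine that applies M⁻¹ to a vector. A sketch of preconditioned CG in Python/NumPy with a Jacobi (diagonal) preconditioner (illustrative names, not the library's API):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned CG: implicitly solves M^{-1} A x = M^{-1} b.
    M_inv is a function that applies the preconditioner's inverse to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)           # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)       # M^{-1}A is never formed; M_inv is applied per iteration
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = pcg(A, b, lambda v: v / d)   # Jacobi preconditioner: M = diag(A)
print(np.allclose(A @ x, b))  # True
```

The diagonal preconditioner here is the cheapest choice; more powerful ones (e.g. incomplete factorizations) trade setup and per-iteration cost for fewer iterations.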
Another way of preconditioning a system is
M1⁻¹AM2⁻¹(M2x) = M1⁻¹b
The matrices
M1 and
M2 are called the left and right preconditioners, respectively.
There are three kinds of preconditioning: left, right, and split.
Left-preconditioning leaves M2 as the
IdentityPreconditioner.
Similarly, right-preconditioning leaves M1 as the
IdentityPreconditioner.