
Conjugate Gradient Method on the Normal Equations


The conjugate gradient method can be applied to the normal equations. The resulting CGNE and CGNR methods are among the simplest methods for nonsymmetric or indefinite systems. Since other methods for such systems are in general considerably more complicated than the conjugate gradient method, transforming the system to a symmetric positive definite one and then applying the conjugate gradient method is attractive for its coding simplicity.

CGNE solves the system

 (AA^(T))y=b
(1)

for y and then computes the solution

 x=A^(T)y.
(2)

CGNR solves

 (A^(T)A)x=b^~
(3)

for the solution vector x, where

 b^~=A^(T)b.
(4)
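The two formulations (1)-(4) can be verified numerically. The following NumPy sketch (the 3×3 test matrix and right-hand side are illustrative choices, not from the source) checks that both routes recover the solution of the original system Ax=b; a direct solver stands in for the conjugate gradient iteration here.

```python
import numpy as np

# Illustrative nonsymmetric, nonsingular test matrix and right-hand side.
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])

# CGNE formulation: solve (A A^T) y = b, then recover x = A^T y.
y = np.linalg.solve(A @ A.T, b)
x_cgne = A.T @ y

# CGNR formulation: solve (A^T A) x = A^T b directly for x.
x_cgnr = np.linalg.solve(A.T @ A, A.T @ b)

# Both agree with the solution of the original system A x = b.
x_direct = np.linalg.solve(A, b)
print(np.allclose(x_cgne, x_direct), np.allclose(x_cgnr, x_direct))
```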

If a system of linear equations Ax=b has a nonsymmetric, possibly indefinite (but nonsingular) coefficient matrix, one obvious attempt at a solution is to apply the conjugate gradient method to a related symmetric positive definite system A^(T)Ax=A^(T)b. While this approach is easy to understand and code, the convergence speed of the conjugate gradient method now depends on the square of the condition number of the original coefficient matrix. Thus the rate of convergence of the conjugate gradient procedure on the normal equations may be slow.
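A minimal sketch of this naive approach, assuming a textbook conjugate gradient iteration and an illustrative well-conditioned random test matrix: CG is run on the explicitly formed symmetric positive definite matrix A^(T)A, and the squaring of the 2-norm condition number is checked directly (the singular values of A^(T)A are the squares of those of A).

```python
import numpy as np

def cg(M, rhs, tol=1e-10, maxiter=1000):
    # Textbook conjugate gradient for a symmetric positive definite M.
    x = np.zeros_like(rhs)
    r = rhs - M @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = np.eye(20) + 0.05 * rng.standard_normal((20, 20))  # nonsymmetric, well conditioned
b = rng.standard_normal(20)

# Naive normal-equations approach: run CG on the SPD matrix A^T A.
x = cg(A.T @ A, A.T @ b)
print(np.allclose(A @ x, b, atol=1e-6))

# The 2-norm condition number squares: kappa(A^T A) = kappa(A)^2.
print(np.isclose(np.linalg.cond(A.T @ A), np.linalg.cond(A) ** 2, rtol=1e-6))
```

For ill-conditioned A this squaring is exactly what makes the iteration slow, which motivates the stabilized variants described below.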

Several proposals have been made to improve the numerical stability of this method. The best known is by Paige and Saunders (1982) and is based upon applying the Lanczos method to the auxiliary system

 [I A; A^(T) 0][r; x]=[b; 0].
(5)

A clever execution of this scheme delivers the factors L and U of the LU decomposition of the tridiagonal matrix that would have been computed by carrying out the Lanczos procedure with A^(T)A.
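The Paige-Saunders scheme is available as LSQR in standard libraries, e.g. scipy.sparse.linalg.lsqr. A short sketch, assuming SciPy is installed (the test matrix is illustrative); the result is mathematically equivalent to CG on the normal equations but computed more stably.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])

# lsqr implements the Paige-Saunders Lanczos-based algorithm, avoiding
# the explicit formation of A^T A.
x = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
print(np.allclose(A @ x, b, atol=1e-8))
```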

Another means for improving the numerical stability of this normal equations approach is suggested by Björck and Elfving (1979). The observation that the matrix A^(T)A is used in the construction of the iteration coefficients through an inner product like (p,A^(T)Ap) leads to the suggestion that such an inner product be replaced by (Ap,Ap).
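This observation can be sketched in a CGNR-style iteration that never forms A^(T)A explicitly: each step applies A and A^(T) separately, and the coefficient denominator (p, A^(T)Ap) is computed as (Ap, Ap), i.e. ||Ap||^2. A minimal NumPy sketch, with an illustrative test matrix and tolerances (not from the source):

```python
import numpy as np

def cgnr(A, b, tol=1e-10, maxiter=1000):
    # CG on the normal equations without ever forming A^T A.
    # Following the Bjorck-Elfving suggestion, the inner product
    # (p, A^T A p) is evaluated as (Ap, Ap).
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)          # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (Ap @ Ap)     # (Ap, Ap) replaces (p, A^T A p)
        x += alpha * p
        r -= alpha * (A.T @ Ap)
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
A = np.eye(15) + 0.05 * rng.standard_normal((15, 15))
b = rng.standard_normal(15)
x = cgnr(A, b)
print(np.allclose(A @ x, b, atol=1e-6))
```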


See also

Biconjugate Gradient Method, Chebyshev Iteration, Conjugate Gradient Method, Conjugate Gradient Squared Method, Flexible Generalized Minimal Residual Method, Generalized Minimal Residual Method, Linear System of Equations, Minimal Residual Method, Nonstationary Iterative Method, Preconditioner, Quasi-Minimal Residual Method, Stationary Iterative Method, Symmetric LQ Method, Transpose-Free Quasi-Minimal Residual Method

This entry contributed by Noel Black and Shirley Moore, adapted from Barrett et al. (1994)


References

Barrett, R.; Berry, M.; Chan, T. F.; Demmel, J.; Donato, J.; Dongarra, J.; Eijkhout, V.; Pozo, R.; Romine, C.; and van der Vorst, H. Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd ed. Philadelphia, PA: SIAM, 1994. http://www.netlib.org/linalg/html_templates/Templates.html.

Björck, A. and Elfving, T. "Accelerated Projection Methods for Computing Pseudo-Inverse Solutions of Systems of Linear Equations." BIT 19, 145-163, 1979.

Paige, C. and Saunders, M. "LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares." ACM Trans. Math. Soft. 8, 43-71, 1982.


Cite this as:

Black, Noel and Moore, Shirley. "Conjugate Gradient Method on the Normal Equations." From MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein. https://mathworld.wolfram.com/ConjugateGradientMethodontheNormalEquations.html