Eigen Decomposition


The matrix decomposition of a square matrix A into so-called eigenvalues and eigenvectors is an extremely important one. This decomposition generally goes under the name "matrix diagonalization." However, this moniker is less than optimal, since the process being described is really the decomposition of a matrix into a product of three other matrices, only one of which is diagonal, and also because all other standard types of matrix decomposition use the term "decomposition" in their names, e.g., Cholesky decomposition, Hessenberg decomposition, and so on. As a result, the decomposition of a matrix into matrices composed of its eigenvectors and eigenvalues is called eigen decomposition in this work.

Assume A has nondegenerate eigenvalues lambda_1, lambda_2, ..., lambda_k and corresponding linearly independent eigenvectors X_1, X_2, ..., X_k, which can be denoted

\begin{bmatrix} x_{11} \\ x_{12} \\ \vdots \\ x_{1k} \end{bmatrix},\qquad \begin{bmatrix} x_{21} \\ x_{22} \\ \vdots \\ x_{2k} \end{bmatrix},\qquad \ldots,\qquad \begin{bmatrix} x_{k1} \\ x_{k2} \\ \vdots \\ x_{kk} \end{bmatrix}.
(1)

Define the matrices composed of eigenvectors

P = \begin{bmatrix} X_1 & X_2 & \cdots & X_k \end{bmatrix}
(2)
= \begin{bmatrix} x_{11} & x_{21} & \cdots & x_{k1} \\ x_{12} & x_{22} & \cdots & x_{k2} \\ \vdots & \vdots & \ddots & \vdots \\ x_{1k} & x_{2k} & \cdots & x_{kk} \end{bmatrix}
(3)

and eigenvalues

D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_k \end{bmatrix},
(4)

where D is a diagonal matrix. Then

AP = A\begin{bmatrix} X_1 & X_2 & \cdots & X_k \end{bmatrix}
(5)
= \begin{bmatrix} AX_1 & AX_2 & \cdots & AX_k \end{bmatrix}
(6)
= \begin{bmatrix} \lambda_1 X_1 & \lambda_2 X_2 & \cdots & \lambda_k X_k \end{bmatrix}
(7)
= \begin{bmatrix} \lambda_1 x_{11} & \lambda_2 x_{21} & \cdots & \lambda_k x_{k1} \\ \lambda_1 x_{12} & \lambda_2 x_{22} & \cdots & \lambda_k x_{k2} \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_1 x_{1k} & \lambda_2 x_{2k} & \cdots & \lambda_k x_{kk} \end{bmatrix}
(8)
= \begin{bmatrix} x_{11} & x_{21} & \cdots & x_{k1} \\ x_{12} & x_{22} & \cdots & x_{k2} \\ \vdots & \vdots & \ddots & \vdots \\ x_{1k} & x_{2k} & \cdots & x_{kk} \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_k \end{bmatrix}
(9)
= PD,
(10)

giving the amazing decomposition of A into a similarity transformation involving P and D,

A = PDP^{-1}.
(11)

The fact that this decomposition is always possible for a square matrix A as long as P is invertible (equivalently, as long as A has k linearly independent eigenvectors) is known in this work as the eigen decomposition theorem.
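
As a concrete check, the decomposition can be carried out numerically. The following is a minimal sketch (not part of the original text), assuming NumPy and an arbitrary 2x2 example matrix; numpy.linalg.eig returns the eigenvalues together with a matrix whose columns are the corresponding eigenvectors, i.e., the matrix P above.

import numpy as np

# An arbitrary square matrix with distinct (nondegenerate) eigenvalues.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# eig returns the eigenvalues and a matrix whose columns are the
# corresponding eigenvectors, i.e., P = [X_1 X_2 ... X_k].
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Verify the eigen decomposition A = P D P^(-1).
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True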

Furthermore, squaring both sides of equation (11) gives

A^2 = (PDP^{-1})(PDP^{-1})
(12)
= PD(P^{-1}P)DP^{-1}
(13)
= PD^2P^{-1}.
(14)

By induction, it follows that for general positive integer powers,

A^n = PD^nP^{-1}.
(15)
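
Continuing the numerical sketch above (same assumed example matrix), the power formula can be checked directly; raising the diagonal matrix D to the n-th power only requires raising its diagonal entries.

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues, P = np.linalg.eig(A)

n = 5
# D^n is diagonal with entries lambda_i^n, so no repeated matrix products are needed.
D_n = np.diag(eigenvalues ** n)

# Compare P D^n P^(-1) with A multiplied by itself n times.
print(np.allclose(np.linalg.matrix_power(A, n), P @ D_n @ np.linalg.inv(P)))  # True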

Provided that no eigenvalue is zero, the inverse of A is

A^{-1} = (PDP^{-1})^{-1}
(16)
= PD^{-1}P^{-1},
(17)

where the inverse of the diagonal matrix D is trivially given by

D^{-1} = \begin{bmatrix} \lambda_1^{-1} & 0 & \cdots & 0 \\ 0 & \lambda_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_k^{-1} \end{bmatrix}.
(18)

Equation (15) therefore holds for negative n as well as positive.
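
The same numerical sketch (same assumed example matrix, whose eigenvalues are all nonzero) illustrates the negative-power case: inverting D amounts to taking reciprocals of the eigenvalues.

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues, P = np.linalg.eig(A)

# D^(-1) is diagonal with entries 1/lambda_i (all eigenvalues are nonzero here).
D_inv = np.diag(1.0 / eigenvalues)

# Compare P D^(-1) P^(-1) with the directly computed inverse of A.
print(np.allclose(np.linalg.inv(A), P @ D_inv @ np.linalg.inv(P)))  # True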

A further remarkable result involving the matrices P and D follows from the definition of the matrix exponential

e^A = \sum_{n=0}^{\infty} \frac{A^n}{n!}
(19)
= \sum_{n=0}^{\infty} \frac{PD^nP^{-1}}{n!}
(20)
= P\left(\sum_{n=0}^{\infty} \frac{D^n}{n!}\right)P^{-1}
(21)
= Pe^DP^{-1}.
(22)

This is true since D is a diagonal matrix and

e^D = \sum_{n=0}^{\infty} \frac{D^n}{n!}
(23)
= \sum_{n=0}^{\infty} \frac{1}{n!} \begin{bmatrix} \lambda_1^n & 0 & \cdots & 0 \\ 0 & \lambda_2^n & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_k^n \end{bmatrix}
(24)
= \begin{bmatrix} \sum_{n=0}^{\infty} \frac{\lambda_1^n}{n!} & 0 & \cdots & 0 \\ 0 & \sum_{n=0}^{\infty} \frac{\lambda_2^n}{n!} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sum_{n=0}^{\infty} \frac{\lambda_k^n}{n!} \end{bmatrix}
(25)
= \begin{bmatrix} e^{\lambda_1} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_k} \end{bmatrix},
(26)

so e^(A) can be found using D.
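
Numerically (continuing the assumed example), exponentiating the eigenvalues on the diagonal of D and conjugating by P reproduces the matrix exponential computed by a general-purpose routine; the sketch below uses scipy.linalg.expm as the reference.

import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues, P = np.linalg.eig(A)

# e^D is diagonal with entries e^(lambda_i).
exp_D = np.diag(np.exp(eigenvalues))

# Compare P e^D P^(-1) with the matrix exponential computed directly.
print(np.allclose(expm(A), P @ exp_D @ np.linalg.inv(P)))  # True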


See also

Diagonalizable Matrix, Eigen Decomposition Theorem, Eigenvalue, Eigenvector, Matrix Decomposition, Singular Value Decomposition


Cite this as:

Weisstein, Eric W. "Eigen Decomposition." From MathWorld--A Wolfram Web Resource. https://mathworld.wolfram.com/EigenDecomposition.html
