Eigenvector


Eigenvectors are a special set of vectors associated with a linear system of equations (i.e., a matrix equation) that are sometimes also known as characteristic vectors, proper vectors, or latent vectors (Marcus and Minc 1988, p. 144).

The determination of the eigenvectors and eigenvalues of a system is extremely important in physics and engineering, where it is equivalent to matrix diagonalization and arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few. Each eigenvector is paired with a corresponding so-called eigenvalue. Mathematically, two different kinds of eigenvectors need to be distinguished: left eigenvectors and right eigenvectors. However, for many problems in physics and engineering, it is sufficient to consider only right eigenvectors. The term "eigenvector" used without qualification in such applications can therefore be understood to refer to a right eigenvector.

The decomposition of a square matrix A into eigenvalues and eigenvectors is known in this work as eigen decomposition, and the fact that this decomposition is always possible as long as the matrix consisting of the eigenvectors of A is square is known as the eigen decomposition theorem.
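The eigen decomposition A=X D X^(-1) can be checked numerically. The following is a minimal sketch using NumPy (the sample matrix is an arbitrary choice, not from the text):

```python
import numpy as np

# Sketch of eigen decomposition A = X D X^(-1), valid when the matrix X
# of eigenvectors is square and invertible. Sample matrix is arbitrary.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, X = np.linalg.eig(A)   # columns of X are right eigenvectors
D = np.diag(eigenvalues)

# Reconstruct A from its eigen decomposition.
A_reconstructed = X @ D @ np.linalg.inv(X)
print(np.allclose(A, A_reconstructed))  # True
```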

Define a right eigenvector as a column vector X_R satisfying

 AX_R=lambda_RX_R,
(1)

where A is a matrix, so

 (A-lambda_RI)X_R=0,
(2)

which means that the matrix A-lambda_RI must be singular, i.e., have zero determinant,

 det(A-lambda_RI)=0.
(3)
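Equations (1) and (3) can be verified numerically; the sketch below uses NumPy with an arbitrarily chosen sample matrix (not from the text):

```python
import numpy as np

# Verify A x = lambda x and det(A - lambda I) = 0 for a sample matrix.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, X = np.linalg.eig(A)           # eigenvalues and right eigenvectors

for i in range(len(lam)):
    x = X[:, i]
    assert np.allclose(A @ x, lam[i] * x)   # equation (1): A X_R = lambda_R X_R

# Equation (3): each eigenvalue makes A - lambda I singular.
for l in lam:
    assert abs(np.linalg.det(A - l * np.eye(2))) < 1e-9
```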

Similarly, define a left eigenvector as a row vector X_L satisfying

 X_LA=lambda_LX_L.
(4)

Taking the transpose of each side gives

 (X_LA)^(T)=lambda_LX_L^(T),
(5)

which can be rewritten as

 A^(T)X_L^(T)=lambda_LX_L^(T).
(6)

Rearrange again to obtain

 (A^(T)-lambda_LI)X_L^(T)=0,
(7)

which means

 det(A^(T)-lambda_LI)=0.
(8)

Rewriting gives

0=det(A^(T)-lambda_LI)=det(A^(T)-lambda_LI^(T))
(9)
=det((A-lambda_LI)^(T))
(10)
=det(A-lambda_LI),
(11)

where the last step follows from the identity

 det(A)=det(A^(T)).
(12)

Equating equations (3) and (11), which must both hold for the eigenvalues of A, therefore requires that lambda_R=lambda_L=lambda, i.e., left and right eigenvalues are equivalent, a statement that is not true for eigenvectors.
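Following equation (6), left eigenvectors can be computed as right eigenvectors of A^(T), and the equality of left and right eigenvalues can be checked directly. A hedged sketch with an arbitrary sample matrix:

```python
import numpy as np

# Left eigenvectors of A are (right) eigenvectors of A^T, per equation (6).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam_R, _ = np.linalg.eig(A)
lam_L, XT = np.linalg.eig(A.T)      # columns of XT are X_L^T

# Left and right eigenvalues agree (up to ordering).
assert np.allclose(np.sort(lam_R), np.sort(lam_L))

for i in range(len(lam_L)):
    x_L = XT[:, i]                  # the row vector X_L, stored as a 1-D array
    assert np.allclose(x_L @ A, lam_L[i] * x_L)   # equation (4): X_L A = lambda_L X_L
```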

Let X_R be a matrix formed by the columns of the right eigenvectors and X_L be a matrix formed by the rows of the left eigenvectors. Let

 D=[lambda_1 ... 0; | ... |; 0 ... lambda_n].
(13)

Then

AX_R=X_RD
(14)
X_LA=DX_L
(15)

and

X_LAX_R=X_LX_RD
(16)
X_LAX_R=DX_LX_R,
(17)

so

 X_LX_RD=DX_LX_R.
(18)

But this equation is of the form

 CD=DC
(19)

where D is a diagonal matrix, so it must be true that C=X_LX_R is also diagonal (provided the eigenvalues lambda_i are distinct). In particular, if A is a symmetric matrix, then the left and right eigenvectors are simply each other's transpose, and if A is a self-adjoint matrix (i.e., it is Hermitian), then the left and right eigenvectors are adjoint matrices.
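The diagonality of X_LX_R (the biorthogonality of left and right eigenvectors) can be demonstrated numerically. A sketch, assuming distinct eigenvalues and an arbitrary sample matrix:

```python
import numpy as np

# For distinct eigenvalues, X_L X_R is diagonal once left and right
# eigenvectors are matched by eigenvalue.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, X_R = np.linalg.eig(A)          # columns: right eigenvectors
lam_L, XT = np.linalg.eig(A.T)       # columns: transposed left eigenvectors

# Reorder the left eigenvectors to match the eigenvalue ordering of lam.
order = [int(np.argmin(np.abs(lam_L - l))) for l in lam]
X_L = XT[:, order].T                 # rows: left eigenvectors

C = X_L @ X_R
off_diag = C - np.diag(np.diag(C))
print(np.allclose(off_diag, 0.0))    # True: C is diagonal
```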

By definition, an eigenvector cannot be the zero vector. A nonzero scalar multiple of an eigenvector is an eigenvector for the same eigenvalue, so without loss of generality, eigenvectors are often normalized to unit length.

While an n×n matrix always has n eigenvalues (counted with multiplicity), some or all of which may be degenerate, such a matrix may have between 1 and n linearly independent eigenvectors. For example, the matrix [1 1; 0 1] has only the single eigenvector (1,0).

Eigenvectors may be computed in the Wolfram Language using Eigenvectors[matrix]. This command always returns a list of length n, so any eigenvectors that are not linearly independent are returned as zero vectors. Eigenvectors and eigenvalues can be returned together using the command Eigensystem[matrix].
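A similar computation can be sketched in NumPy rather than the Wolfram Language; note, as an observed difference, that NumPy does not return zero vectors for a defective matrix but rather linearly dependent columns:

```python
import numpy as np

# For the defective matrix [1 1; 0 1], numpy.linalg.eig still returns
# two columns of eigenvectors, but they are (numerically) linearly
# dependent rather than zero vectors.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam, X = np.linalg.eig(A)
print(lam)                          # degenerate eigenvalue 1 (twice)
print(abs(np.linalg.det(X)))        # ~0: the columns are linearly dependent
```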

Given a 3×3 matrix A with eigenvectors x_1, x_2, and x_3 and corresponding eigenvalues lambda_1, lambda_2, and lambda_3, then an arbitrary vector y can be written

 y=b_1x_1+b_2x_2+b_3x_3.
(20)

Applying the matrix A,

Ay=b_1Ax_1+b_2Ax_2+b_3Ax_3
(21)
=lambda_1(b_1x_1+(lambda_2)/(lambda_1)b_2x_2+(lambda_3)/(lambda_1)b_3x_3),
(22)

so

 A^ny=lambda_1^n[b_1x_1+((lambda_2)/(lambda_1))^nb_2x_2+((lambda_3)/(lambda_1))^nb_3x_3].
(23)

If |lambda_1|>|lambda_2|,|lambda_3| and b_1!=0, the terms involving lambda_2 and lambda_3 become negligible as n->infty, so

 A^ny approx lambda_1^nb_1x_1
(24)

for large n, and repeated application of the matrix to an arbitrary vector amazingly results in a vector proportional to the eigenvector with the largest (in magnitude) eigenvalue.
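This is the idea behind power iteration. A minimal sketch, with an arbitrary sample matrix and normalization at each step to avoid overflow:

```python
import numpy as np

# Power iteration: repeatedly apply A to an arbitrary starting vector;
# the normalized result converges to the eigenvector with the
# largest-magnitude eigenvalue (here 5 for this sample matrix).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
y = np.array([1.0, 0.0])            # arbitrary starting vector with b_1 != 0

for _ in range(50):
    y = A @ y
    y = y / np.linalg.norm(y)       # renormalize each step

# Rayleigh quotient estimates the dominant eigenvalue.
lam_est = y @ A @ y
print(lam_est)
```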


See also

Eigen Decomposition, Eigen Decomposition Theorem, Eigenfunction, Eigenvalue, Left Eigenvector, Matrix, Matrix Diagonalization, Matrix Equation, Right Eigenvector

References

Arfken, G. "Eigenvectors, Eigenvalues." §4.7 in Mathematical Methods for Physicists, 3rd ed. Orlando, FL: Academic Press, pp. 229-237, 1985.

Marcus, M. and Minc, H. Introduction to Linear Algebra. New York: Dover, p. 145, 1988.

Press, W. H.; Flannery, B. P.; Teukolsky, S. A.; and Vetterling, W. T. "Eigensystems." Ch. 11 in Numerical Recipes in FORTRAN: The Art of Scientific Computing, 2nd ed. Cambridge, England: Cambridge University Press, pp. 449-489, 1992.

Cite this as:

Weisstein, Eric W. "Eigenvector." From MathWorld--A Wolfram Web Resource. https://mathworld.wolfram.com/Eigenvector.html
