Linear endomorphisms $f:\mathbb{C}^m \to \mathbb{C}^m$, represented by a matrix $\boldsymbol{A} \in \mathbb{C}^{m \times m}$, can exhibit invariant directions $\boldsymbol{x} \neq \boldsymbol{0}$ for which
$$\boldsymbol{A}\boldsymbol{x} = \lambda \boldsymbol{x},$$
known as eigenvectors, with associated eigenvalue $\lambda \in \mathbb{C}$. Eigenvectors are non-zero elements of the null space of $\boldsymbol{A} - \lambda\boldsymbol{I}$,
$$(\boldsymbol{A} - \lambda\boldsymbol{I})\,\boldsymbol{x} = \boldsymbol{0},$$
and the null space is referred to as the eigenspace of $\boldsymbol{A}$ for eigenvalue $\lambda$, $\mathcal{E}_{\lambda} = \operatorname{null}(\boldsymbol{A} - \lambda\boldsymbol{I})$.
Non-zero solutions are obtained only if $\boldsymbol{A} - \lambda\boldsymbol{I}$ is rank-deficient (singular), or, equivalently, has linearly dependent columns, in which case
$$\det(\boldsymbol{A} - \lambda\boldsymbol{I}) = 0.$$
From the determinant definition as a "sum of all products choosing an element from each row/column", it results that
$$p_{\boldsymbol{A}}(\lambda) = \det(\lambda\boldsymbol{I} - \boldsymbol{A}) = \lambda^m + c_1\lambda^{m-1} + \cdots + c_{m-1}\lambda + c_m,$$
known as the characteristic polynomial associated with the matrix $\boldsymbol{A}$, of degree $m$. The characteristic polynomial is monic, meaning that the coefficient of the highest power $\lambda^m$ is equal to one. The fundamental theorem of algebra states that $p_{\boldsymbol{A}}(\lambda)$ of degree $m$ has $m$ roots, hence $\boldsymbol{A}$ has $m$ eigenvalues (not necessarily distinct) and $m$ associated eigenvectors. This can be stated in matrix form as
$$\boldsymbol{A}\boldsymbol{X} = \boldsymbol{X}\boldsymbol{\Lambda},$$
with
$$\boldsymbol{X} = \begin{bmatrix} \boldsymbol{x}_1 & \boldsymbol{x}_2 & \cdots & \boldsymbol{x}_m \end{bmatrix}, \qquad \boldsymbol{\Lambda} = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_m),$$
the eigenvector matrix and eigenvalue matrix, respectively.
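As a quick numerical check of the matrix form $\boldsymbol{A}\boldsymbol{X} = \boldsymbol{X}\boldsymbol{\Lambda}$, the following short Octave sketch verifies the relation up to round-off; the example matrix is an arbitrary choice, not one from the text.

% Sketch: verify A*X = X*L for an arbitrarily chosen example matrix
A = [2 1; 1 3];
[X, L] = eig(A);          % X: eigenvector matrix, L: diagonal eigenvalue matrix
disp(norm(A*X - X*L));    % expected to be of the order of machine epsilon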
The statement $\boldsymbol{A}\boldsymbol{x} = \lambda\boldsymbol{x}$, that eigenvector $\boldsymbol{x}$ is an invariant direction of the operator $\boldsymbol{A}$ along which the effect of the operator is scaling by $\lambda$, suggests that similar behavior would be obtained under a coordinate transformation $\boldsymbol{y} = \boldsymbol{T}\boldsymbol{x}$. Assuming $\boldsymbol{T}$ is of full rank and introducing $\boldsymbol{B} = \boldsymbol{T}\boldsymbol{A}\boldsymbol{T}^{-1}$, this leads to
$$\boldsymbol{B}\boldsymbol{y} = \boldsymbol{T}\boldsymbol{A}\boldsymbol{T}^{-1}\boldsymbol{T}\boldsymbol{x} = \boldsymbol{T}\boldsymbol{A}\boldsymbol{x} = \lambda\boldsymbol{T}\boldsymbol{x} = \lambda\boldsymbol{y}.$$
Upon coordinate transformation, the eigenvalues (the scaling factors along the invariant directions) stay the same. Metric-preserving coordinate transformations are of particular interest, in which case the transformation matrix $\boldsymbol{Q}$ is unitary, $\boldsymbol{Q}^{*}\boldsymbol{Q} = \boldsymbol{I}$.
Since the eigenvalues of $\boldsymbol{A}$ and $\boldsymbol{B} = \boldsymbol{T}\boldsymbol{A}\boldsymbol{T}^{-1}$ are the same, and a polynomial is completely specified by its roots and the coefficient of its highest power, the characteristic polynomials of $\boldsymbol{A}$ and $\boldsymbol{B}$ must be the same,
$$\det(\lambda\boldsymbol{I} - \boldsymbol{A}) = \det(\lambda\boldsymbol{I} - \boldsymbol{B}).$$
This can also be verified through the determinant product property,
$$\det(\lambda\boldsymbol{I} - \boldsymbol{B}) = \det\!\big(\boldsymbol{T}(\lambda\boldsymbol{I} - \boldsymbol{A})\boldsymbol{T}^{-1}\big) = \det(\boldsymbol{T})\det(\lambda\boldsymbol{I} - \boldsymbol{A})\det(\boldsymbol{T}^{-1}) = \det(\lambda\boldsymbol{I} - \boldsymbol{A}),$$
since $\det(\boldsymbol{T})\det(\boldsymbol{T}^{-1}) = \det(\boldsymbol{T}\boldsymbol{T}^{-1}) = 1$.
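A small numerical experiment can illustrate the invariance of eigenvalues under a similarity transformation; both matrices in the sketch below are arbitrary choices for illustration, with $\boldsymbol{T}$ assumed to be of full rank.

% Sketch: eigenvalues are unchanged by the similarity transformation B = T*A*inv(T)
A = [2 1; 1 3];                      % arbitrary example matrix (real, distinct eigenvalues)
T = [1 2; 3 5];                      % arbitrary full-rank transformation matrix
B = T*A/T;                           % T*A/T evaluates to T*A*inv(T)
disp(sort(eig(A))' - sort(eig(B))'); % differences expected near machine epsilon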
Reflection matrix. The matrix
$$\boldsymbol{H} = \boldsymbol{I} - 2\boldsymbol{q}\boldsymbol{q}^T, \qquad \boldsymbol{q} \in \mathbb{R}^2, \ \|\boldsymbol{q}\| = 1,$$
is the two-dimensional Householder reflector defined by the unit vector $\boldsymbol{q}$. Vectors colinear with $\boldsymbol{q}$ keep their line of action but reverse direction upon reflection, while vectors orthogonal to $\boldsymbol{q}$ (i.e., in the null space of $\boldsymbol{q}^T$) are unchanged. It is therefore to be expected that $\lambda_1 = -1$, $\boldsymbol{x}_1 = \boldsymbol{q}$, and $\lambda_2 = 1$, $\boldsymbol{x}_2 \perp \boldsymbol{q}$. This is readily verified:
$$\boldsymbol{H}\boldsymbol{q} = \boldsymbol{q} - 2\boldsymbol{q}(\boldsymbol{q}^T\boldsymbol{q}) = -\boldsymbol{q}, \qquad \boldsymbol{H}\boldsymbol{x}_2 = \boldsymbol{x}_2 - 2\boldsymbol{q}(\boldsymbol{q}^T\boldsymbol{x}_2) = \boldsymbol{x}_2.$$
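The expected eigenstructure of the reflector can also be checked numerically; in the sketch below the unit vector $\boldsymbol{q}$ defining the reflector is an arbitrary choice.

% Sketch: eigenvalues of the 2D Householder reflector H = I - 2*q*q'
q = [cos(pi/6); sin(pi/6)];   % arbitrary unit vector
H = eye(2) - 2*(q*q');
disp(eig(H)');                % expect -1 (direction q) and +1 (direction orthogonal to q)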
Rotation matrix. The matrix
$$\boldsymbol{R}(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
represents the isometric rotation of two-dimensional vectors by angle $\theta$. If $\theta = 0$, $\boldsymbol{R} = \boldsymbol{I}$, with eigenvalues $\lambda_1 = \lambda_2 = 1$ and eigenvector matrix $\boldsymbol{X} = \boldsymbol{I}$. For $\theta = \pi$, the eigenvalues are $\lambda_1 = \lambda_2 = -1$, again with eigenvector matrix $\boldsymbol{X} = \boldsymbol{I}$. If $\theta \notin \{0, \pi\}$, the orientation of any non-zero $\boldsymbol{x} \in \mathbb{R}^2$ changes upon rotation by $\theta$. The characteristic polynomial
$$p(\lambda) = (\cos\theta - \lambda)^2 + \sin^2\theta = \lambda^2 - 2\lambda\cos\theta + 1$$
has complex roots
$$\lambda_{1,2} = \cos\theta \pm i\sin\theta = e^{\pm i\theta},$$
and the directions of invariant orientation have complex components (are outside the real plane $\mathbb{R}^2$),
$$\boldsymbol{x}_{1,2} = \begin{bmatrix} 1 \\ \mp i \end{bmatrix}.$$
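Numerically, the complex eigenvalues $e^{\pm i\theta}$ can be observed directly; the rotation angle in the sketch below is an arbitrary choice.

% Sketch: complex eigenvalues of a 2D rotation matrix
theta = pi/3;                                         % arbitrary rotation angle
R = [cos(theta) -sin(theta); sin(theta) cos(theta)];
disp(eig(R)');                                        % expect cos(theta) +/- i*sin(theta)
disp(exp([1i -1i]*theta));                            % e^{i*theta} and e^{-i*theta} for comparison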
Second-order differentiation matrix. Eigenvalues of matrices arising from discretization of continuum operators can be obtained from the operator eigenproblem. The second-order differentiation operator $\partial_{xx}$ has eigenvalues $-\xi^2$ associated with eigenfunctions $\sin(\xi x)$,
$$\partial_{xx}\sin(\xi x) = -\xi^2\sin(\xi x).$$
Sampling of $u(x) = \sin(\xi x)$ at $x_j = jh$, $j = 1, \ldots, m$, $h = \pi/(m+1)$, leads to the vector $\boldsymbol{u} \in \mathbb{R}^m$ with components $u_j = \sin(\xi x_j)$. The boundary conditions at the sampling interval end-points affect the eigenvalues. Imposing $u = 0$ at $x = 0$ and $x = \pi$ leads to $\xi = k \in \mathbb{N}$. The second derivative can be approximated at the sample points through the centered finite differences
$$u''(x_j) \cong \frac{u_{j-1} - 2u_j + u_{j+1}}{h^2}, \qquad j = 1, \ldots, m, \quad u_0 = u_{m+1} = 0.$$
The derivative approximation vector results from a linear mapping $\boldsymbol{v} = \boldsymbol{D}\boldsymbol{u}$, and the matrix
$$\boldsymbol{D} = \frac{1}{h^2}\begin{bmatrix} -2 & 1 & & \\ 1 & -2 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & -2 \end{bmatrix} \in \mathbb{R}^{m \times m}$$
has eigenvectors $\boldsymbol{u}_k$ with components $\sin(k x_j)$ and eigenvalues
$$\lambda_k = -\frac{4}{h^2}\sin^2\!\left(\frac{kh}{2}\right), \qquad k = 1, \ldots, m.$$
In the limit of an infinite number of sampling points, $h \to 0$, the continuum eigenvalues are obtained, $\lambda_k \to -k^2$, exemplifying again the correspondence principle between discrete and continuum representations.
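The approach of the discrete eigenvalues to the continuum values can be explored numerically; the sketch below assumes the homogeneous boundary conditions on $[0, \pi]$ used above and an arbitrarily chosen number of sample points.

% Sketch: eigenvalues of the second-difference matrix approach the continuum values -k^2
m = 100; h = pi/(m+1);                                        % sample spacing (Dirichlet conditions assumed)
D = (diag(-2*ones(m,1)) + diag(ones(m-1,1),1) + diag(ones(m-1,1),-1)) / h^2;
lambda = sort(eig(D), 'descend');                             % least negative eigenvalues first
k = (1:4)';
disp([lambda(1:4) -k.^2]);                                    % discrete vs. continuum eigenvalues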
A solution to the eigenvalue problem $\boldsymbol{A}\boldsymbol{X} = \boldsymbol{X}\boldsymbol{\Lambda}$ always exists, but the eigenvectors of $\boldsymbol{A}$ do not always form a basis set, i.e., $\boldsymbol{X}$ is not always of full rank. The factorized form of the characteristic polynomial of $\boldsymbol{A}$ is
$$p_{\boldsymbol{A}}(\lambda) = (\lambda - \lambda_1)^{m_1}(\lambda - \lambda_2)^{m_2}\cdots(\lambda - \lambda_p)^{m_p},$$
with $p \leq m$ denoting the number of distinct roots of $p_{\boldsymbol{A}}(\lambda)$, and $m_k$ the algebraic multiplicity of eigenvalue $\lambda_k$, defined as the number of times the root is repeated. Let $\mathcal{E}_k$ denote the associated eigenspace, $\mathcal{E}_k = \operatorname{null}(\boldsymbol{A} - \lambda_k\boldsymbol{I})$. The dimension of $\mathcal{E}_k$, denoted by $n_k = \dim\mathcal{E}_k$, is the geometric multiplicity of eigenvalue $\lambda_k$. The eigenvector matrix $\boldsymbol{X}$ is of full rank when the vector sum of the eigenspaces covers $\mathbb{C}^m$, as established by the following results.
Proposition. Eigenspaces associated with distinct eigenvalues intersect only in the zero vector, $\mathcal{E}_j \cap \mathcal{E}_k = \{\boldsymbol{0}\}$ for $\lambda_j \neq \lambda_k$.

Proof. Let $\boldsymbol{x} \in \mathcal{E}_j \cap \mathcal{E}_k$, hence $\boldsymbol{x} \in \mathcal{E}_j$ and $\boldsymbol{x} \in \mathcal{E}_k$, hence $\boldsymbol{A}\boldsymbol{x} = \lambda_j\boldsymbol{x}$ and $\boldsymbol{A}\boldsymbol{x} = \lambda_k\boldsymbol{x}$. Subtraction gives
$$(\lambda_j - \lambda_k)\,\boldsymbol{x} = \boldsymbol{0}.$$
Since $\lambda_j \neq \lambda_k$ it results that $\boldsymbol{x} = \boldsymbol{0}$.
Proposition. The geometric multiplicity of an eigenvalue is at most equal to its algebraic multiplicity, $n_k \leq m_k$.

Proof. Let $\{\boldsymbol{u}_1, \ldots, \boldsymbol{u}_{n_k}\}$ be an orthogonal basis for the eigenspace $\mathcal{E}_k$.
If the column vectors of $\boldsymbol{X}$ are linearly independent, then $\boldsymbol{X}$ is invertible and $\boldsymbol{A}$ can be reduced to diagonal form,
$$\boldsymbol{A} = \boldsymbol{X}\boldsymbol{\Lambda}\boldsymbol{X}^{-1}, \qquad \boldsymbol{\Lambda} = \boldsymbol{X}^{-1}\boldsymbol{A}\boldsymbol{X}.$$
Diagonal forms are useful in solving linear ODE systems,
$$\dot{\boldsymbol{y}} = \boldsymbol{A}\boldsymbol{y} \ \Rightarrow \ \dot{\boldsymbol{z}} = \boldsymbol{\Lambda}\boldsymbol{z}, \qquad \boldsymbol{z} = \boldsymbol{X}^{-1}\boldsymbol{y},$$
and also useful in repeatedly applying $\boldsymbol{A}$,
$$\boldsymbol{A}^k = \boldsymbol{X}\boldsymbol{\Lambda}^k\boldsymbol{X}^{-1}.$$
When can a matrix be reduced to diagonal form? When its eigenvectors are linearly independent, such that the inverse of $\boldsymbol{X}$ exists. A short sketch of the matrix-power use is given below.
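The sketch uses an arbitrarily chosen example matrix with distinct eigenvalues to illustrate computing a matrix power through the diagonal form.

% Sketch: matrix powers through the diagonal form, A^k = X*L^k*inv(X)
A = [2 1; 1 3];                 % arbitrary example with distinct eigenvalues
[X, L] = eig(A);
k = 5;
disp(norm(A^k - X*L^k/X));      % X*L^k/X is X*L^k*inv(X); expected near round-off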
Matrices with distinct eigenvalues are diagonalizable. Consider $\boldsymbol{A} \in \mathbb{C}^{m \times m}$ with eigenvalues $\lambda_j \neq \lambda_k$ for $j \neq k$.

Proof. By contradiction. Take any two eigenvalues $\lambda_j \neq \lambda_k$ and assume that $\boldsymbol{x}_j$ would depend linearly on $\boldsymbol{x}_k$, $\boldsymbol{x}_j = c\,\boldsymbol{x}_k$ for some $c \neq 0$. Then
$$\boldsymbol{A}\boldsymbol{x}_j = \lambda_j\boldsymbol{x}_j = c\,\lambda_j\boldsymbol{x}_k, \qquad \boldsymbol{A}\boldsymbol{x}_j = c\,\boldsymbol{A}\boldsymbol{x}_k = c\,\lambda_k\boldsymbol{x}_k,$$
and subtracting would give $c\,(\lambda_j - \lambda_k)\,\boldsymbol{x}_k = \boldsymbol{0}$. Since $\boldsymbol{x}_k$ is an eigenvector, $\boldsymbol{x}_k \neq \boldsymbol{0}$, and since $c \neq 0$, we obtain the contradiction $\lambda_j = \lambda_k$.
The characteristic polynomial might have repeated roots. Establishing diagonalizability in that case requires additional concepts (the algebraic and geometric multiplicities of the repeated eigenvalues).

Finding eigenvalues as roots of the characteristic polynomial, $p_{\boldsymbol{A}}(\lambda) = 0$, is suitable only for small matrices:
analytical root-finding formulas are available only for polynomial degrees four or less;
small errors in the characteristic polynomial coefficients can lead to large errors in the roots.
Octave/Matlab procedures to find the characteristic polynomial and its roots:
the poly(A) function returns the coefficients of the characteristic polynomial of A;
the roots(p) function computes the roots of the polynomial with coefficient vector p.
octave] A=[5 -4 2; 5 -4 1; -2 2 -3]; disp(A);
   5  -4   2
   5  -4   1
  -2   2  -3
octave] p=poly(A); disp(p);
   1.00000   2.00000  -1.00000  -2.00000
octave] r=roots(p); disp(r');
   1.0000  -2.0000  -1.0000
octave]
Find eigenvectors as non-trivial solutions of the system $(\boldsymbol{A} - \lambda\boldsymbol{I})\boldsymbol{x} = \boldsymbol{0}$. For example, for $\lambda_1 = 1$,
$$\boldsymbol{A} - \boldsymbol{I} = \begin{bmatrix} 4 & -4 & 2 \\ 5 & -5 & 1 \\ -2 & 2 & -4 \end{bmatrix} \sim \begin{bmatrix} 1 & -1 & -1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.$$
Note the convenient choice of row operations to reduce the amount of arithmetic, and the use of the knowledge that $\boldsymbol{A} - \lambda\boldsymbol{I}$ is singular to deduce that the last row must be null. In traditional form, the above row-echelon reduced system corresponds to
$$x_1 - x_2 - x_3 = 0, \quad x_3 = 0 \ \Rightarrow \ \boldsymbol{x}_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \ \text{(up to scaling)}.$$
In Octave/Matlab the computations are carried out by the null function.
octave] null(A+5*eye(3))'
ans = [](0x3)
octave]
Since $-5$ is not an eigenvalue of $\boldsymbol{A}$, the matrix $\boldsymbol{A} + 5\boldsymbol{I}$ is non-singular, its null space contains only the zero vector, and null returns an empty result.
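For comparison (an added illustration, not part of the session above), shifting by an actual eigenvalue of $\boldsymbol{A}$, e.g. $\lambda = 1$ found earlier, should make null return a normalized eigenvector proportional to $[1\ 1\ 0]^T$.

% Sketch: null() applied with a shift by the actual eigenvalue lambda = 1
A = [5 -4 2; 5 -4 1; -2 2 -3];
x = null(A - 1*eye(3));        % expected: a normalized multiple of [1; 1; 0]
disp(x');
disp(norm(A*x - x));           % residual expected near machine epsilon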
The eigenvalues of the identity matrix $\boldsymbol{I}$ are all equal to one, but small errors in numerical computation can give roots of the characteristic polynomial with spurious imaginary parts.
octave> lambda=roots(poly(eye(3))); disp(lambda')
   1.00001 - 0.00001i   1.00001 + 0.00001i   0.99999 - 0.00000i
octave>
In the following example notice that if we slightly perturb $\boldsymbol{A}$ (by a quantity less than 0.0005 = 0.05%), the eigenvalues are perturbed by a larger amount, e.g., 0.13%.
octave] A=[-2 1 -1; 5 -3 6; 5 -1 4]; disp([eig(A) eig(A+0.001*(rand(3,3)-0.5))])
   3.0000 + 0.0000i   3.0005 + 0.0000i
  -2.0000 + 0.0000i  -2.0000 + 0.0161i
  -2.0000 + 0.0000i  -2.0000 - 0.0161i
octave]
Extracting eigenvalues and eigenvectors is a commonly encountered operation, and specialized functions exist to carry this out, including the eig function:
octave> [X,L]=eig(A); disp([L X]);
  -2.00000   0.00000   0.00000  -0.57735  -0.00000   0.57735
   0.00000   3.00000   0.00000   0.57735   0.70711  -0.57735
   0.00000   0.00000  -2.00000   0.57735   0.70711  -0.57735
octave> disp(null(A-3*eye(3)))
   0.00000
   0.70711
   0.70711
octave> disp(null(A+2*eye(3)))
   0.57735
  -0.57735
  -0.57735
octave>
Recall the definitions of eigenvalue algebraic and geometric multiplicities. In this example, the eigenvalue $\lambda = -2$ is a double root of the characteristic polynomial (algebraic multiplicity 2), but its eigenspace is one-dimensional (geometric multiplicity 1), so the eigenvector matrix $\boldsymbol{X}$ is rank-deficient.
octave> A=[-2 1 -1; 5 -3 6; 5 -1 4]; [X,L]=eig(A); disp(L);
Diagonal Matrix
  -2.0000        0        0
        0   3.0000        0
        0        0  -2.0000
octave> disp(X);
  -5.7735e-01  -1.9153e-17   5.7735e-01
   5.7735e-01   7.0711e-01  -5.7735e-01
   5.7735e-01   7.0711e-01  -5.7735e-01
octave> disp(null(A+2*eye(3)));
   0.57735
  -0.57735
  -0.57735
octave> disp(rank(X))
2
octave>
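The multiplicities for this example can also be computed directly; the sketch below uses an ad hoc tolerance to count repeated eigenvalues (algebraic multiplicity) and measures the dimension of the eigenspace with null (geometric multiplicity).

% Sketch: algebraic vs. geometric multiplicity of lambda = -2 for the example above
A = [-2 1 -1; 5 -3 6; 5 -1 4];
lambda = -2; tol = 1e-6;                       % ad hoc tolerance for root coincidence
alg = sum(abs(eig(A) - lambda) < tol);         % algebraic multiplicity (expect 2)
geo = size(null(A - lambda*eye(3)), 2);        % geometric multiplicity (expect 1)
disp([alg geo]);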
The SVD $\boldsymbol{A} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T$, $\boldsymbol{A} \in \mathbb{R}^{m \times n}$, is determined by the eigendecompositions of $\boldsymbol{A}^T\boldsymbol{A}$ and $\boldsymbol{A}\boldsymbol{A}^T$:
$\boldsymbol{A}^T\boldsymbol{A} = \boldsymbol{V}(\boldsymbol{\Sigma}^T\boldsymbol{\Sigma})\boldsymbol{V}^T$, an eigendecomposition of $\boldsymbol{A}^T\boldsymbol{A}$. The columns of $\boldsymbol{V}$ are eigenvectors of $\boldsymbol{A}^T\boldsymbol{A}$ and called right singular vectors of $\boldsymbol{A}$;
$\boldsymbol{A}\boldsymbol{A}^T = \boldsymbol{U}(\boldsymbol{\Sigma}\boldsymbol{\Sigma}^T)\boldsymbol{U}^T$, an eigendecomposition of $\boldsymbol{A}\boldsymbol{A}^T$. The columns of $\boldsymbol{U}$ are eigenvectors of $\boldsymbol{A}\boldsymbol{A}^T$ and called left singular vectors of $\boldsymbol{A}$.
The matrix $\boldsymbol{\Sigma} \in \mathbb{R}^{m \times n}$ has the form
$$\boldsymbol{\Sigma} = \begin{bmatrix} \operatorname{diag}(\sigma_1, \ldots, \sigma_r) & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} \end{bmatrix}, \qquad \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0,$$
and $\sigma_1, \ldots, \sigma_r$ are the singular values of $\boldsymbol{A}$.
The singular value decomposition (SVD) furnishes complete information about $\boldsymbol{A}$:
$\operatorname{rank}(\boldsymbol{A}) = r$ (the number of non-zero singular values);
the columns of $\boldsymbol{V}$ and $\boldsymbol{U}$ are orthogonal bases for the domain and codomain of $\boldsymbol{A}$, respectively.
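The connection between the SVD and the eigendecomposition of $\boldsymbol{A}^T\boldsymbol{A}$ can be checked numerically; the rectangular matrix in the sketch below is an arbitrary example.

% Sketch: singular values of A are the square roots of the eigenvalues of A'*A
A = [1 2; 3 4; 5 6];                       % arbitrary rectangular example
s = svd(A);                                % singular values, in decreasing order
mu = sort(eig(A'*A), 'descend');           % eigenvalues of A'*A
disp([s sqrt(mu)]);                        % the two columns should agree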