The special structure of a matrix can be exploited to obtain more efficient factorizations. Evaluation of the linear combination $\boldsymbol{b}=\boldsymbol{A}\boldsymbol{x}$ requires $\mathcal{O}(mn)$ floating point operations (flops) for $\boldsymbol{A}\in\mathbb{R}^{m\times n}$, $\boldsymbol{x}\in\mathbb{R}^{n}$. Evaluation of linear combinations $\boldsymbol{C}=\boldsymbol{A}\boldsymbol{B}$, $\boldsymbol{B}\in\mathbb{R}^{n\times p}$, requires $\mathcal{O}(mnp)$ flops. If it is possible to evaluate $\boldsymbol{A}\boldsymbol{x}$ with fewer operations, the matrix is said to be structured. Examples include:
Banded matrices $\boldsymbol{A}\in\mathbb{R}^{m\times n}$ have $a_{ij}=0$ if $i>j+p$ or $j>i+q$, with $p,q$ denoting the lower and upper bandwidths. If $p=q=0$ the matrix is diagonal. If $p=q=b$ the matrix is said to have bandwidth $b$, i.e., for $b=1$ the matrix is tridiagonal, and for $b=2$ the matrix is pentadiagonal. Lower triangular matrices have $q=0$, while upper triangular matrices have $p=0$. The product $\boldsymbol{A}\boldsymbol{x}$ requires $\mathcal{O}((p+q+1)\,m)$ flops (see the sketch following this list).
Sparse matrices have only a few non-zero elements, say at most $r$ non-zero elements per row or at most $c$ non-zero elements per column. The product $\boldsymbol{A}\boldsymbol{x}$ requires $\mathcal{O}(rm)$ or $\mathcal{O}(cn)$ flops.
Circulant matrices are square, $\boldsymbol{A}\in\mathbb{R}^{n\times n}$, and have $a_{ij}=c_{(i-j)\bmod n}$, a property that can be exploited (through the fast Fourier transform) to compute $\boldsymbol{A}\boldsymbol{x}$ using $\mathcal{O}(n\log n)$ operations.
For square, rank-deficient matrices $\boldsymbol{A}=\boldsymbol{U}\boldsymbol{V}^{T}$, $\boldsymbol{U},\boldsymbol{V}\in\mathbb{R}^{n\times k}$ with $k<n$, the product $\boldsymbol{A}\boldsymbol{x}=\boldsymbol{U}(\boldsymbol{V}^{T}\boldsymbol{x})$ can be evaluated in $\mathcal{O}(kn)$ flops.
When the matrices in a product are symmetric (hence square), the redundancy $a_{ij}=a_{ji}$ can be exploited in storage and arithmetic, reducing the flop count relative to the general case.
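As a concrete illustration of the reduced cost of a banded matrix-vector product (the sketch referenced in the banded item above), the following Python/NumPy code is a minimal sketch: the helper tridiag_matvec, the problem size, and the random test data are illustrative choices, not part of the text. It forms a tridiagonal product using only the three stored diagonals, in $\mathcal{O}(n)$ flops, and checks it against the dense $\mathcal{O}(n^{2})$ computation.

import numpy as np

def tridiag_matvec(dl, dm, du, x):
    # product A x for tridiagonal A stored as subdiagonal dl, diagonal dm,
    # superdiagonal du; roughly 5n flops instead of ~2n^2 for the dense product
    b = dm * x
    b[:-1] += du * x[1:]
    b[1:] += dl * x[:-1]
    return b

n = 2000
rng = np.random.default_rng(0)
dl, dm, du = rng.standard_normal(n - 1), rng.standard_normal(n), rng.standard_normal(n - 1)
x = rng.standard_normal(n)
A = np.diag(dm) + np.diag(du, 1) + np.diag(dl, -1)          # dense copy of the same matrix
print(np.allclose(A @ x, tridiag_matvec(dl, dm, du, x)))    # True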
Special structure of a matrix is typically associated with underlying symmetries of a particular phenomenon. For example, the law of action and reaction in dynamics (Newton's third law) leads to real symmetric matrices, $\boldsymbol{A}\in\mathbb{R}^{n\times n}$, $\boldsymbol{A}=\boldsymbol{A}^{T}$. Consider a system of $n$ point masses with nearest-neighbor interactions on the real line, where the interaction force depends on relative position. Assume that the force exerted by particle $j$ on particle $i$ is linear,
$$ f_{ij} = k_{ij}\,(x_j - x_i), $$
with $x_i$ denoting the displacement of particle $i$ from its equilibrium position. The law of action and reaction then states that
$$ f_{ji} = -f_{ij} \;\Rightarrow\; k_{ji}\,(x_i - x_j) = -k_{ij}\,(x_j - x_i) \;\Rightarrow\; k_{ji} = k_{ij}. $$
If the same force law holds at all positions, then $k_{ij}=k$ for all interacting pairs.
The force on particle $i$ is given by the sum of forces from its neighboring particles,
$$ f_i = k\,(x_{i+1}-x_i) + k\,(x_{i-1}-x_i) = k\,(x_{i-1}-2x_i+x_{i+1}). $$
Introducing $\boldsymbol{x}=[\,x_1\;x_2\;\dots\;x_n\,]^{T}$, $\boldsymbol{f}=[\,f_1\;f_2\;\dots\;f_n\,]^{T}$, and assuming fixed ends, $x_0=x_{n+1}=0$, the above is stated as
$$ \boldsymbol{f} = \boldsymbol{K}\boldsymbol{x}, \qquad \boldsymbol{K} = k\begin{bmatrix} -2 & 1 & & \\ 1 & -2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & -2 \end{bmatrix}, $$
with $\boldsymbol{K}$ a symmetric matrix, $\boldsymbol{K}=\boldsymbol{K}^{T}$, a direct consequence of the law of action and reaction. The matrix $\boldsymbol{K}$ is in this case tridiagonal as a consequence of the assumption of nearest-neighbor interactions. Recall that matrices represent linear mappings, hence the same relation can be stated as
$$ \boldsymbol{f} = \mathcal{L}(\boldsymbol{x}) = \boldsymbol{K}\boldsymbol{x}, $$
with $\mathcal{L}$ the force-displacement linear mapping, Fig. 1, obtaining the same symmetric, tridiagonal matrix.
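The stiffness pattern above is easy to verify numerically. A minimal sketch (NumPy assumed; the chain length $n$, stiffness $k$, and the fixed walls at $x_0=x_{n+1}=0$ are illustrative assumptions) assembles $\boldsymbol{K}$ directly from the pairwise force law and confirms that it is symmetric and tridiagonal:

import numpy as np

n, k = 6, 2.0
K = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):      # nearest neighbors; j = -1, n are fixed walls
        K[i, i] -= k              # contribution -k x_i of each neighboring interaction
        if 0 <= j < n:
            K[i, j] += k          # contribution +k x_j (walls have zero displacement)

print(np.allclose(K, K.T))        # True: symmetry reflects action and reaction
print(K / k)                      # tridiagonal pattern tridiag(1, -2, 1)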
This concept can be extended to complex matrices through $\boldsymbol{A}=\boldsymbol{A}^{H}$ (with $\boldsymbol{A}^{H}=\bar{\boldsymbol{A}}^{T}$ the conjugate transpose), in which case $\boldsymbol{A}$ is said to be self-adjoint or hermitian. Again, this property is often associated with desired physical properties, such as the requirement of real observable quantities in quantum mechanics. Diagonal elements of a hermitian matrix must be real, $a_{ii}=\bar{a}_{ii}$, and for any $\boldsymbol{x}\in\mathbb{C}^{n}$, the computation
$$ (\boldsymbol{x}^{H}\boldsymbol{A}\boldsymbol{x})^{H} = \boldsymbol{x}^{H}\boldsymbol{A}^{H}\boldsymbol{x} = \boldsymbol{x}^{H}\boldsymbol{A}\boldsymbol{x} $$
implies for the scalar $\alpha=\boldsymbol{x}^{H}\boldsymbol{A}\boldsymbol{x}\in\mathbb{C}$ that
$$ \bar{\alpha} = \alpha, $$
hence $\boldsymbol{x}^{H}\boldsymbol{A}\boldsymbol{x}$ is real.
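A quick numerical confirmation of both facts is sketched below (NumPy assumed; manufacturing the hermitian test matrix as $(\boldsymbol{B}+\boldsymbol{B}^{H})/2$ is an arbitrary illustrative choice).

import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                   # hermitian by construction
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

alpha = x.conj() @ A @ x                   # the scalar x^H A x
print(np.allclose(np.diag(A).imag, 0.0))   # True: diagonal elements are real
print(np.isclose(alpha.imag, 0.0))         # True: x^H A x is real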
The work (i.e., energy stored in the system) needed to displace all the point masses of the above system from their equilibrium positions against the internal forces is
$$ W = \frac{1}{2}\,k\sum_{i=0}^{n}(x_{i+1}-x_i)^{2} = -\frac{1}{2}\,\boldsymbol{x}^{T}\boldsymbol{K}\boldsymbol{x}, $$
and physical considerations state that $W\geq 0$. This leads to the following definitions: a hermitian matrix $\boldsymbol{A}\in\mathbb{C}^{n\times n}$ is said to be positive definite if $\boldsymbol{x}^{H}\boldsymbol{A}\boldsymbol{x}>0$ for all $\boldsymbol{x}\neq\boldsymbol{0}$, and positive semidefinite if $\boldsymbol{x}^{H}\boldsymbol{A}\boldsymbol{x}\geq 0$ for all $\boldsymbol{x}\in\mathbb{C}^{n}$.
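The two expressions for the stored energy can be checked against each other numerically; the sketch below (NumPy assumed; chain length, stiffness, and the random displacement are illustrative) evaluates $W$ both as a sum over spring extensions and as $-\tfrac{1}{2}\boldsymbol{x}^{T}\boldsymbol{K}\boldsymbol{x}$, obtaining the same non-negative value.

import numpy as np

n, k = 6, 2.0
K = k * (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n))
rng = np.random.default_rng(2)
x = rng.standard_normal(n)

ext = np.diff(np.concatenate(([0.0], x, [0.0])))   # spring extensions, fixed ends
W_springs = 0.5 * k * np.sum(ext ** 2)             # (1/2) k sum (x_{i+1} - x_i)^2
W_quad = -0.5 * x @ K @ x                          # -(1/2) x^T K x
print(np.isclose(W_springs, W_quad), W_quad >= 0)  # True True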
If $\boldsymbol{A}\in\mathbb{C}^{n\times n}$ is hermitian positive definite, then so is $\boldsymbol{X}^{H}\boldsymbol{A}\boldsymbol{X}$ for any $\boldsymbol{X}\in\mathbb{C}^{n\times m}$ of full column rank, since $\boldsymbol{y}^{H}(\boldsymbol{X}^{H}\boldsymbol{A}\boldsymbol{X})\boldsymbol{y}=(\boldsymbol{X}\boldsymbol{y})^{H}\boldsymbol{A}(\boldsymbol{X}\boldsymbol{y})>0$ for $\boldsymbol{y}\neq\boldsymbol{0}$. Choosing
$$ \boldsymbol{X} = \begin{bmatrix} \boldsymbol{I}_{m} \\ \boldsymbol{0} \end{bmatrix} \in \mathbb{C}^{n\times m} $$
gives $\boldsymbol{X}^{H}\boldsymbol{A}\boldsymbol{X}=\boldsymbol{A}_{1:m,1:m}$, the leading principal submatrix of $\boldsymbol{A}$, itself a hermitian positive definite matrix. Choosing $\boldsymbol{x}=\boldsymbol{e}_{i}$, the $i$-th column of the identity matrix, shows that the diagonal elements of $\boldsymbol{A}$ are positive,
$$ a_{ii} = \boldsymbol{e}_{i}^{H}\boldsymbol{A}\,\boldsymbol{e}_{i} > 0. $$
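Both consequences are easy to check numerically; the following is a sketch (NumPy assumed; the hermitian positive definite test matrix $\boldsymbol{B}^{H}\boldsymbol{B}+n\boldsymbol{I}$ and the submatrix size $m$ are arbitrary illustrative choices).

import numpy as np

rng = np.random.default_rng(3)
n, m = 6, 3
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B.conj().T @ B + n * np.eye(n)                 # hermitian positive definite

print(np.all(np.linalg.eigvalsh(A) > 0))           # True: A is positive definite
print(np.all(np.linalg.eigvalsh(A[:m, :m]) > 0))   # True: so is the leading principal submatrix
print(np.all(np.diag(A).real > 0))                 # True: diagonal elements are positive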
The structure of a hermitian positive definite matrix $\boldsymbol{A}=\boldsymbol{A}^{H}$ can be preserved by modification of $LU$-factorization. The resulting algorithm is known as Cholesky factorization, and its first stage is stated as
$$ \boldsymbol{A} = \begin{bmatrix} a_{11} & \boldsymbol{b}^{H} \\ \boldsymbol{b} & \boldsymbol{C} \end{bmatrix} = \begin{bmatrix} \alpha & \boldsymbol{0} \\ \boldsymbol{b}/\alpha & \boldsymbol{I} \end{bmatrix} \begin{bmatrix} 1 & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{C}-\boldsymbol{b}\boldsymbol{b}^{H}/a_{11} \end{bmatrix} \begin{bmatrix} \alpha & \boldsymbol{b}^{H}/\alpha \\ \boldsymbol{0} & \boldsymbol{I} \end{bmatrix}, \qquad \alpha=\sqrt{a_{11}}, $$
whence $\boldsymbol{A}=\boldsymbol{L}_{1}\boldsymbol{A}_{1}\boldsymbol{L}_{1}^{H}$; the square root exists since $a_{11}>0$ by positive definiteness. Repeating the stage-1 step on the hermitian positive definite submatrix $\boldsymbol{C}-\boldsymbol{b}\boldsymbol{b}^{H}/a_{11}$ leads to
$$ \boldsymbol{A} = \boldsymbol{L}_{1}\boldsymbol{L}_{2}\cdots\boldsymbol{L}_{n}\,\boldsymbol{L}_{n}^{H}\cdots\boldsymbol{L}_{2}^{H}\boldsymbol{L}_{1}^{H} = \boldsymbol{L}\boldsymbol{L}^{H}, $$
with $\boldsymbol{L}=\boldsymbol{L}_{1}\boldsymbol{L}_{2}\cdots\boldsymbol{L}_{n}$ lower triangular.
The resulting Cholesky algorithm is half as expensive as standard $LU$-factorization, requiring approximately $n^{3}/3$ flops instead of $2n^{3}/3$.
Algorithm (Cholesky factorization, $\boldsymbol{A}=\boldsymbol{L}\boldsymbol{L}^{H}$, overwriting the lower triangle of $\boldsymbol{A}$ with $\boldsymbol{L}$)
for $k=1$ to $n$
    $a_{kk}\leftarrow\sqrt{a_{kk}}$; $\;a_{k+1:n,k}\leftarrow a_{k+1:n,k}/a_{kk}$
    for $j=k+1$ to $n$
        $a_{j:n,j}\leftarrow a_{j:n,j}-a_{j:n,k}\,\bar{a}_{jk}$
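A direct transcription of the factorization above into Python/NumPy is sketched below (assumptions: no pivoting or validity checks, a separate factor $\boldsymbol{L}$ is built rather than overwriting $\boldsymbol{A}$, and the test matrix $\boldsymbol{B}^{H}\boldsymbol{B}+5\boldsymbol{I}$ is an arbitrary hermitian positive definite example); the result is compared with NumPy's built-in Cholesky factor.

import numpy as np

def cholesky(A):
    # sketch of the outer-product Cholesky factorization A = L L^H
    A = np.array(A, dtype=complex)            # work on a copy
    n = A.shape[0]
    L = np.zeros_like(A)
    for k in range(n):
        L[k, k] = np.sqrt(A[k, k].real)       # a_kk > 0 by positive definiteness
        L[k+1:, k] = A[k+1:, k] / L[k, k]
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k].conj())   # Schur complement update
    return L

rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = B.conj().T @ B + 5 * np.eye(5)
L = cholesky(A)
print(np.allclose(L @ L.conj().T, A))          # True: valid factorization
print(np.allclose(L, np.linalg.cholesky(A)))   # True: matches NumPy's factor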
The two-dimensional extension of the nearest-neighbor interacting point mass system