The basic procedures within $LU$-factorization can be adapted to account for special structure of the system matrix or to obtain properties associated with the matrix, notably its determinant.

The determinant $\det(\boldsymbol{A})$ of $\boldsymbol{A} \in \mathbb{R}^{m \times m}$, a square matrix, is the oriented volume enclosed by the column vectors of $\boldsymbol{A}$ (a parallelepiped).
- Geometric interpretation of determinants
- Determinant calculation rules
- Algebraic definition of a determinant, $\det(\boldsymbol{A}) = \sum_{\sigma \in S_m} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{m\sigma(m)}$, giving the (oriented) volume of the parallelepiped spanned by matrix column vectors.
- Computation of a determinant with $O(m!)$ operations from the algebraic definition
- Computation of a determinant with $O(m^3)$ operations through row and column combinations
Where do these determinant computation rules come from? Two viewpoints:

- Geometric viewpoint: determinants express parallelepiped volumes.
- Algebraic viewpoint: determinants are computed from all possible products that can be formed by choosing a factor from each row and each column.
In two dimensions a “parallelepiped” becomes a parallelogram with area given as

$$(\text{Area}) = (\text{Base}) \times (\text{Height}) = \lVert \boldsymbol{a}_1 \rVert \, h.$$

Take $\boldsymbol{a}_1$ as the base, with length $\lVert \boldsymbol{a}_1 \rVert$. Vector $\boldsymbol{a}_1$ is at angle $\theta_1$ to the $x_1$-axis, $\boldsymbol{a}_2$ is at angle $\theta_2$ to the $x_1$-axis, and the angle between $\boldsymbol{a}_1$, $\boldsymbol{a}_2$ is $\theta = \theta_2 - \theta_1$. The height has length

$$h = \lVert \boldsymbol{a}_2 \rVert \sin\theta = \lVert \boldsymbol{a}_2 \rVert \left( \sin\theta_2 \cos\theta_1 - \cos\theta_2 \sin\theta_1 \right).$$

Use $\cos\theta_1 = a_{11} / \lVert \boldsymbol{a}_1 \rVert$, $\sin\theta_1 = a_{21} / \lVert \boldsymbol{a}_1 \rVert$, $\cos\theta_2 = a_{12} / \lVert \boldsymbol{a}_2 \rVert$, $\sin\theta_2 = a_{22} / \lVert \boldsymbol{a}_2 \rVert$ to obtain

$$(\text{Area}) = \lVert \boldsymbol{a}_1 \rVert \, \lVert \boldsymbol{a}_2 \rVert \left( \sin\theta_2 \cos\theta_1 - \cos\theta_2 \sin\theta_1 \right) = a_{11} a_{22} - a_{12} a_{21} = \det(\boldsymbol{A}).$$
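This two-dimensional formula is easy to verify numerically; a minimal sketch with arbitrary illustrative entries:

```python
import numpy as np

# Arbitrary 2x2 example matrix; columns are the parallelogram edges a1, a2
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# Oriented area from the explicit formula a11*a22 - a12*a21
area = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]

print(area, np.linalg.det(A))  # both give 5.0 (up to roundoff)
```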
The geometric interpretation of a determinant as an oriented volume is useful in establishing rules for calculation with determinants:
Determinant of a matrix with repeated columns is zero (since two edges of the parallelepiped are identical). Example for $m = 2$:

$$\begin{vmatrix} a & a \\ b & b \end{vmatrix} = ab - ab = 0.$$

This is more easily seen using the column notation $\det(\boldsymbol{A}) = \det(\boldsymbol{a}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_m)$, in which the rule reads

$$\det(\boldsymbol{a}, \boldsymbol{a}) = 0.$$
Determinant of a matrix with linearly dependent columns is zero (since one edge lies in the 'hyperplane' formed by all the others)
Separating sums in a column (similar for rows)

$$\det(\boldsymbol{a}_1 + \boldsymbol{b}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_m) = \det(\boldsymbol{a}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_m) + \det(\boldsymbol{b}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_m)$$

with $\boldsymbol{a}_1, \ldots, \boldsymbol{a}_m, \boldsymbol{b}_1 \in \mathbb{R}^m$.

Scalar product in a column (similar for rows)

$$\det(\alpha \boldsymbol{a}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_m) = \alpha \det(\boldsymbol{a}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_m)$$

with $\alpha \in \mathbb{R}$.

Linear combinations of columns (similar for rows)

$$\det(\boldsymbol{a}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_m) = \det(\boldsymbol{a}_1 + \alpha \boldsymbol{a}_2, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_m)$$

with $\alpha \in \mathbb{R}$.
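The above rules can be spot-checked numerically; a minimal sketch using arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
b1 = rng.standard_normal(3)
alpha = 2.5
det = np.linalg.det

# Separating sums in the first column
S = A.copy(); S[:, 0] += b1
B = A.copy(); B[:, 0] = b1
print(np.isclose(det(S), det(A) + det(B)))  # True

# Scalar factor in the first column
C = A.copy(); C[:, 0] *= alpha
print(np.isclose(det(C), alpha * det(A)))   # True

# Adding a multiple of column 2 to column 1 leaves the determinant unchanged
D = A.copy(); D[:, 0] += alpha * A[:, 1]
print(np.isclose(det(D), det(A)))           # True
```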
A determinant of size $m$ can be expressed as a sum of determinants of size $m - 1$ by expansion along a row or column, e.g., along column $j$,

$$\det(\boldsymbol{A}) = \sum_{i=1}^{m} (-1)^{i+j} a_{ij} M_{ij},$$

with $M_{ij}$ the minor of $a_{ij}$, the determinant of the $(m-1) \times (m-1)$ matrix obtained by deleting row $i$ and column $j$ of $\boldsymbol{A}$.
The formal definition of a determinant

$$\det(\boldsymbol{A}) = \sum_{\sigma \in S_m} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{m\sigma(m)},$$

a sum over all $m!$ permutations $\sigma$ of $\{1, \ldots, m\}$, requires $O(m!)$ operations, a number that rapidly increases with $m$.
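The factorial growth is apparent in a direct implementation of expansion along the first column, a sketch for illustration only (practical codes never evaluate determinants this way):

```python
import numpy as np

def det_expand(A):
    """Determinant by recursive expansion along the first column: O(m!) work."""
    m = A.shape[0]
    if m == 1:
        return A[0, 0]
    d = 0.0
    for i in range(m):
        minor = np.delete(A[:, 1:], i, axis=0)  # delete row i and the first column
        d += (-1) ** i * A[i, 0] * det_expand(minor)
    return d

A = np.array([[1.0, 2, 3], [-1, 0, 1], [-2, 1, 5]])
print(det_expand(A), np.linalg.det(A))  # both 2.0
```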
A more economical approach is to use row and column combinations to create zeros and then reduce the size of the determinant, an algorithm reminiscent of Gauss elimination for linear systems, requiring only $O(m^3)$ operations.
Example:

$$\begin{vmatrix} 1 & 2 & 3 \\ -1 & 0 & 1 \\ -2 & 1 & 5 \end{vmatrix} = \begin{vmatrix} 1 & 2 & 3 \\ 0 & 2 & 4 \\ 0 & 5 & 11 \end{vmatrix} = 1 \cdot \begin{vmatrix} 2 & 4 \\ 5 & 11 \end{vmatrix} = 22 - 20 = 2.$$

The first equality comes from linear combinations of rows, i.e., row 1 is added to row 2, and row 1 multiplied by 2 is added to row 3. These linear combinations maintain the value of the determinant. The second equality comes from expansion along the first column.
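A sketch of this elimination-based computation in code, assuming nonzero pivots are encountered (production codes use $LU$-factorization with pivoting):

```python
import numpy as np

def det_elim(A):
    """Determinant by Gauss-like elimination: O(m^3) work, assumes nonzero pivots."""
    U = A.astype(float)
    m = U.shape[0]
    for k in range(m - 1):
        for i in range(k + 1, m):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]  # zero column k below the pivot
    return U.diagonal().prod()  # determinant = product of the pivots

A = np.array([[1.0, 2, 3], [-1, 0, 1], [-2, 1, 5]])
print(det_elim(A), np.linalg.det(A))  # both 2.0
```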
Consider $\boldsymbol{u}, \boldsymbol{v} \in \mathbb{R}^3$. We've introduced the idea of a scalar product

$$\boldsymbol{u}^T \boldsymbol{v} = u_1 v_1 + u_2 v_2 + u_3 v_3,$$

in which from two vectors one obtains a scalar. We've also introduced the idea of an exterior product

$$\boldsymbol{u} \boldsymbol{v}^T = \begin{bmatrix} u_1 v_1 & u_1 v_2 & u_1 v_3 \\ u_2 v_1 & u_2 v_2 & u_2 v_3 \\ u_3 v_1 & u_3 v_2 & u_3 v_3 \end{bmatrix},$$

in which a matrix is obtained from two vectors. Another product of two vectors is also useful, the cross product, most conveniently expressed in determinant-like form

$$\boldsymbol{u} \times \boldsymbol{v} = \begin{vmatrix} \boldsymbol{e}_1 & \boldsymbol{e}_2 & \boldsymbol{e}_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} = (u_2 v_3 - u_3 v_2)\, \boldsymbol{e}_1 + (u_3 v_1 - u_1 v_3)\, \boldsymbol{e}_2 + (u_1 v_2 - u_2 v_1)\, \boldsymbol{e}_3.$$
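A minimal check of the determinant-like expansion against `np.cross`:

```python
import numpy as np

def cross(u, v):
    """Cross product via expansion along the first row of the symbolic determinant."""
    return np.array([u[1] * v[2] - u[2] * v[1],
                     u[2] * v[0] - u[0] * v[2],
                     u[0] * v[1] - u[1] * v[0]])

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
print(cross(u, v), np.cross(u, v))  # both [-3.  6. -3.]
```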
The special structure of a matrix can be exploited to obtain more efficient factorizations. Evaluation of the linear combination $\boldsymbol{b} = \boldsymbol{A} \boldsymbol{x} = x_1 \boldsymbol{a}_1 + \cdots + x_n \boldsymbol{a}_n$ requires $O(mn)$ floating point operations (flops) for $\boldsymbol{A} \in \mathbb{R}^{m \times n}$. Evaluation of linear combinations $\boldsymbol{b} = \boldsymbol{A} \boldsymbol{x}$, $\boldsymbol{A} \in \mathbb{R}^{m \times m}$, requires $O(m^2)$ flops. If it is possible to evaluate $\boldsymbol{b} = \boldsymbol{A} \boldsymbol{x}$ with fewer operations, the matrix is said to be structured. Examples include:
Banded matrices, $a_{ij} = 0$ if $i > j + p$ or $j > i + q$, with $p, q$ denoting the lower and upper bandwidths. If $p = q = 0$ the matrix is diagonal. If $p = q = b$ the matrix is said to have bandwidth $b$; e.g., for $b = 1$ the matrix is tridiagonal, and for $b = 2$ the matrix is pentadiagonal. Lower triangular matrices have $q = 0$, while upper triangular matrices have $p = 0$. The product $\boldsymbol{A} \boldsymbol{x}$ requires $O((p + q + 1) m)$ flops (see the tridiagonal sketch after this list).
Sparse matrices have $r \ll n$ non-zero elements per row or $c \ll m$ non-zero elements per column. The product $\boldsymbol{A} \boldsymbol{x}$ requires $O(rm)$ or $O(cn)$ flops.
Circulant matrices are square and have $a_{ij} = c_{(i - j) \bmod m}$, a property that can be exploited (through the fast Fourier transform) to compute $\boldsymbol{A} \boldsymbol{x}$ using $O(m \log m)$ operations.
For square, rank-deficient matrices $\boldsymbol{A} = \boldsymbol{B} \boldsymbol{C}^T$, $\boldsymbol{B}, \boldsymbol{C} \in \mathbb{R}^{m \times k}$, $k < m$, the product $\boldsymbol{A} \boldsymbol{x} = \boldsymbol{B} (\boldsymbol{C}^T \boldsymbol{x})$ can be evaluated in $O(km)$ flops.
When $\boldsymbol{A} = \boldsymbol{A}^T$ is symmetric (hence square), only the $m(m+1)/2$ elements on and below the diagonal need be stored, and factorization requires $\sim m^3/3$ flops instead of $\sim 2m^3/3$.
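As an illustration of the banded case, a sketch of an $O(m)$ tridiagonal matrix-vector product that stores only the three diagonals:

```python
import numpy as np

def tridiag_matvec(low, diag, up, x):
    """y = A x for tridiagonal A stored as three diagonals: O(m) flops.

    low: subdiagonal (length m-1), diag: main diagonal (length m),
    up: superdiagonal (length m-1).
    """
    y = diag * x
    y[1:] += low * x[:-1]   # contributions of the a_{i,i-1} entries
    y[:-1] += up * x[1:]    # contributions of the a_{i,i+1} entries
    return y

m = 5
low = np.ones(m - 1); diag = -2.0 * np.ones(m); up = np.ones(m - 1)
x = np.arange(1.0, m + 1)
A = np.diag(low, -1) + np.diag(diag) + np.diag(up, 1)  # dense equivalent
print(np.allclose(tridiag_matvec(low, diag, up, x), A @ x))  # True, with O(m) work
```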
Special structure of a matrix is typically associated with underlying symmetries of a particular phenomenon. For example, the law of action and reaction in dynamics (Newton's third law) leads to real symmetric matrices, $\boldsymbol{A} \in \mathbb{R}^{m \times m}$, $\boldsymbol{A} = \boldsymbol{A}^T$. Consider a system of point masses with nearest-neighbor interactions on the real line where the interaction force depends on relative position. Assume that the force exerted by particle $i$ on particle $j$ is linear,

$$f_{ij} = k_{ij} (x_i - x_j),$$

with $x_i$ denoting displacement from an equilibrium position. The law of action and reaction then states that

$$f_{ij} = -f_{ji} \Rightarrow k_{ij} (x_i - x_j) = -k_{ji} (x_j - x_i) \Rightarrow k_{ij} = k_{ji}.$$

If the same force law holds at all positions, then $k_{ij} = k$, a single spring constant.

The force on particle $i$ is given by the sum of forces from neighboring particles,

$$F_i = f_{i-1,i} + f_{i+1,i} = k (x_{i-1} - x_i) + k (x_{i+1} - x_i) = k (x_{i-1} - 2 x_i + x_{i+1}).$$

Introducing $\boldsymbol{x} = \begin{bmatrix} x_1 & x_2 & \cdots & x_m \end{bmatrix}^T$, $\boldsymbol{F} = \begin{bmatrix} F_1 & F_2 & \cdots & F_m \end{bmatrix}^T$, and assuming fixed endpoints $x_0 = x_{m+1} = 0$, the above is stated as

$$\boldsymbol{F} = \boldsymbol{K} \boldsymbol{x}, \quad \boldsymbol{K} = k \begin{bmatrix} -2 & 1 & & \\ 1 & -2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & -2 \end{bmatrix},$$

with $\boldsymbol{K}$ a symmetric matrix, $\boldsymbol{K} = \boldsymbol{K}^T$, a direct consequence of the law of action and reaction. The matrix $\boldsymbol{K}$ is in this case tridiagonal as a consequence of the assumption of nearest-neighbor interactions. Recall that matrices represent linear mappings, hence

$$\boldsymbol{F} = \boldsymbol{f}(\boldsymbol{x}) = \boldsymbol{K} \boldsymbol{x},$$

with $\boldsymbol{f}$ the force-displacement linear mapping, Fig. 2, obtaining the same symmetric, tri-diagonal matrix.
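A minimal sketch assembling this stiffness matrix for illustrative values of `m` and `k`, confirming the symmetric, tridiagonal structure:

```python
import numpy as np

def stiffness(m, k=1.0):
    """Tridiagonal matrix K of the nearest-neighbor spring system, F = K x."""
    return k * (np.diag(-2.0 * np.ones(m))
                + np.diag(np.ones(m - 1), 1)
                + np.diag(np.ones(m - 1), -1))

K = stiffness(5)
print(np.allclose(K, K.T))  # True: symmetry reflects action and reaction
```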
This concept can be extended to complex matrices through $\boldsymbol{A}^H = \bar{\boldsymbol{A}}^T = \boldsymbol{A}$, in which case $\boldsymbol{A}$ is said to be self-adjoint or hermitian. Again, this property is often associated with desired physical properties, such as the requirement of real observable quantities in quantum mechanics. Diagonal elements of a hermitian matrix must be real, and for any $\boldsymbol{x} \in \mathbb{C}^m$, the computation

$$\overline{\boldsymbol{x}^H \boldsymbol{A} \boldsymbol{x}} = \left( \boldsymbol{x}^H \boldsymbol{A} \boldsymbol{x} \right)^H = \boldsymbol{x}^H \boldsymbol{A}^H \boldsymbol{x} = \boldsymbol{x}^H \boldsymbol{A} \boldsymbol{x}$$

(the conjugate transpose of a $1 \times 1$ matrix is its complex conjugate) implies for $\alpha = \boldsymbol{x}^H \boldsymbol{A} \boldsymbol{x}$ that

$$\bar{\alpha} = \alpha,$$

hence $\boldsymbol{x}^H \boldsymbol{A} \boldsymbol{x}$ is real.
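A quick numerical illustration with an arbitrary hermitian matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B + B.conj().T                     # hermitian by construction
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)

q = x.conj() @ A @ x                   # the form x^H A x
print(np.isclose(q.imag, 0.0))         # True: the form is real
```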
The work (i.e., energy stored in the system) done by all the forces in the above point mass system is

$$W = -\frac{1}{2} \boldsymbol{x}^T \boldsymbol{K} \boldsymbol{x},$$

and physical considerations state that $W > 0$ for any non-zero displacement $\boldsymbol{x}$: deforming the system stores energy in it. This leads to the following definitions.

A hermitian matrix $\boldsymbol{A} \in \mathbb{C}^{m \times m}$ is positive definite if $\boldsymbol{x}^H \boldsymbol{A} \boldsymbol{x} > 0$ for all $\boldsymbol{x} \neq \boldsymbol{0}$, and positive semi-definite if $\boldsymbol{x}^H \boldsymbol{A} \boldsymbol{x} \geqslant 0$ for all $\boldsymbol{x}$.
If $\boldsymbol{A} \in \mathbb{C}^{m \times m}$ is hermitian positive definite, then so is $\boldsymbol{X}^H \boldsymbol{A} \boldsymbol{X}$ for any $\boldsymbol{X} \in \mathbb{C}^{m \times k}$ of full column rank (since $\boldsymbol{y} \neq \boldsymbol{0}$ implies $\boldsymbol{X} \boldsymbol{y} \neq \boldsymbol{0}$, hence $\boldsymbol{y}^H \boldsymbol{X}^H \boldsymbol{A} \boldsymbol{X} \boldsymbol{y} = (\boldsymbol{X} \boldsymbol{y})^H \boldsymbol{A} (\boldsymbol{X} \boldsymbol{y}) > 0$). Choosing

$$\boldsymbol{X} = \begin{bmatrix} \boldsymbol{I}_k \\ \boldsymbol{0} \end{bmatrix}$$

gives $\boldsymbol{X}^H \boldsymbol{A} \boldsymbol{X} = \boldsymbol{A}_{1:k,1:k}$, the leading principal submatrix of $\boldsymbol{A}$, itself a hermitian positive definite matrix. Choosing $\boldsymbol{X} = \boldsymbol{e}_j$ shows that each diagonal element of $\boldsymbol{A}$ is positive, $a_{jj} = \boldsymbol{e}_j^H \boldsymbol{A} \boldsymbol{e}_j > 0$.
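A spot check that leading principal submatrices inherit positive definiteness, using an arbitrary matrix made positive definite by construction:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = B.T @ B + 4.0 * np.eye(4)   # symmetric positive definite by construction

for k in range(1, 5):
    # smallest eigenvalue of the k-th leading principal submatrix is positive
    print(k, np.linalg.eigvalsh(A[:k, :k]).min() > 0)  # True for every k
```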
The structure of a hermitian positive definite matrix $\boldsymbol{A} \in \mathbb{C}^{m \times m}$ can be preserved by modification of $LU$-factorization. The resulting algorithm is known as Cholesky factorization, and its first stage is stated as

$$\boldsymbol{A} = \begin{bmatrix} a_{11} & \boldsymbol{w}^H \\ \boldsymbol{w} & \boldsymbol{B} \end{bmatrix} = \begin{bmatrix} \alpha & \boldsymbol{0}^T \\ \boldsymbol{w}/\alpha & \boldsymbol{I} \end{bmatrix} \begin{bmatrix} 1 & \boldsymbol{0}^T \\ \boldsymbol{0} & \boldsymbol{B} - \boldsymbol{w} \boldsymbol{w}^H / a_{11} \end{bmatrix} \begin{bmatrix} \alpha & \boldsymbol{w}^H / \alpha \\ \boldsymbol{0} & \boldsymbol{I} \end{bmatrix}, \quad \alpha = \sqrt{a_{11}},$$

whence $\boldsymbol{A} = \boldsymbol{R}_1^H \boldsymbol{A}_1 \boldsymbol{R}_1$, with $\boldsymbol{R}_1$ the rightmost (upper triangular) factor and $\boldsymbol{A}_1$ the middle factor. Repeating the stage-1 step on the trailing submatrix $\boldsymbol{B} - \boldsymbol{w} \boldsymbol{w}^H / a_{11}$ leads to

$$\boldsymbol{A} = \boldsymbol{R}_1^H \boldsymbol{R}_2^H \cdots \boldsymbol{R}_m^H \boldsymbol{R}_m \cdots \boldsymbol{R}_2 \boldsymbol{R}_1 = \boldsymbol{R}^H \boldsymbol{R},$$

with $\boldsymbol{R} = \boldsymbol{R}_m \cdots \boldsymbol{R}_2 \boldsymbol{R}_1$ upper triangular.
The resulting Cholesky algorithm is half as expensive as standard $LU$-factorization, requiring $\sim m^3/3$ flops instead of $\sim 2m^3/3$.
Algorithm (Cholesky factorization, $\boldsymbol{A} = \boldsymbol{R}^H \boldsymbol{R}$)

$\boldsymbol{R} = \boldsymbol{A}$
for $k = 1$ to $m$
  for $j = k + 1$ to $m$
    $\boldsymbol{R}_{j,j:m} = \boldsymbol{R}_{j,j:m} - \boldsymbol{R}_{k,j:m} \, \bar{r}_{kj} / r_{kk}$
  $\boldsymbol{R}_{k,k:m} = \boldsymbol{R}_{k,k:m} / \sqrt{r_{kk}}$
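A minimal runnable sketch of this algorithm (unblocked, relying on the positive pivots guaranteed by positive definiteness), checked against `np.linalg.cholesky`:

```python
import numpy as np

def cholesky(A):
    """Return upper triangular R with A = R^H R, for hermitian positive definite A."""
    m = A.shape[0]
    R = np.triu(A).astype(complex)      # the algorithm touches only the upper triangle
    for k in range(m):
        for j in range(k + 1, m):
            R[j, j:] -= R[k, j:] * np.conj(R[k, j]) / R[k, k]
        R[k, k:] /= np.sqrt(R[k, k])    # pivot r_kk > 0 for positive definite A
    return R

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B.conj().T @ B + 4.0 * np.eye(4)    # hermitian positive definite test matrix
R = cholesky(A)
print(np.allclose(R.conj().T @ R, A))                  # True
print(np.allclose(R.conj().T, np.linalg.cholesky(A)))  # True: numpy returns the lower factor
```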