MATH347 L17: Determinants
New concepts:
Geometric definition: volume of a hyper-parallelepiped
Determinant calculation rules
Algebraic definition: sum over permutations
Example of a bad algorithm: Cramer's rule
Geometric definition
Definition. The determinant of a square matrix $A \in \mathbb{R}^{m \times m}$ is a real number $\det(A)$ giving the (oriented) volume of the parallelepiped spanned by the matrix column vectors.
Determinants of $2 \times 2$, $3 \times 3$ size matrices
Computation of a determinant with $m = 2$
$$\det A = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}$$
Computation of a determinant with $m = 3$
$$\det A = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}$$
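The two closed-form rules can be sketched directly in code (a minimal illustration; the helper names `det2`, `det3` are not from the lecture):

```python
def det2(a):
    """Determinant of a 2x2 matrix a (list of rows): a11*a22 - a12*a21."""
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def det3(a):
    """Determinant of a 3x3 matrix via the six-term (Sarrus) formula."""
    return (a[0][0] * a[1][1] * a[2][2]
          + a[0][1] * a[1][2] * a[2][0]
          + a[0][2] * a[1][0] * a[2][1]
          - a[0][2] * a[1][1] * a[2][0]
          - a[0][0] * a[1][2] * a[2][1]
          - a[0][1] * a[1][0] * a[2][2])

print(det2([[1, 2], [3, 4]]))                    # -2
print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```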
Where do these determinant computation rules come from? Three viewpoints
Geometry viewpoint: determinants express parallelepiped volumes
Algebra viewpoint: determinants are computed from all possible products that can be formed from choosing a factor from each row and each column
Function viewpoint: the determinant is a function satisfying certain properties (see Textbook Definition 3.2.1).
Determinant in 2D gives area of parallelogram
In two dimensions a “parallelepiped” becomes a parallelogram with area given as
$$\text{Area} = (\text{base}) \times (\text{height}) = b\,h$$
Take $\boldsymbol{a}_1$ as the base, with length $b = \lVert \boldsymbol{a}_1 \rVert$. Vector $\boldsymbol{a}_1$ is at angle $\theta_1$ to the $x_1$-axis, $\boldsymbol{a}_2$ is at angle $\theta_2$ to the $x_1$-axis, and the angle between $\boldsymbol{a}_1$, $\boldsymbol{a}_2$ is $\theta = \theta_2 - \theta_1$. The height has length
$$h = \lVert \boldsymbol{a}_2 \rVert \sin\theta = \lVert \boldsymbol{a}_2 \rVert \left( \sin\theta_2 \cos\theta_1 - \cos\theta_2 \sin\theta_1 \right)$$
Use $\cos\theta_1 = a_{11}/\lVert \boldsymbol{a}_1 \rVert$, $\sin\theta_1 = a_{21}/\lVert \boldsymbol{a}_1 \rVert$, $\cos\theta_2 = a_{12}/\lVert \boldsymbol{a}_2 \rVert$, $\sin\theta_2 = a_{22}/\lVert \boldsymbol{a}_2 \rVert$, to obtain
$$\text{Area} = b\,h = \lVert \boldsymbol{a}_1 \rVert\, \lVert \boldsymbol{a}_2 \rVert\, \frac{a_{22}a_{11} - a_{12}a_{21}}{\lVert \boldsymbol{a}_1 \rVert\, \lVert \boldsymbol{a}_2 \rVert} = a_{11}a_{22} - a_{12}a_{21} = \det A$$
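The base-times-height derivation can be checked numerically (a sketch with illustrative column vectors, not values from the lecture):

```python
import math

# Columns a1, a2 of a 2x2 matrix (illustrative values).
a1 = (3.0, 1.0)
a2 = (1.0, 2.0)

# Determinant: a11*a22 - a12*a21.
det = a1[0] * a2[1] - a1[1] * a2[0]

# Area as base * height, using the angles to the x1-axis.
base = math.hypot(*a1)
theta1 = math.atan2(a1[1], a1[0])
theta2 = math.atan2(a2[1], a2[0])
height = math.hypot(*a2) * math.sin(theta2 - theta1)
area = base * height

print(det, area)  # both equal 5.0
```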
Determinant in 3D gives volume of parallelepiped
$A = [\,\boldsymbol{a}_1 \; \boldsymbol{a}_2 \; \boldsymbol{a}_3\,] \in \mathbb{R}^{3 \times 3}$, $\boldsymbol{a}_1, \boldsymbol{a}_2, \boldsymbol{a}_3 \in \mathbb{R}^3$
The volume is (area of base) $\times$ (height) and given as the value of the determinant
$$V = \det A = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$$
Determinants measure mapping deformation of unit hypercube
Recall that $A \in \mathbb{R}^{m \times m}$ defines a linear mapping $\boldsymbol{f}: \mathbb{R}^m \to \mathbb{R}^m$, $\boldsymbol{f}(\boldsymbol{x}) = A\boldsymbol{x}$
Observe that $A\boldsymbol{e}_j = \boldsymbol{a}_j$, so the unit hypercube spanned by $\boldsymbol{e}_1, \dots, \boldsymbol{e}_m$ is mapped to the hyperparallelepiped spanned by the columns $\boldsymbol{a}_1, \dots, \boldsymbol{a}_m$
Since $\det I = 1$, interpret the above to mean that $\det A$ measures the ratio of the volume of the hyperparallelepiped spanned by the columns of $A$ w.r.t. the unit hypercube
From above, observe that the determinant can be used to “measure” both a matrix and a linear mapping.
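The volume-ratio interpretation can be verified in 2D: map the unit square through $\boldsymbol{x} \mapsto A\boldsymbol{x}$ and compare its image area to $\det A$ (a sketch with an illustrative matrix; the shoelace formula used for the area is not from the lecture):

```python
# Illustrative 2x2 matrix; its columns are the images of e1, e2.
A = [[2.0, 1.0],
     [0.0, 3.0]]
a1 = (A[0][0], A[1][0])
a2 = (A[0][1], A[1][1])

detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Image of the unit square: the parallelogram 0, a1, a1+a2, a2.
# Its area via the shoelace formula.
pts = [(0.0, 0.0), a1, (a1[0] + a2[0], a1[1] + a2[1]), a2]
area = 0.5 * abs(sum(pts[i][0] * pts[(i + 1) % 4][1]
                   - pts[(i + 1) % 4][0] * pts[i][1] for i in range(4)))

print(detA, area)  # 6.0 6.0 -- unit square (area 1) stretched by det A
```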
Determinant calculations
The geometric interpretation of a determinant as an oriented volume is useful in establishing rules for calculation with determinants:
Determinant of a matrix with repeated columns is zero (since two edges of the parallelepiped are identical). Example for $m = 3$ with $\boldsymbol{a}_1 = \boldsymbol{a}_2$:
$$\begin{vmatrix} a_{11} & a_{11} & a_{13} \\ a_{21} & a_{21} & a_{23} \\ a_{31} & a_{31} & a_{33} \end{vmatrix} = 0$$
This is more easily seen using the column notation $\det\,[\,\boldsymbol{a}_1 \; \boldsymbol{a}_1 \; \boldsymbol{a}_3\,] = 0$
Determinant of a matrix with linearly dependent columns is zero (since one edge lies in the 'hyperplane' formed by all the others)
Determinant calculation rules
Separating sums in a column (similar for rows). With $\boldsymbol{b}_1 = \boldsymbol{a}_1 + \boldsymbol{c}_1$:
$$\det\,[\,\boldsymbol{a}_1 + \boldsymbol{c}_1 \; \boldsymbol{a}_2 \; \cdots \; \boldsymbol{a}_m\,] = \det\,[\,\boldsymbol{a}_1 \; \boldsymbol{a}_2 \; \cdots \; \boldsymbol{a}_m\,] + \det\,[\,\boldsymbol{c}_1 \; \boldsymbol{a}_2 \; \cdots \; \boldsymbol{a}_m\,]$$
Scalar product in a column (similar for rows). With $\boldsymbol{b}_1 = \alpha \boldsymbol{a}_1$:
$$\det\,[\,\alpha \boldsymbol{a}_1 \; \boldsymbol{a}_2 \; \cdots \; \boldsymbol{a}_m\,] = \alpha \det\,[\,\boldsymbol{a}_1 \; \boldsymbol{a}_2 \; \cdots \; \boldsymbol{a}_m\,]$$
Linear combinations of columns (similar for rows). With $\boldsymbol{b}_1 = \boldsymbol{a}_1 + \alpha \boldsymbol{a}_2$:
$$\det\,[\,\boldsymbol{a}_1 + \alpha \boldsymbol{a}_2 \; \boldsymbol{a}_2 \; \cdots \; \boldsymbol{a}_m\,] = \det\,[\,\boldsymbol{a}_1 \; \boldsymbol{a}_2 \; \cdots \; \boldsymbol{a}_m\,]$$
Swapping two columns (or rows) changes the determinant sign:
$$\det\,[\,\boldsymbol{a}_2 \; \boldsymbol{a}_1 \; \cdots \; \boldsymbol{a}_m\,] = -\det\,[\,\boldsymbol{a}_1 \; \boldsymbol{a}_2 \; \cdots \; \boldsymbol{a}_m\,]$$
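The column rules can be spot-checked numerically on a $2 \times 2$ example (a sketch; matrices and the helper `det2` are illustrative, not from the lecture):

```python
def det2(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

A       = [[1.0, 2.0], [3.0, 4.0]]          # det A = -2
swapped = [[2.0, 1.0], [4.0, 3.0]]          # columns swapped
scaled  = [[5.0, 2.0], [15.0, 4.0]]         # column 1 multiplied by 5
combo   = [[1.0, 2.0 + 7 * 1.0],
           [3.0, 4.0 + 7 * 3.0]]            # column 2 += 7 * column 1

print(det2(A))        # -2.0
print(det2(swapped))  #  2.0  (sign flips)
print(det2(scaled))   # -10.0 (scales by 5)
print(det2(combo))    # -2.0  (unchanged)
```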
Minors and cofactors
For $A \in \mathbb{R}^{m \times m}$, the $(i,j)$-minor of $A$, denoted $M_{ij}$, is the determinant of size $m - 1$ obtained by deleting row $i$ and column $j$ of $A$
The $(i,j)$-cofactor of $A$ is $C_{ij} = (-1)^{i+j} M_{ij}$
Minors and cofactors are useful in determinant calculations
Determinant expansion
A determinant of size $m$ can be expressed as a sum of determinants of size $m - 1$ by expansion along a row or column, e.g., along the first row:
$$\det A = a_{11} M_{11} - a_{12} M_{12} + \cdots + (-1)^{1+m} a_{1m} M_{1m}$$
Expressed as cofactors:
$$\det A = a_{11} C_{11} + a_{12} C_{12} + \cdots + a_{1m} C_{1m}$$
In general, expansion along row $i$ or column $j$:
$$\det A = \sum_{j=1}^{m} a_{ij} C_{ij} = \sum_{i=1}^{m} a_{ij} C_{ij}$$
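Cofactor expansion translates directly into a recursive procedure (a minimal sketch, expanding along the first row; the name `det_cofactor` is illustrative):

```python
def det_cofactor(a):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # (1, j+1)-minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 2], [3, 4]]))               # -2
print(det_cofactor([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))  # 24
```

Note the recursion mirrors the $m! $-term growth: each size-$m$ call makes $m$ calls of size $m-1$, which is why expansion alone is not a practical algorithm for large $m$.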
Determinant calculations by Gaussian elimination
Determinant rule
$$\det\,[\,\boldsymbol{a}_1 + \alpha \boldsymbol{a}_2 \; \boldsymbol{a}_2 \; \cdots \; \boldsymbol{a}_m\,] = \det\,[\,\boldsymbol{a}_1 \; \boldsymbol{a}_2 \; \cdots \; \boldsymbol{a}_m\,]$$
means that Gaussian elimination operations can be used to compute $\det A$
Approach:
carry out linear combinations of rows to reduce $A$ to triangular form $U$
The determinant of a triangular matrix is simply the product of its diagonal elements, due to cofactor expansion:
$$\det U = u_{11} u_{22} \cdots u_{mm}$$
The above leads to $\mathcal{O}(m^3)$ FLOPs to compute a determinant
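The elimination approach can be sketched as follows (an illustration, not the lecture's code; partial pivoting is added for robustness, with each row swap flipping the sign per the column-swap rule):

```python
def det_ge(a):
    """Determinant via Gaussian elimination to triangular form, O(n^3)."""
    a = [row[:] for row in a]              # work on a copy
    n = len(a)
    sign = 1.0
    for k in range(n):
        # Partial pivoting: largest |entry| in column k, rows k..n-1.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if a[p][k] == 0.0:
            return 0.0                     # singular: det = 0
        if p != k:
            a[k], a[p] = a[p], a[k]        # row swap flips the sign
            sign = -sign
        for i in range(k + 1, n):          # eliminate below the pivot;
            m = a[i][k] / a[k][k]          # row combinations leave det unchanged
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
    d = sign
    for k in range(n):
        d *= a[k][k]                       # product of the diagonal of U
    return d

print(det_ge([[0.0, 2.0], [3.0, 4.0]]))   # -6.0
```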
Determinant of transpose, matrix product, non-full rank matrices
$\det A^{T} = \det A$. Proof: after presentation of SVD
$\det (AB) = \det A \, \det B$. Proof: after presentation of SVD
For $A \in \mathbb{R}^{m \times m}$, if $\operatorname{rank}(A) < m$ then $\det A = 0$
Motivation: some of the column vectors are linearly dependent, so the parallelepiped they span has zero volume
Algebraic definition of determinant
The formal definition of a determinant
$$\det A = \sum_{\sigma \in S_m} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{m\sigma(m)}$$
(a sum over all $m!$ permutations of the columns) requires $\mathcal{O}(m \cdot m!)$ operations, a number that rapidly increases with $m$
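The permutation-sum definition can be written out directly, mainly to see its cost (a sketch; `sgn` counts inversions to get the permutation sign, and the names are illustrative):

```python
from itertools import permutations

def sgn(p):
    """Sign of a permutation p via its inversion count."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det_leibniz(a):
    """Determinant as the sum over all n! permutations."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = sgn(p)
        for i in range(n):
            term *= a[i][p[i]]    # one factor from each row and column
        total += term
    return total

print(det_leibniz([[1, 2], [3, 4]]))  # -2
```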
A more economical way to evaluate a determinant is to use row and column combinations to create zeros and then reduce the size of the determinant, an algorithm reminiscent of Gaussian elimination for systems
Example, with zeros created in the first column:
$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ 0 & b_{22} & b_{23} \\ 0 & b_{32} & b_{33} \end{vmatrix} = a_{11} \begin{vmatrix} b_{22} & b_{23} \\ b_{32} & b_{33} \end{vmatrix}$$
The first equality comes from linear combinations of rows, i.e., row 1 is added to row 2, and row 1 multiplied by 2 is added to row 3. These linear combinations maintain the value of the determinant. The second equality comes from expansion along the first column
Cramer's rule to solve $A\boldsymbol{x} = \boldsymbol{b}$
We've seen examples of “good” algorithms to solve a linear system $A\boldsymbol{x} = \boldsymbol{b}$ ($A \in \mathbb{R}^{m \times m}$, $\boldsymbol{b} \in \mathbb{R}^m$), e.g., Gaussian elimination
The above find a solution using $\mathcal{O}(m^3)$ FLOPs for the elimination and $\mathcal{O}(m^2)$ FLOPs for the substitution
Consider now an example of a “bad” algorithm, known as Cramer's rule
$$x_i = \frac{\Delta_i}{\Delta}, \quad i = 1, \dots, m$$
$\Delta = \det A$ is the principal determinant of the linear system
$\Delta_i = \det\,[\,\boldsymbol{a}_1 \; \cdots \; \boldsymbol{a}_{i-1} \; \boldsymbol{b} \; \boldsymbol{a}_{i+1} \; \cdots \; \boldsymbol{a}_m\,]$ arises from replacing column $i$ of $A$ by $\boldsymbol{b}$
Why bad? It requires computation of $m + 1$ determinants, each of which costs $\mathcal{O}(m^3)$, leading to $\mathcal{O}(m^4)$ overall
Could be even worse! Before the ubiquity of computers, determinants were often computed by the algebraic definition
There are $m!$ terms in the sum. Each term costs $m - 1$ multiplications, giving $\mathcal{O}(m \cdot m!)$ flops. Using this in Cramer's rule gives an incredibly large number of operations for even small $m$.
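For completeness, here is what the "bad" algorithm looks like for $m = 2$ (an illustrative sketch; the names `det2`, `cramer2` are not from the lecture):

```python
def det2(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def cramer2(A, b):
    """Solve a 2x2 system by Cramer's rule: x_i = Delta_i / Delta."""
    d = det2(A)                                  # principal determinant
    A1 = [[b[0], A[0][1]], [b[1], A[1][1]]]      # column 1 replaced by b
    A2 = [[A[0][0], b[0]], [A[1][0], b[1]]]      # column 2 replaced by b
    return [det2(A1) / d, det2(A2) / d]

A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
print(cramer2(A, b))  # [1.0, 3.0]
```

For $m = 2$ the cost is harmless; the point of the slide is that the same pattern applied at general $m$ requires $m + 1$ determinants, which is where the $\mathcal{O}(m^4)$ (or worse) cost comes from.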