Suppose now that the system $A\mathbf{x} = \mathbf{b}$ admits a unique solution. How do we find it? We are especially interested in constructing a general procedure that will work no matter what the size of $A$ might be. This means we seek an algorithm that precisely specifies the steps that lead to the solution, and that we can program a computing device to carry out automatically. One such algorithm is Gaussian elimination.
Consider the system
The idea is to combine equations so that each successive equation contains one fewer unknown. Ask: by what number should the first equation be multiplied in order to eliminate $x_1$ from the sum of equation 1 and equation 2? This number is called a Gaussian multiplier, in this case $m_{21} = -a_{21}/a_{11}$. Repeat the question for eliminating $x_1$ from the third equation, with multiplier $m_{31} = -a_{31}/a_{11}$.
Now ask: by what number should the (updated) second equation be multiplied to eliminate $x_2$ from the sum of the second and third equations? The multiplier $m_{32}$ is computed in the same way from the updated coefficients.
Starting from the last equation we can now find $x_3$, substitute it into the second equation to obtain $x_2$, and finally substitute both into the first equation to obtain $x_1$.
The above operations only involve the coefficients. It is therefore more compact to work with what is known as the "bordered matrix", the coefficient matrix augmented by the right-hand side column.
Once the above triangular form has been obtained, the solution is found by back substitution, in which we seek to form the identity matrix in the first 3 columns; the solution is then obtained in the last column.
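The coefficients of the original example are not reproduced above, so the following small system, chosen purely for illustration, shows the procedure on the bordered matrix (multipliers $m_{21} = -2$, $m_{31} = -1$, then $m_{32} = -1$):

$$\left(\begin{array}{ccc|c} 1 & 2 & 1 & 5 \\ 2 & 5 & 4 & 13 \\ 1 & 3 & 5 & 12 \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & 2 & 1 & 5 \\ 0 & 1 & 2 & 3 \\ 0 & 1 & 4 & 7 \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & 2 & 1 & 5 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 2 & 4 \end{array}\right).$$

Back substitution then gives $x_3 = 2$, $x_2 = 3 - 2 \cdot 2 = -1$, $x_1 = 5 - 2\cdot(-1) - 2 = 5$.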
We have introduced Gaussian elimination as a procedure to solve the linear system $A\mathbf{x} = \mathbf{b}$ ("find the coordinates of the vector $\mathbf{b}$ in terms of the column vectors of the matrix $A$").
We now reinterpret Gaussian elimination as a sequence of matrix multiplications applied to $A$ to obtain a simpler, upper triangular form.
Consider the system $A\mathbf{x} = \mathbf{b}$ with
We ask if there is a matrix $M_1$ that could be multiplied with $A$ to produce a result with zeros under the main diagonal in the first column. First, gain insight by considering multiplication by the identity matrix, which leaves $A$ unchanged, $IA = A$.
In the first stage of Gaussian elimination, the first row remains unchanged, so we deduce that $M_1$ should have the same first row as the identity matrix.
Next, recall how the Gaussian multipliers were determined: find the number by which to multiply the first row so that, when added to the second and third rows, zeros are obtained in the first column. This leads to the form

$$M_1 = \begin{pmatrix} 1 & 0 & 0 \\ \square & 1 & 0 \\ \square & 0 & 1 \end{pmatrix}.$$

Finally, identify the missing entries with the Gaussian multipliers to determine

$$M_1 = \begin{pmatrix} 1 & 0 & 0 \\ m_{21} & 1 & 0 \\ m_{31} & 0 & 1 \end{pmatrix}, \quad m_{21} = -\frac{a_{21}}{a_{11}}, \; m_{31} = -\frac{a_{31}}{a_{11}}.$$
Verify by carrying out the matrix multiplication
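In generic $3 \times 3$ form the verification reads

$$M_1 A = \begin{pmatrix} 1 & 0 & 0 \\ m_{21} & 1 & 0 \\ m_{31} & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} + m_{21} a_{12} & a_{23} + m_{21} a_{13} \\ 0 & a_{32} + m_{31} a_{12} & a_{33} + m_{31} a_{13} \end{pmatrix},$$

where the zeros in the first column follow from the choice $m_{21} = -a_{21}/a_{11}$, $m_{31} = -a_{31}/a_{11}$.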
Repeat the above reasoning to come up with a second matrix $M_2$ that forms a zero under the main diagonal in the second column:

$$M_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & m_{32} & 1 \end{pmatrix}.$$
We have obtained a matrix $U = M_2 M_1 A$ with zero entries under the main diagonal (an upper triangular matrix) by a sequence of matrix multiplications.
From the above, we assume that we can form a sequence of multiplier matrices $M_1, M_2, \dots, M_{n-1}$ such that the result

$$M_{n-1} \cdots M_2 M_1 A = U$$

is an upper triangular matrix.
Recall the basic operation in row echelon reduction: constructing a linear combination of rows to form zeros beneath the main diagonal, e.g., replacing row $i$ by $\text{row}_i + m_{ik}\,\text{row}_k$ to zero out the entry in column $k$.
This can be stated as a matrix multiplication operation, $A^{(k+1)} = M_k A^{(k)}$,
with multipliers $m_{ik} = -a^{(k)}_{ik} / a^{(k)}_{kk}$, $i = k+1, \dots, n$, where $A^{(k)}$ denotes the matrix obtained after step $k$ of row echelon reduction (or, equivalently, Gaussian elimination), $A^{(1)} = A$. The matrix $M_k$ is called a Gaussian multiplier matrix.
For $A$ nonsingular, the successive steps in row echelon reduction (or Gaussian elimination) correspond to successive multiplications on the left by Gaussian multiplier matrices,

$$M_{n-1} \cdots M_2 M_1 A = U.$$
The inverse of a Gaussian multiplier matrix is obtained by simply flipping the signs of its below-diagonal entries: $M_k^{-1}$ is identical to $M_k$ except that each multiplier $m_{ik}$ is replaced by $-m_{ik}$.
From $M_{n-1} \cdots M_2 M_1 A = U$ we obtain $A = M_1^{-1} M_2^{-1} \cdots M_{n-1}^{-1} U$.
Due to the simple form of the multiplier matrices, the product $L = M_1^{-1} M_2^{-1} \cdots M_{n-1}^{-1}$ is easily obtained: the negated multipliers simply occupy their respective positions below the unit diagonal.
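For the $3 \times 3$ case this merging is explicit; note that no "fill-in" occurs when the inverse factors are multiplied in this order:

$$M_1^{-1} M_2^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ -m_{21} & 1 & 0 \\ -m_{31} & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -m_{32} & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ -m_{21} & 1 & 0 \\ -m_{31} & -m_{32} & 1 \end{pmatrix} = L.$$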
We will show that this is indeed possible if $A\mathbf{x} = \mathbf{b}$ admits a unique solution. Furthermore, the product of lower triangular matrices is lower triangular, and the inverse of a lower triangular matrix is lower triangular (the same applies for upper triangular matrices). Introduce the notation

$$L = M_1^{-1} M_2^{-1} \cdots M_{n-1}^{-1},$$

a lower triangular matrix by the properties just stated, and obtain

$$M_{n-1} \cdots M_2 M_1 A = U \;\Rightarrow\; A = M_1^{-1} M_2^{-1} \cdots M_{n-1}^{-1} U,$$

or

$$A = LU.$$
The above result permits a basic insight into Gaussian elimination: the procedure depends on "factoring" the matrix $A$ into two "simpler" matrices, $A = LU$. The idea of representing a matrix as a product of simpler matrices is fundamental to linear algebra, and we will come across it repeatedly.
For now, the factorization allows us to devise the following general approach to solving $A\mathbf{x} = \mathbf{b}$:

1. Find the factorization $A = LU$.
2. Insert the factorization into $A\mathbf{x} = \mathbf{b}$ to obtain $LU\mathbf{x} = L\mathbf{y} = \mathbf{b}$, where the notation $\mathbf{y} = U\mathbf{x}$ has been introduced. The system $L\mathbf{y} = \mathbf{b}$ is easy to solve by forward substitution to find $\mathbf{y}$ for given $\mathbf{b}$.
3. Finally, find $\mathbf{x}$ by backward substitution, as the solution of $U\mathbf{x} = \mathbf{y}$.
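As a concrete illustration of these three steps, the following sketch uses SciPy's `lu_factor` and `lu_solve` (which implement exactly this factor-then-substitute workflow, with partial pivoting) on the small illustrative system from earlier; none of this code is from the original notes.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Illustrative system (arbitrary values, not from the notes)
A = np.array([[1., 2., 1.],
              [2., 5., 4.],
              [1., 3., 5.]])
b = np.array([5., 13., 12.])

lu, piv = lu_factor(A)        # step 1: factor A = LU (with row pivoting)
x = lu_solve((lu, piv), b)    # steps 2 and 3: forward, then backward substitution
print(x)                      # [ 5. -1.  2.], matches np.linalg.solve(A, b)
```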
Algorithm Gauss elimination without pivoting

    for k = 1 to n-1
        for i = k+1 to n
            m = -a(i,k) / a(k,k)            (Gaussian multiplier)
            for j = k to n
                a(i,j) = a(i,j) + m a(k,j)
            b(i) = b(i) + m b(k)
    for i = n downto 1
        for j = i+1 to n
            b(i) = b(i) - a(i,j) x(j)
        x(i) = b(i) / a(i,i)
    return x
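A minimal Python/NumPy transcription of this pseudocode (a sketch with our own naming, assuming, as the algorithm's title warns, that every pivot $a_{kk}$ encountered is nonzero):

```python
import numpy as np

def gauss_no_pivot(A, b):
    """Solve Ax = b by Gauss elimination without pivoting.

    Assumes A is square and every pivot a_kk encountered is nonzero.
    """
    A = np.array(A, dtype=float)               # work on copies
    b = np.array(b, dtype=float)
    n = b.size
    for k in range(n - 1):                     # forward elimination
        for i in range(k + 1, n):
            m = -A[i, k] / A[k, k]             # Gaussian multiplier
            A[i, k:] += m * A[k, k:]           # zero out a_ik, update the row
            b[i] += m * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):             # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

On the illustrative system above, `gauss_no_pivot(A, b)` returns `[ 5., -1., 2.]`.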
Algorithm Gauss elimination with partial pivoting

    p(i) = i, i = 1 to n                    (initialize row permutation vector)
    for k = 1 to n-1
        piv = |a(p(k),k)|, s = k
        for i = k+1 to n
            mag = |a(p(i),k)|
            if mag > piv then piv = mag, s = i
        if piv = 0 then break("Singular matrix")
        swap p(k), p(s)
        for i = k+1 to n
            m = -a(p(i),k) / a(p(k),k)
            for j = k to n
                a(p(i),j) = a(p(i),j) + m a(p(k),j)
            b(p(i)) = b(p(i)) + m b(p(k))
    for i = n downto 1
        for j = i+1 to n
            b(p(i)) = b(p(i)) - a(p(i),j) x(j)
        x(i) = b(p(i)) / a(p(i),i)
    return x
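And a corresponding transcription with the row permutation vector (again a sketch of our own naming, with a small tolerance standing in for the exact zero test):

```python
import numpy as np

def gauss_partial_pivot(A, b, tol=1e-12):
    """Solve Ax = b by Gauss elimination with partial pivoting,
    recording row exchanges in a permutation vector p."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = b.size
    p = np.arange(n)                                   # row permutation vector
    for k in range(n - 1):
        s = k + int(np.argmax(np.abs(A[p[k:], k])))    # largest pivot candidate
        if abs(A[p[s], k]) < tol:
            raise ValueError("Singular matrix")
        p[k], p[s] = p[s], p[k]                        # swap pointers, not rows
        for i in range(k + 1, n):
            m = -A[p[i], k] / A[p[k], k]               # Gaussian multiplier
            A[p[i], k:] += m * A[p[k], k:]
            b[p[i]] += m * b[p[k]]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):                     # back substitution
        x[i] = (b[p[i]] - A[p[i], i + 1:] @ x[i + 1:]) / A[p[i], i]
    return x
```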
Given $A\mathbf{x} = \mathbf{b}$, the same factor-then-substitute strategy applies to other matrix factorizations:

| Factorization | Solution steps |
|---|---|
| Singular value decomposition, $A = U\Sigma V^T$ | $\mathbf{x} = V\,\Sigma^{-1} U^T \mathbf{b}$ |
| Gram-Schmidt, $A = QR$ | $\mathbf{y} = Q^T \mathbf{b}$, then back substitution to find $\mathbf{x}$ from $R\mathbf{x} = \mathbf{y}$ |
| Lower-upper, $A = LU$ | forward substitution to find $\mathbf{y}$ from $L\mathbf{y} = \mathbf{b}$, then back substitution to find $\mathbf{x}$ from $U\mathbf{x} = \mathbf{y}$ |
| Transformation of coordinates, $A = S\Lambda S^{-1}$ | $\mathbf{x} = S\,\Lambda^{-1} S^{-1} \mathbf{b}$ |
By analogy to the simple scalar equation $ax = b$ with solution $x = a^{-1}b$ when $a \neq 0$, we are interested in writing the solution to a linear system as $\mathbf{x} = A^{-1}\mathbf{b}$ for $A \in \mathbb{R}^{n \times n}$, $\mathbf{b} \in \mathbb{R}^n$. Recall that solving $A\mathbf{x} = \mathbf{b}$ corresponds to expressing the vector $\mathbf{b}$ as a linear combination of the columns of $A$. This can only be done if the columns of $A$ form a basis for $\mathbb{R}^n$, in which case we say that $A$ is non-singular.
Computation of the inverse can be carried out by repeated use of Gauss elimination. Denote the inverse by $X = A^{-1}$ for a moment and consider the inverse matrix property $AX = I$. Introducing the column notation $X = (\mathbf{x}_1\ \mathbf{x}_2\ \dots\ \mathbf{x}_n)$, $I = (\mathbf{e}_1\ \mathbf{e}_2\ \dots\ \mathbf{e}_n)$ leads to

$$A\,(\mathbf{x}_1\ \mathbf{x}_2\ \dots\ \mathbf{x}_n) = (\mathbf{e}_1\ \mathbf{e}_2\ \dots\ \mathbf{e}_n),$$

and identification of each column in the equality states

$$A\mathbf{x}_j = \mathbf{e}_j, \quad j = 1, 2, \dots, n,$$

with $\mathbf{e}_j$ the column unit vector with zero components everywhere except for a $1$ in row $j$. To find the inverse we need to simultaneously solve the $n$ linear systems given above.
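To illustrate, a single factorization can be reused for all $n$ right-hand sides $\mathbf{e}_j$; the sketch below does this with SciPy's LU routines on an arbitrary stand-in matrix (not the lecture's example):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1., 2.],
              [3., 4.]])       # illustrative matrix
n = A.shape[0]

lu, piv = lu_factor(A)         # factor once...
cols = [lu_solve((lu, piv), np.eye(n)[:, j]) for j in range(n)]  # ...solve A x_j = e_j
X = np.column_stack(cols)      # X = inverse of A
print(X)                       # [[-2.   1. ] [ 1.5 -0.5]], agrees with np.linalg.inv(A)
```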
Gauss-Jordan algorithm example. Consider a matrix $A$ whose inverse we seek. To find the inverse we solve the systems $A\mathbf{x}_j = \mathbf{e}_j$. This can be done simultaneously by working with the matrix $A$ bordered by the identity matrix, $(A \mid I)$.
Now carry out operations involving linear row combinations and permutations to bring the left side to the identity matrix,

$$(A \mid I) \rightarrow (I \mid A^{-1}),$$

to obtain the inverse $A^{-1}$ in the right-hand block.
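Since the matrix of the example is not reproduced above, here is the procedure on a small illustrative $2 \times 2$ matrix:

$$\left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 3 & 4 & 0 & 1 \end{array}\right) \rightarrow \left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & -2 & -3 & 1 \end{array}\right) \rightarrow \left(\begin{array}{cc|cc} 1 & 0 & -2 & 1 \\ 0 & 1 & 3/2 & -1/2 \end{array}\right),$$

so that $A^{-1} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix}$, which is readily checked by verifying $AA^{-1} = I$.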
For $A$ a square matrix, the determinant $\det(A)$ is the oriented volume enclosed by the column vectors of $A$ (a parallelepiped).
Geometric interpretation of determinants
Determinant calculation rules
Algebraic definition of a determinant:

$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)},$$

giving the (oriented) volume of the parallelepiped spanned by matrix column vectors.
Computation of a determinant with $n = 2$:

$$\det A = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11} a_{22} - a_{12} a_{21}.$$

Computation of a determinant with $n = 3$:

$$\det A = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}.$$
Where do these determinant computation rules come from? Two viewpoints
Geometric viewpoint: determinants express parallelepiped volumes
Algebraic viewpoint: determinants are computed from all possible products that can be formed from choosing a factor from each row and each column
In two dimensions a "parallelepiped" becomes a parallelogram with area given as

$$\text{Area} = (\text{base}) \times (\text{height}) = \|\mathbf{a}\|\, \|\mathbf{b}\| \sin\theta.$$

Take $\mathbf{a}$ as the base, with length $\|\mathbf{a}\|$. Vector $\mathbf{a}$ is at angle $\theta_a$ to the $x$-axis, vector $\mathbf{b}$ is at angle $\theta_b$ to the $x$-axis, and the angle between $\mathbf{a}$, $\mathbf{b}$ is $\theta = \theta_b - \theta_a$. The height has length $h = \|\mathbf{b}\| \sin\theta$.
Use $\sin(\theta_b - \theta_a) = \sin\theta_b \cos\theta_a - \cos\theta_b \sin\theta_a$, $\cos\theta_a = a_1/\|\mathbf{a}\|$, $\sin\theta_a = a_2/\|\mathbf{a}\|$, $\cos\theta_b = b_1/\|\mathbf{b}\|$, $\sin\theta_b = b_2/\|\mathbf{b}\|$ to obtain

$$\text{Area} = \|\mathbf{a}\|\,\|\mathbf{b}\| \left( \frac{b_2}{\|\mathbf{b}\|} \frac{a_1}{\|\mathbf{a}\|} - \frac{b_1}{\|\mathbf{b}\|} \frac{a_2}{\|\mathbf{a}\|} \right) = a_1 b_2 - a_2 b_1 = \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}.$$
The geometric interpretation of a determinant as an oriented volume is useful in establishing rules for calculation with determinants:
Determinant of a matrix with repeated columns is zero (since two edges of the parallelepiped are identical). Example for $n = 2$:

$$\begin{vmatrix} a_1 & a_1 \\ a_2 & a_2 \end{vmatrix} = a_1 a_2 - a_2 a_1 = 0.$$

This is more easily seen using the column notation: $\det(\mathbf{a}\ \ \mathbf{a}) = 0$.
Determinant of a matrix with linearly dependent columns is zero (since one edge lies in the 'hyperplane' formed by all the others).
Separating sums in a column (similar for rows):

$$\det(\mathbf{a}_1, \dots, \mathbf{b} + \mathbf{c}, \dots, \mathbf{a}_n) = \det(\mathbf{a}_1, \dots, \mathbf{b}, \dots, \mathbf{a}_n) + \det(\mathbf{a}_1, \dots, \mathbf{c}, \dots, \mathbf{a}_n),$$

with the sum $\mathbf{b} + \mathbf{c}$ occupying the same column position in all three determinants.
Scalar product in a column (similar for rows):

$$\det(\mathbf{a}_1, \dots, \alpha\mathbf{b}, \dots, \mathbf{a}_n) = \alpha \det(\mathbf{a}_1, \dots, \mathbf{b}, \dots, \mathbf{a}_n),$$

with $\alpha$ a scalar.
Linear combinations of columns (similar for rows):

$$\det(\mathbf{a}_1, \dots, \mathbf{a}_j + \alpha\,\mathbf{a}_k, \dots, \mathbf{a}_n) = \det(\mathbf{a}_1, \dots, \mathbf{a}_j, \dots, \mathbf{a}_n),$$

with $k \neq j$: adding a multiple of one column to another does not change the determinant.
A determinant of size $n$ can be expressed as a sum of $n$ determinants of size $n-1$ by expansion along a row or column, e.g., along row $i$:

$$\det A = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} \det A_{ij},$$

with $A_{ij}$ the $(n-1)\times(n-1)$ matrix obtained by deleting row $i$ and column $j$ of $A$.
The formal definition of a determinant,

$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} \cdots a_{n\sigma(n)},$$

requires on the order of $n \cdot n!$ operations, a number that rapidly increases with $n$.
A more economical way to compute a determinant is to use row and column combinations to create zeros and then reduce the size of the determinant, an algorithm reminiscent of Gauss elimination for systems.
Example:
The first equality comes from linear combinations of rows, i.e. row 1 is added to row 2, and row 1 multiplied by 2 is added to row 3. These linear combinations maintain the value of the determinant. The second equality comes from expansion along the first column
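The entries of the original example are not reproduced above; one matrix consistent with the stated row operations is, for instance,

$$\begin{vmatrix} 1 & 2 & 3 \\ -1 & 0 & 1 \\ -2 & 1 & 5 \end{vmatrix} = \begin{vmatrix} 1 & 2 & 3 \\ 0 & 2 & 4 \\ 0 & 5 & 11 \end{vmatrix} = 1 \cdot \begin{vmatrix} 2 & 4 \\ 5 & 11 \end{vmatrix} = 22 - 20 = 2.$$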
Consider $\mathbf{u}, \mathbf{v} \in \mathbb{R}^3$. We've introduced the idea of a scalar product,

$$\mathbf{u} \cdot \mathbf{v} = \mathbf{u}^T \mathbf{v} = u_1 v_1 + u_2 v_2 + u_3 v_3,$$

in which from two vectors one obtains a scalar.
We've also introduced the idea of an exterior product,

$$\mathbf{u}\,\mathbf{v}^T = \begin{pmatrix} u_1 v_1 & u_1 v_2 & u_1 v_3 \\ u_2 v_1 & u_2 v_2 & u_2 v_3 \\ u_3 v_1 & u_3 v_2 & u_3 v_3 \end{pmatrix},$$

in which a matrix is obtained from two vectors.
Another product of two vectors is also useful: the cross product, most conveniently expressed in determinant-like form,

$$\mathbf{u} \times \mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} = (u_2 v_3 - u_3 v_2)\,\mathbf{i} + (u_3 v_1 - u_1 v_3)\,\mathbf{j} + (u_1 v_2 - u_2 v_1)\,\mathbf{k}.$$
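A quick numerical check of this expansion with NumPy (values arbitrary):

```python
import numpy as np

u = np.array([1., 2., 3.])
v = np.array([4., 5., 6.])
w = np.cross(u, v)      # components (u2 v3 - u3 v2, u3 v1 - u1 v3, u1 v2 - u2 v1)
print(w)                # [-3.  6. -3.]
print(u @ w, v @ w)     # both 0.0: the cross product is orthogonal to u and v
```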