Suppose now that $Ax = b$ admits a unique solution, with $A \in \mathbb{R}^{m \times m}$ and $x, b \in \mathbb{R}^m$, known as a square system of linear equations, in which the number of unknowns equals the number of equations. How can we find it? We are especially interested in constructing a general procedure that will work no matter what the size $m$ might be. This means we seek an algorithm that precisely specifies the steps leading to the solution, and that we can program a computing device to carry out automatically. One such algorithm is Gaussian elimination.
Consider a system of three equations in the three unknowns $x_1, x_2, x_3$.
The idea is to combine equations so that each successive equation contains one fewer unknown. Ask: by what number should the first equation be multiplied in order to eliminate $x_1$ from the sum of equation 1 and equation 2? This number is called a Gaussian multiplier. Repeat the question to eliminate $x_1$ from the sum of the first and third equations, which gives a second multiplier.
Now, ask: by what number should the second equation be multiplied to eliminate $x_2$ from the sum of the second and third equations? This gives a third multiplier.
Starting from the last equation we can now find $x_3$, substitute it into the second equation to obtain $x_2$, and finally substitute both into the first equation to obtain $x_1$.
The above operations involve only the coefficients. A more compact notation is therefore to work with what is known as the "bordered matrix" $(A \,|\, b)$, the coefficient matrix $A$ with the right-hand side $b$ appended as an extra column.
Once the above triangular form has been obtained, the solution is found by back substitution, in which we seek to form the identity matrix in the first 3 columns; the solution is then obtained in the last column.
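To make the procedure concrete, here is a minimal Python/NumPy sketch of forward elimination and back substitution on a bordered matrix. The particular $3 \times 3$ system is an illustrative stand-in chosen for this sketch, not the example originally worked above:

```python
import numpy as np

# Illustrative 3x3 system (hypothetical coefficients, chosen for demonstration).
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 1.0],
              [2.0, 2.0, 3.0]])
b = np.array([4.0, 8.0, 7.0])

Ab = np.hstack([A, b[:, None]])        # bordered matrix (A | b)

# Forward elimination: form zeros under the main diagonal, column by column.
for k in range(2):
    for i in range(k + 1, 3):
        t = Ab[i, k] / Ab[k, k]        # Gaussian multiplier
        Ab[i, k:] -= t * Ab[k, k:]

# Back substitution on the resulting triangular form.
x = np.zeros(3)
for i in range(2, -1, -1):
    x[i] = (Ab[i, 3] - Ab[i, i + 1:3] @ x[i + 1:]) / Ab[i, i]

print(x)    # solution of A x = b; here [1. 1. 1.]
```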
We have introduced Gaussian elimination as a procedure to solve the linear system $Ax = b$ ("find the coordinates of the vector $b$ in terms of the column vectors of the matrix $A$"). We now reinterpret Gaussian elimination as a sequence of matrix multiplications applied to $A$ to obtain a simpler, upper triangular form.
Consider again a $3 \times 3$ system $Ax = b$.
We ask if there is a matrix $L_1$ that could be multiplied with $A$ to produce a result with zeros under the main diagonal in the first column. First, gain insight by considering multiplication by the identity matrix, which leaves $A$ unchanged: $IA = A$.
In the first stage of Gaussian elimination, the first row remains unchanged, so we deduce that $L_1$ should have the same first row as the identity matrix.
Next, recall the way Gaussian multipliers were determined: find the number by which to multiply the first row so that, when added to the second and third rows, a zero is obtained in the first column. This leads to the form
$$L_1 = \begin{pmatrix} 1 & 0 & 0 \\ \times & 1 & 0 \\ \times & 0 & 1 \end{pmatrix},$$
with the entries marked $\times$ still to be determined.
Finally, identify the missing entries with the (negated) Gaussian multipliers to determine
$$L_1 = \begin{pmatrix} 1 & 0 & 0 \\ -a_{21}/a_{11} & 1 & 0 \\ -a_{31}/a_{11} & 0 & 1 \end{pmatrix}.$$
Verify by carrying out the matrix multiplication $L_1 A$: the result indeed has zeros under the main diagonal in the first column.
Repeat the above reasoning to come up with a second matrix $L_2$ that forms a zero under the main diagonal in the second column.
We have obtained a matrix with zero entries under the main diagonal (an upper triangular matrix) by a sequence of matrix multiplications, $U = L_2 L_1 A$.
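A short NumPy sketch of this construction, reusing the illustrative matrix from the sketch above (the names L1 and L2 are our notation):

```python
# First multiplier matrix: identity, but with the negated Gaussian multipliers
# placed under the diagonal in column 0.
L1 = np.eye(3)
L1[1, 0] = -A[1, 0] / A[0, 0]
L1[2, 0] = -A[2, 0] / A[0, 0]
A1 = L1 @ A            # zeros appear under the diagonal in column 0

# Second multiplier matrix: same construction applied to column 1 of L1 @ A.
L2 = np.eye(3)
L2[2, 1] = -A1[2, 1] / A1[1, 1]
U = L2 @ A1            # upper triangular

print(np.triu(U) - U)  # zero matrix: U is indeed upper triangular
```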
From the above, we assume that we can form a sequence of multiplier matrices $L_1, L_2, \dots, L_{m-1}$ such that the result is an upper triangular matrix $U$.
Recall the basic operation in row echelon reduction: constructing a linear combination of rows to form zeros beneath the main diagonal, e.g. replacing row $i$ by row $i$ minus $l_{ik}$ times row $k$, where $l_{ik} = a_{ik}/a_{kk}$ is chosen to zero the entry in column $k$.
This can be stated as a matrix multiplication operation, with
$$L_k = I - l_k e_k^T,$$
with $l_k = (0, \dots, 0, l_{k+1,k}, \dots, l_{m,k})^T$ and $e_k$ the $k$-th column unit vector. The matrix $L_k$ applied at step $k$ of row echelon reduction (or, equivalently, Gaussian elimination) is called a Gaussian multiplier matrix.
For $A$ nonsingular, the successive steps in row echelon reduction (or Gaussian elimination) correspond to successive multiplications on the left by Gaussian multiplier matrices,
$$L_{m-1} \cdots L_2 L_1 A = U.$$
The inverse of a Gaussian multiplier is
$$L_k^{-1} = I + l_k e_k^T,$$
obtained simply by flipping the signs of the multipliers (note that $e_k^T l_k = 0$).
From $L_{m-1} \cdots L_2 L_1 A = U$ obtain
$$A = L_1^{-1} L_2^{-1} \cdots L_{m-1}^{-1} U.$$
Due to the simple form of $L_k^{-1}$, the matrix product $L_1^{-1} L_2^{-1} \cdots L_{m-1}^{-1}$ is easily obtained as
$$L_1^{-1} L_2^{-1} \cdots L_{m-1}^{-1} = I + \sum_{k=1}^{m-1} l_k e_k^T,$$
the unit lower triangular matrix with the multipliers $l_{ik}$ placed below the main diagonal.
We will show that this is indeed possible if $Ax = b$ admits a unique solution. Furthermore, the product of lower triangular matrices is lower triangular, and the inverse of a lower triangular matrix is lower triangular (the same applies for upper triangular matrices). Introduce the notation
$$L = L_1^{-1} L_2^{-1} \cdots L_{m-1}^{-1}$$
and obtain
$$L^{-1} A = U,$$
or
$$A = LU.$$
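Continuing the numerical sketch above, the factorization can be checked directly; the inverses of the multiplier matrices only flip the signs of the multipliers:

```python
L = np.linalg.inv(L1) @ np.linalg.inv(L2)   # unit lower triangular
print(L)                                    # multipliers appear below the diagonal
print(np.allclose(A, L @ U))                # True: A = L U
```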
The above result permits a basic insight into Gaussian elimination: the procedure depends on "factoring" the matrix $A$ into two "simpler" matrices $L$, $U$. The idea of representing a matrix as a product of simpler matrices is fundamental to linear algebra, and we will come across it repeatedly.
For now, the factorization $A = LU$ allows us to devise the following general approach to solving $Ax = b$:
1. Find the factorization $A = LU$.
2. Insert the factorization into $Ax = b$ to obtain $LUx = b$, or $Ly = b$, where the notation $y = Ux$ has been introduced. The system $Ly = b$ is easy to solve by forward substitution to find $y$ for given $b$.
3. Finally, find $x$ by backward substitution as the solution of $Ux = y$.
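A minimal sketch of the two triangular solves; the function names are ours, and any nonsingular triangular factors will do:

```python
import numpy as np

def forward_substitute(L, b):
    """Solve L y = b for lower triangular L (step 2, forward substitution)."""
    m = L.shape[0]
    y = np.zeros(m)
    for i in range(m):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def backward_substitute(U, y):
    """Solve U x = y for upper triangular U (step 3, backward substitution)."""
    m = U.shape[0]
    x = np.zeros(m)
    for i in range(m - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

With the factors from the sketch above, `backward_substitute(U, forward_substitute(L, b))` returns the solution of $Ax = b$.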
Algorithm (Gauss elimination without pivoting)

for $k = 1$ to $m-1$
    for $i = k+1$ to $m$
        $t = a_{ik} / a_{kk}$
        for $j = k+1$ to $m$
            $a_{ij} = a_{ij} - t \, a_{kj}$
        $b_i = b_i - t \, b_k$
for $i = m$ downto 1
    $x_i = b_i$
    for $j = i+1$ to $m$
        $x_i = x_i - a_{ij} x_j$
    $x_i = x_i / a_{ii}$
return $x$
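A direct Python transcription of the algorithm above, assuming (as the pseudocode does) that no zero pivot is encountered:

```python
import numpy as np

def gauss_no_pivot(A, b):
    """Gauss elimination without pivoting; works on copies of A and b."""
    a = A.astype(float).copy()
    b = b.astype(float).copy()
    m = a.shape[0]
    # Forward elimination.
    for k in range(m - 1):
        for i in range(k + 1, m):
            t = a[i, k] / a[k, k]          # Gaussian multiplier
            a[i, k:] -= t * a[k, k:]
            b[i] -= t * b[k]
    # Back substitution.
    x = np.zeros(m)
    for i in range(m - 1, -1, -1):
        x[i] = (b[i] - a[i, i + 1:] @ x[i + 1:]) / a[i, i]
    return x
```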
Algorithm (Gauss elimination with partial pivoting)

$p_i = i$, $i = 1, \dots, m$ (initialize row permutation vector)
for $k = 1$ to $m-1$
    piv $= |a_{p_k,k}|$
    for $i = k+1$ to $m$
        mag $= |a_{p_i,k}|$
        if mag $>$ piv then (swap $p_k \leftrightarrow p_i$; piv $=$ mag)
    if piv $< \varepsilon$ then break("Singular matrix")
    for $i = k+1$ to $m$
        $t = a_{p_i,k} / a_{p_k,k}$; $\; a_{p_i,j} = a_{p_i,j} - t \, a_{p_k,j}$, $j = k, \dots, m$; $\; b_{p_i} = b_{p_i} - t \, b_{p_k}$
for $i = m$ downto 1
    $x_i = b_{p_i}$
    for $j = i+1$ to $m$
        $x_i = x_i - a_{p_i,j} x_j$
    $x_i = x_i / a_{p_i,i}$
return $x$
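A Python transcription of the pivoting algorithm, keeping the row permutation vector p instead of physically swapping rows; the tolerance eps standing in for $\varepsilon$ is our choice:

```python
import numpy as np

def gauss_partial_pivot(A, b, eps=1e-12):
    """Gauss elimination with partial pivoting via a row permutation vector."""
    a = A.astype(float).copy()
    b = b.astype(float).copy()
    m = a.shape[0]
    p = np.arange(m)                      # initialize row permutation vector
    for k in range(m - 1):
        # Find the largest-magnitude pivot candidate in column k.
        piv, ip = abs(a[p[k], k]), k
        for i in range(k + 1, m):
            mag = abs(a[p[i], k])
            if mag > piv:
                piv, ip = mag, i
        p[k], p[ip] = p[ip], p[k]         # swap permutation entries
        if piv < eps:
            raise ValueError("Singular matrix")
        for i in range(k + 1, m):
            t = a[p[i], k] / a[p[k], k]
            a[p[i], k:] -= t * a[p[k], k:]
            b[p[i]] -= t * b[p[k]]
    # Back substitution in the permuted row order.
    x = np.zeros(m)
    for i in range(m - 1, -1, -1):
        x[i] = (b[p[i]] - a[p[i], i + 1:] @ x[i + 1:]) / a[p[i], i]
    return x
```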
Given a nonsingular $A \in \mathbb{R}^{m \times m}$ and $b \in \mathbb{R}^m$, the common factorizations of $A$ and the associated solution procedures for $Ax = b$ are:

| Factorization | Form | Solution of $Ax = b$ |
|---|---|---|
| Singular value decomposition | $A = U \Sigma V^T$ | $x = V \Sigma^{-1} U^T b$ |
| Gram-Schmidt | $A = QR$ | $Rx = Q^T b$ (back sub to find $x$) |
| Lower-upper | $A = LU$ | $Ly = b$ (forward sub to find $y$), $Ux = y$ (back sub to find $x$) |
| Transformation of coordinates | $A = S \Lambda S^{-1}$ | $x = S \Lambda^{-1} S^{-1} b$ |
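As a hedged sketch (assuming NumPy and SciPy are available), each route in the table can be exercised on the same illustrative system:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 1.0],
              [2.0, 2.0, 3.0]])
b = np.array([4.0, 8.0, 7.0])

# Lower-upper: scipy returns A = P L U; solve L y = P^T b, then U x = y.
P, L, U = lu(A)
x_lu = np.linalg.solve(U, np.linalg.solve(L, P.T @ b))

# Gram-Schmidt (QR): A = Q R, then R x = Q^T b by back substitution.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Singular value decomposition: A = U_s diag(s) V^T, x = V diag(1/s) U_s^T b.
U_s, s, Vt = np.linalg.svd(A)
x_svd = Vt.T @ ((U_s.T @ b) / s)

print(np.allclose(x_lu, x_qr) and np.allclose(x_qr, x_svd))   # True
```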
By analogy to the simple scalar equation $ax = b$ with solution $x = a^{-1} b$ when $a \neq 0$, we are interested in writing the solution to a linear system as $x = A^{-1} b$ for $A \in \mathbb{R}^{m \times m}$, $b \in \mathbb{R}^m$. Recall that solving $Ax = b$ corresponds to expressing the vector $b$ as a linear combination of the columns of $A$. This can only be done if the columns of $A$ form a basis for $\mathbb{R}^m$, in which case we say that $A$ is non-singular.
Computation of the inverse $A^{-1}$ can be carried out by repeated use of Gauss elimination. Denote the inverse by $X = A^{-1}$ for a moment and consider the inverse matrix property $AX = I$. Introducing the column notation $X = (X_1 \; X_2 \; \cdots \; X_m)$, $I = (e_1 \; e_2 \; \cdots \; e_m)$ leads to
$$A (X_1 \; X_2 \; \cdots \; X_m) = (e_1 \; e_2 \; \cdots \; e_m),$$
and identification of each column in the equality states
$$A X_j = e_j, \quad j = 1, 2, \dots, m,$$
with $e_j$ the column unit vector with zero components everywhere except for a 1 in row $j$. To find the inverse we need to simultaneously solve the $m$ linear systems given above.
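A minimal sketch of this column-by-column computation, with a generic solver standing in for one Gauss elimination per column:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 1.0],
              [2.0, 2.0, 3.0]])
m = A.shape[0]

# Solve A X_j = e_j for each column X_j of the inverse.
X = np.zeros((m, m))
for j in range(m):
    e = np.zeros(m)
    e[j] = 1.0                       # unit vector e_j
    X[:, j] = np.linalg.solve(A, e)  # one linear system per column

print(np.allclose(A @ X, np.eye(m)))   # True: X = A^{-1}
```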
Gauss-Jordan algorithm example. Consider a nonsingular matrix $A \in \mathbb{R}^{3 \times 3}$. To find the inverse we solve the systems $A X_j = e_j$, $j = 1, 2, 3$. This can be done simultaneously by working with the matrix $A$ bordered by the identity matrix, $(A \,|\, I)$.
Now carry out operations involving linear row combinations and permutations to bring the left side to the identity matrix $I$, to obtain
$$(I \,|\, A^{-1}).$$
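A sketch of the Gauss-Jordan procedure on the bordered matrix $(A \,|\, I)$; the function name and the pivoting choice are ours:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Reduce (A | I) by row combinations and permutations until the left
    block is the identity; the right block is then A^{-1}."""
    m = A.shape[0]
    B = np.hstack([A.astype(float), np.eye(m)])
    for k in range(m):
        ip = k + np.argmax(np.abs(B[k:, k]))   # row permutation (pivoting)
        B[[k, ip]] = B[[ip, k]]
        B[k] /= B[k, k]                        # scale pivot row to get a 1
        for i in range(m):
            if i != k:
                B[i] -= B[i, k] * B[k]         # zero out the rest of column k
    return B[:, m:]

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 1.0],
              [2.0, 2.0, 3.0]])
print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(3)))   # True
```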