MATH347: Mid-term Review
Vectors and matrices
Linear combinations
Matrix operations
Linear transformations
Linear systems, Gaussian elimination, elementary row operations, row-echelon form
Matrix inverse, Gauss-Jordan
Vector space, linear dependence, linear independence
Matrix vector spaces
Vector space basis, dimension
Vector space sum, direct sum. Fundamental theorem of linear algebra
Gram-Schmidt orthonormalization
Factorizations: \(L U\), \(Q R\)

Vectors and matrices
A vector is a grouping of \(m\) scalars, \(\boldsymbol{v}= \left[ \begin{array}{llll} v_1 & v_2 & \ldots & v_m \end{array} \right]^T \in S^m\)
An \(m\) by \(n\) matrix is a grouping of \(n\) vectors, \(\boldsymbol{A}= \left[ \begin{array}{llll} \boldsymbol{a}_1 & \boldsymbol{a}_2 & \ldots & \boldsymbol{a}_n \end{array} \right]\),
where each vector has \(m\) scalar components, \(\boldsymbol{a}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_n \in S^m\), usually \(\mathbb{R}^m\)
Matrix components \(a_{i j}\), with indices taking values \(i \in \{ 1, \ldots, m \}\), \(j \in \{ 1, \ldots, n \}\)

Linear combinations
Linear combination. Let \(\alpha, \beta \in S\), \(\boldsymbol{u}, \boldsymbol{v} \in \mathcal{V}\). Define a linear combination of two vectors by \(\alpha \boldsymbol{u}+ \beta \boldsymbol{v} \in \mathcal{V}\)
Linear combination of \(n\) vectors: \(x_1 \boldsymbol{a}_1 + x_2 \boldsymbol{a}_2 + \cdots + x_n \boldsymbol{a}_n\), with \(x_1, \ldots, x_n \in S\), \(\boldsymbol{a}_1, \ldots, \boldsymbol{a}_n \in \mathcal{V}\)

Matrix operations: addition, scaling
The sum of matrices \(\boldsymbol{A}, \boldsymbol{B} \in \mathbb{R}^{m \times n}\)
is the matrix \(\boldsymbol{C}=\boldsymbol{A}+\boldsymbol{B}\) with components \(c_{i j} = a_{i j} + b_{i j}\)
The scalar multiplication of matrix \(\boldsymbol{A} \in \mathbb{R}^{m \times n}\) by \(\alpha \in \mathbb{R}\) is \(\boldsymbol{B}= \alpha \boldsymbol{A}\), with components \(b_{i j} = \alpha a_{i j}\)
Addition is associative and commutative: \((\boldsymbol{A}+\boldsymbol{B}) +\boldsymbol{C}=\boldsymbol{A}+ (\boldsymbol{B}+\boldsymbol{C})\), \(\boldsymbol{A}+\boldsymbol{B}=\boldsymbol{B}+\boldsymbol{A}\)

Matrix operations: matrix-vector, matrix-matrix multiplication
Matrix-vector multiplication = linear combination of columns (\(\boldsymbol{b} \in \mathbb{R}^m, \boldsymbol{A} \in \mathbb{R}^{m \times n}, \boldsymbol{x} \in \mathbb{R}^n\)), \(\boldsymbol{b}=\boldsymbol{A}\boldsymbol{x}= x_1 \boldsymbol{a}_1 + x_2 \boldsymbol{a}_2 + \cdots + x_n \boldsymbol{a}_n\)
Matrix-matrix multiplication \(\boldsymbol{B}=\boldsymbol{A}\boldsymbol{X}\) (\(\boldsymbol{B} \in \mathbb{R}^{m \times p}, \boldsymbol{A} \in \mathbb{R}^{m \times n}, \boldsymbol{X} \in \mathbb{R}^{n \times p}\)), \(\boldsymbol{B}= \left[ \begin{array}{lll} \boldsymbol{A}\boldsymbol{x}_1 & \ldots & \boldsymbol{A}\boldsymbol{x}_p \end{array} \right]\)
Vectors are single-column matrices, \(\boldsymbol{b} \in \mathbb{R}^m\) is shorthand for \(\boldsymbol{b} \in \mathbb{R}^{m \times 1}\)
Matrix multiplication is associative, \((\boldsymbol{A}\boldsymbol{B}) \boldsymbol{C}=\boldsymbol{A} (\boldsymbol{B}\boldsymbol{C})\)
Matrix multiplication is not commutative, i.e., there exist \(\boldsymbol{A}, \boldsymbol{B}\) such that \(\boldsymbol{A}\boldsymbol{B} \neq \boldsymbol{B}\boldsymbol{A}\)
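Both facts are easy to check numerically; a minimal Matlab/Octave sketch (the example matrices are arbitrary):

```matlab
% Matrix-vector product as a linear combination of columns
A = [1 2; 3 4; 5 6]; x = [2; -1];
b1 = A*x;                          % built-in product
b2 = x(1)*A(:,1) + x(2)*A(:,2);    % same result, column by column
disp(norm(b1 - b2))                % 0
% Non-commutativity: A*B and B*A differ in general
A = [1 1; 0 1]; B = [1 0; 1 1];
disp(A*B - B*A)                    % nonzero matrix
```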

Row organization - transposition
Grouping of \(m\) scalars into a column vector \(\boldsymbol{v}\) is an arbitrary choice; the same scalars can be grouped into a row
Introduce transposition to switch between the two groupings: \(\boldsymbol{v}^T\) is the row form of the column vector \(\boldsymbol{v}\)
A preferred type of grouping is useful in calculations
Preferred grouping: column vectors, such that \(\boldsymbol{v} \in S^m\) is understood to signify a column vector. When explicitly required, write \(\boldsymbol{v} \in S^{m \times 1}\)
Linear combination of row vectors: \(\alpha \boldsymbol{u}^T + \beta \boldsymbol{v}^T = (\alpha \boldsymbol{u}+ \beta \boldsymbol{v})^T\)

“Rows over columns” matrix multiplication rule
Matrix-vector multiplication has been introduced as a linear combination of the columns of \(\boldsymbol{A}\)
Matrix-vector multiplication can also be seen as a “rows over columns” rule: component \(i\) of \(\boldsymbol{A}\boldsymbol{x}\) is the scalar product of row \(i\) of \(\boldsymbol{A}\) with \(\boldsymbol{x}\)
“Rows over columns” also works for components, \(b_i = \sum_{j = 1}^n a_{i j} x_j\)

Matrix operations: scalar product, norm, transpose
Scalar product: \(\boldsymbol{u}, \boldsymbol{v} \in \mathbb{R}^m\), \(\boldsymbol{u}^T \boldsymbol{v}=\boldsymbol{u} \cdot \boldsymbol{v}= u_1 v_1 + u_2 v_2 + \cdots + u_m v_m\)
2-norm: \(\boldsymbol{u} \in \mathbb{R}^m\), \(\| \boldsymbol{u} \| = (\boldsymbol{u}^T \boldsymbol{u})^{1 / 2}\)
Angle between two vectors: \(\cos \theta =\boldsymbol{u}^T \boldsymbol{v}/ (\| \boldsymbol{u} \| \| \boldsymbol{v} \|)\)
Transpose: swap between rows and columns, \(\boldsymbol{A}= \left[ \begin{array}{lll} \boldsymbol{a}_1 & \ldots & \boldsymbol{a}_n \end{array} \right]\), \(\boldsymbol{A}^T = \left[ \begin{array}{c} \boldsymbol{a}_1^T \\ \vdots \\ \boldsymbol{a}_n^T \end{array} \right]\)
Linear combination of the rows of \(\boldsymbol{A}\) as a vector-matrix product, \(\boldsymbol{y}^T \boldsymbol{A}\)
Transposition of product (multiple linear combinations): \((\boldsymbol{A}\boldsymbol{B})^T =\boldsymbol{B}^T \boldsymbol{A}^T\)
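A minimal Matlab/Octave check of these operations (arbitrary example vectors and matrices):

```matlab
% Scalar product, 2-norm, angle, and the transpose-of-product identity
u = [1; 2; 2]; v = [3; 0; 4];
disp(u'*v)                              % scalar product u^T v = 11
disp(sqrt(u'*u))                        % 2-norm of u = 3
theta = acos((u'*v)/(norm(u)*norm(v))); % angle between u and v
A = [1 2; 3 4]; B = [0 1; 1 0];
disp(norm((A*B)' - B'*A'))              % 0: (AB)^T = B^T A^T
```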

Matrix operations: block operations
Matrices often exhibit some intrinsic structure, for example
\(\boldsymbol{A}, \boldsymbol{D} \in \mathbb{R}^{2 m \times 2 n}\), \(\boldsymbol{B}, \boldsymbol{C}, \boldsymbol{E}, \boldsymbol{F} \in \mathbb{R}^{m \times n}\) (other structures possible)
Addition over compatible block dimensions
Multiplication, “rows over columns” for compatible block dimensions (sketch below)
Matrix block transposition
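A minimal Matlab/Octave sketch of the block “rows over columns” rule (arbitrary 2-by-2 blocks):

```matlab
% The (1,1) block of M*N equals A*E + B*G, by "rows over columns" on blocks
A = [1 0; 0 1]; B = [1 1; 1 1]; C = [2 0; 0 2]; D = [0 1; 1 0];
E = [1 2; 3 4]; F = [0 1; 1 0]; G = [1 0; 0 1]; H = [2 2; 2 2];
M = [A B; C D]; N = [E F; G H];
P = M*N;
disp(norm(P(1:2,1:2) - (A*E + B*G)))   % 0
```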

Linear transformations
\(T : \mathbb{R}^n \rightarrow \mathbb{R}^m\), mapping of vectors in \(\mathbb{R}^n\) to vectors in \(\mathbb{R}^m\)
Of special interest: mappings that preserve linear combinations, \(T (\alpha \boldsymbol{u}+ \beta \boldsymbol{v}) = \alpha T (\boldsymbol{u}) + \beta T (\boldsymbol{v})\), said to be linear
Matrix \(\boldsymbol{B}\) of a linear transformation: columns are the images of the basis vectors, \(\boldsymbol{B}= \left[ \begin{array}{lll} T (\boldsymbol{e}_1) & \ldots & T (\boldsymbol{e}_n) \end{array} \right] \in \mathbb{R}^{m \times n}\), so that \(T (\boldsymbol{x}) =\boldsymbol{B}\boldsymbol{x}\)
Consider two successive linear mappings \(S : \mathbb{R}^p \rightarrow \mathbb{R}^n\), \(T : \mathbb{R}^n \rightarrow \mathbb{R}^m\)
Linear mapping composition, \(U : \mathbb{R}^p \rightarrow \mathbb{R}^m\), \(U = T \circ S\)
Matrix of composition is matrix product of individual mappings.
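A quick numerical check that the matrix of \(U = T \circ S\) is the product of the two matrices (arbitrary example mappings):

```matlab
% Composition: applying S then T equals multiplying by the product matrix
S = [1 2; 0 1; 1 0];        % matrix of S : R^2 -> R^3
T = [1 0 1; 0 1 0];         % matrix of T : R^3 -> R^2
x = [1; -1];
disp(T*(S*x) - (T*S)*x)     % zero vector: U = T o S has matrix T*S
```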

Common linear transformations
Stretching, \(T : \mathbb{R}^m \rightarrow \mathbb{R}^m\), \(T (\boldsymbol{v}) =\boldsymbol{A}\boldsymbol{v}\), with \(\boldsymbol{A}\) diagonal
Orthogonal projection of \(\boldsymbol{v} \in \mathbb{R}^m\) along \(\boldsymbol{u} \in \mathbb{R}^m\), \(\| \boldsymbol{u} \| = 1\), \(T (\boldsymbol{v}) =\boldsymbol{P}_{\boldsymbol{u}} \boldsymbol{v}\), \(\boldsymbol{P}_{\boldsymbol{u}} =\boldsymbol{u}\boldsymbol{u}^T\)
Reflection across vector \(\boldsymbol{w} \in \mathbb{R}^m\), \(\| \boldsymbol{w} \| = 1\), \(T (\boldsymbol{v}) =\boldsymbol{R}\boldsymbol{v}\), \(\boldsymbol{R}= 2\boldsymbol{w}\boldsymbol{w}^T -\boldsymbol{I}\)
Rotation in \(\mathbb{R}^2\) by angle \(\theta\), \(T (\boldsymbol{v}) =\boldsymbol{R}_{\theta} \boldsymbol{v}\), \(\boldsymbol{R}_{\theta} = \left[ \begin{array}{cc} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{array} \right]\)
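The transformation matrices above applied to an arbitrary vector; a minimal Matlab/Octave sketch:

```matlab
% Projection, reflection, rotation acting on v in R^2
u = [1; 0];                                 % unit vector for projection
Pu = u*u';                                  % P_u = u*u'
w = [1; 1]/sqrt(2);                         % unit mirror direction
R = 2*(w*w') - eye(2);                      % reflection across w
th = pi/2;
Rot = [cos(th) -sin(th); sin(th) cos(th)];  % rotation by th
v = [1; 2];
disp([Pu*v, R*v, Rot*v])                    % [1;0], [2;1], [-2;1]
```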

Linear systems, Gaussian elimination
Component form, \(a_{i 1} x_1 + a_{i 2} x_2 + \cdots + a_{i n} x_n = b_i\), \(i = 1, \ldots, m\)
Matrix form \(\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}\), \(\boldsymbol{A} \in \mathbb{R}^{m \times n}\), \(\boldsymbol{x} \in \mathbb{R}^n\), \(\boldsymbol{b} \in \mathbb{R}^m\)
Simple systems:
\(\boldsymbol{A}\) diagonal
\(\boldsymbol{A}\) triangular
Solve \(\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}\) by bringing it to simpler form. Gaussian elimination: bring to triangular form

Gaussian elimination = sequence of elementary row operations
Work with the bordered matrix \(\left[ \begin{array}{ll} \boldsymbol{A} & \boldsymbol{b} \end{array} \right]\)
Can obtain no solution, a unique solution, or infinitely many solutions
Analyze by bringing to reduced row echelon form (example below):
All zero rows are below non-zero rows
First non-zero entry on a row is called the leading entry
In each non-zero row, the leading entry is to the left of lower leading entries
Each leading entry equals 1 and is the only non-zero entry in its column
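A minimal Matlab/Octave illustration on an arbitrary singular system; `rref` is the built-in reduced row echelon form:

```matlab
% Reduced row echelon form of the bordered matrix [A b]
A = [1 2 3; 2 4 6; 1 0 1]; b = [1; 2; 1];  % second row = 2 * first row
rref([A b])            % a zero row appears: infinitely many solutions
```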

Gaussian multiplier matrix and its inverse
Step \(k\) in Gaussian elimination can be seen as multiplication with the Gaussian multiplier matrix \(\boldsymbol{L}_k =\boldsymbol{I}-\boldsymbol{l}_k \boldsymbol{e}_k^T\), where \(\boldsymbol{l}_k\) holds the column-\(k\) multipliers and \(\boldsymbol{e}_k\) is column \(k\) of \(\boldsymbol{I}\); the inverse simply flips the sign, \(\boldsymbol{L}_k^{- 1} =\boldsymbol{I}+\boldsymbol{l}_k \boldsymbol{e}_k^T\) (sketch below)
\(\boldsymbol{A} \in \mathbb{R}^{m \times m}\) is invertible if there exists \(\boldsymbol{X} \in \mathbb{R}^{m \times m}\) such that \(\boldsymbol{A}\boldsymbol{X}=\boldsymbol{X}\boldsymbol{A}=\boldsymbol{I}\)
Notation: \(\boldsymbol{X}=\boldsymbol{A}^{- 1}\) is the inverse of \(\boldsymbol{A}\)
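A sketch of step 1 on an arbitrary example, assuming the standard form \(\boldsymbol{L}_1 =\boldsymbol{I}-\boldsymbol{l}_1 \boldsymbol{e}_1^T\):

```matlab
% First Gaussian multiplier matrix and its sign-flipped inverse
A  = [2 1 1; 4 3 3; 8 7 9];
l1 = [0; A(2,1)/A(1,1); A(3,1)/A(1,1)];   % multipliers for column 1
e1 = [1; 0; 0];
L1 = eye(3) - l1*e1';
disp(L1*A)                                % zeros below the (1,1) pivot
disp(inv(L1) - (eye(3) + l1*e1'))         % 0: inverse flips the sign
```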

Matrix inverse: existence, computation, operations
When does a matrix inverse exist? For \(\boldsymbol{A} \in \mathbb{R}^{m \times m}\), the following are equivalent:
\(\boldsymbol{A}\) invertible
\(\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}\) has a unique solution for all \(\boldsymbol{b} \in \mathbb{R}^m\)
\(\boldsymbol{A}\boldsymbol{x}=\boldsymbol{0}\) has a unique solution
The reduced row echelon form of \(\boldsymbol{A}\) is \(\boldsymbol{I}\)
\(\boldsymbol{A}\) can be written as product of elementary matrices
\(\boldsymbol{X}\) is the inverse if \(\boldsymbol{A}\boldsymbol{X}=\boldsymbol{I}\) or \(\boldsymbol{X}\boldsymbol{A}=\boldsymbol{I}\) (for square matrices, either condition implies the other)
Gauss-Jordan: similar to Gaussian elimination; bring the bordered matrix \(\left[ \begin{array}{ll} \boldsymbol{A} & \boldsymbol{I} \end{array} \right]\) to reduced row echelon form \(\left[ \begin{array}{ll} \boldsymbol{I} & \boldsymbol{A}^{- 1} \end{array} \right]\) (sketch below)
\((\boldsymbol{A}\boldsymbol{B})^{- 1} =\boldsymbol{B}^{- 1} \boldsymbol{A}^{- 1}\), \((\boldsymbol{A}^T)^{- 1} = (\boldsymbol{A}^{- 1})^T\)
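A minimal Matlab/Octave sketch of Gauss-Jordan via `rref` on the bordered matrix (arbitrary invertible example):

```matlab
% Reduce [A I] to [I X]; then X = inv(A)
A = [2 1; 1 3];
GJ = rref([A eye(2)]);
X = GJ(:, 3:4);
disp(norm(A*X - eye(2)))    % 0: X is the inverse of A
```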

Vector spaces, linear dependence
The vectors \(\boldsymbol{a}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_n \in \mathcal{V}\) are linearly dependent if there exist \(n\) scalars \(x_1, \ldots, x_n \in \mathcal{S}\), at least one of which is different from zero, such that \(x_1 \boldsymbol{a}_1 + x_2 \boldsymbol{a}_2 + \cdots + x_n \boldsymbol{a}_n =\boldsymbol{0}\)
The vectors \(\boldsymbol{a}_1, \boldsymbol{a}_2, \ldots, \boldsymbol{a}_n \in \mathcal{V}\) are linearly independent if the only \(n\) scalars \(x_1, \ldots, x_n \in \mathcal{S}\) that satisfy \(x_1 \boldsymbol{a}_1 + x_2 \boldsymbol{a}_2 + \cdots + x_n \boldsymbol{a}_n =\boldsymbol{0}\) are \(x_1 = x_2 = \cdots = x_n = 0\)
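Numerically, linear dependence shows up as a rank deficiency; a minimal Matlab/Octave sketch (arbitrary examples):

```matlab
% Dependent columns: a3 = a1 + a2, so rank < n and null(A) is nontrivial
A = [1 0 1; 0 1 1; 1 1 2];
disp(rank(A))               % 2 < 3: columns are linearly dependent
null(A)                     % a nonzero x with A*x = 0
B = eye(3);
disp(rank(B))               % 3: columns of I are linearly independent
```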

Vector space basis and dimension, matrix subspaces
A set of vectors \(\boldsymbol{u}_1, \ldots, \boldsymbol{u}_n \in \mathcal{V}\) is a basis for vector space \(\mathcal{V}\) if:
\(\boldsymbol{u}_1, \ldots, \boldsymbol{u}_n\) are linearly independent;
\(\operatorname{span} \{ \boldsymbol{u}_1, \ldots, \boldsymbol{u}_n \} =\mathcal{V}\).
The number of vectors \(\boldsymbol{u}_1, \ldots, \boldsymbol{u}_n \in \mathcal{V}\) within a basis is the dimension of the vector space \(\mathcal{V}\).
The column space (or range) of matrix \(\boldsymbol{A} \in \mathbb{R}^{m \times n}\) is the set of vectors reachable by linear combination of the matrix column vectors, \(C (\boldsymbol{A}) = \{ \boldsymbol{b} \in \mathbb{R}^m | \exists \boldsymbol{x} \in \mathbb{R}^n, \boldsymbol{b}=\boldsymbol{A}\boldsymbol{x} \} \subseteq \mathbb{R}^m\)
Left null space, \(N (\boldsymbol{A}^T) = \{ \boldsymbol{y} \in \mathbb{R}^m | \boldsymbol{A}^T \boldsymbol{y}=\boldsymbol{0} \} \subseteq \mathbb{R}^m\), the part of \(\mathbb{R}^m\) not reachable by linear combination of columns of \(\boldsymbol{A}\)
The null space of a matrix \(\boldsymbol{A} \in \mathbb{R}^{m \times n}\) is the set \(N (\boldsymbol{A}) = \{ \boldsymbol{x} \in \mathbb{R}^n | \boldsymbol{A}\boldsymbol{x}=\boldsymbol{0} \} \subseteq \mathbb{R}^n\)
The row space of \(\boldsymbol{A}\) is the column space of its transpose, \(C (\boldsymbol{A}^T) \subseteq \mathbb{R}^n\)
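Matlab's `orth` and `null` return orthonormal bases for these subspaces; a minimal sketch on an arbitrary rank-2 example:

```matlab
% Bases for the four fundamental subspaces of A in R^{3x3}, rank 2
A = [1 2 3; 2 4 6; 1 0 1];
QC = orth(A);       % basis for C(A),   dim 2
QN = null(A);       % basis for N(A),   dim 1
QRow = orth(A');    % basis for C(A^T), dim 2
QLeft = null(A');   % basis for N(A^T), dim 1
disp([size(QC,2) size(QN,2) size(QRow,2) size(QLeft,2)])   % 2 1 2 1
```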

Direct sum, intersection of vector spaces
Given two vector subspaces \((\mathcal{U}, \mathcal{S}, +)\), \((\mathcal{V}, \mathcal{S}, +)\) of the space \((\mathcal{W}, \mathcal{S}, +)\), the sum is the set \(\mathcal{U}+\mathcal{V}= \{ \boldsymbol{u}+\boldsymbol{v} | \boldsymbol{u} \in \mathcal{U}, \boldsymbol{v} \in \mathcal{V} \} .\)
Given two vector subspaces \((\mathcal{U}, \mathcal{S}, +)\), \((\mathcal{V}, \mathcal{S}, +)\) of the space \((\mathcal{W}, \mathcal{S}, +)\), the direct sum is the set \(\mathcal{U} \oplus \mathcal{V}= \{ \boldsymbol{u}+\boldsymbol{v} | \exists !\boldsymbol{u} \in \mathcal{U}, \exists !\boldsymbol{v} \in \mathcal{V} \} .\) (unique decomposition)
Given two vector subspaces \((\mathcal{U}, \mathcal{S}, +)\), \((\mathcal{V}, \mathcal{S}, +)\) of the space \((\mathcal{W}, \mathcal{S}, +)\), the intersection is the set \(\mathcal{U} \cap \mathcal{V}= \{ \boldsymbol{x} | \boldsymbol{x} \in \mathcal{U}, \boldsymbol{x} \in \mathcal{V} \}\)
Two vector subspaces \((\mathcal{U}, \mathcal{S}, +)\), \((\mathcal{V}, \mathcal{S}, +)\) of the space \((\mathcal{W}, \mathcal{S}, +)\) are orthogonal subspaces, denoted \(\mathcal{U} \bot \mathcal{V}\) if \(\boldsymbol{u}^T \boldsymbol{v}= 0\) for any \(\boldsymbol{u} \in \mathcal{U}, \boldsymbol{v} \in \mathcal{V}\).
Two vector subspaces \((\mathcal{U}, \mathcal{S}, +)\), \((\mathcal{V}, \mathcal{S}, +)\) of the space \((\mathcal{W}, \mathcal{S}, +)\) are orthogonal complements, denoted \(\mathcal{U}=\mathcal{V}^{\bot}\), \(\mathcal{V}=\mathcal{U}^{\bot}\) if \(\mathcal{U} \bot \mathcal{V}\) and \(\mathcal{U}+\mathcal{V}=\mathcal{W}\).
Orthogonal complement subspaces form a direct sum: \(\mathcal{U}=\mathcal{V}^{\bot}\), \(\mathcal{V}=\mathcal{U}^{\bot} \Rightarrow \mathcal{U} \oplus \mathcal{V}=\mathcal{W}\)

FTLA
Fundamental theorem of linear algebra: for \(\boldsymbol{A} \in \mathbb{R}^{m \times n}\), the four fundamental subspaces satisfy
\(C (\boldsymbol{A}) \oplus N (\boldsymbol{A}^T) = \mathbb{R}^m\), \(C (\boldsymbol{A}) \bot N (\boldsymbol{A}^T)\)
\(C (\boldsymbol{A}^T) \oplus N (\boldsymbol{A}) = \mathbb{R}^n\), \(C (\boldsymbol{A}^T) \bot N (\boldsymbol{A})\)
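A numerical check of the first statement on the same arbitrary rank-2 example used above:

```matlab
% C(A) and N(A^T) are orthogonal and their dimensions sum to m
A = [1 2 3; 2 4 6; 1 0 1];
Q = orth(A); Z = null(A');
disp(norm(Q'*Z))               % 0: the two subspaces are orthogonal
disp(size(Q,2) + size(Z,2))    % 3 = m: dimensions add up to m
```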

Gram-Schmidt orthonormalization
\(\boldsymbol{A}=\boldsymbol{Q}\boldsymbol{R}\), \(\boldsymbol{Q}\) orthogonal, \(\boldsymbol{R}\) upper triangular
Identify columns on both sides, \(\boldsymbol{a}_k = r_{1 k} \boldsymbol{q}_1 + r_{2 k} \boldsymbol{q}_2 + \cdots + r_{k k} \boldsymbol{q}_k\), to obtain \(r_{j k} =\boldsymbol{q}_j^T \boldsymbol{a}_k\) for \(j < k\) and \(r_{k k} \boldsymbol{q}_k =\boldsymbol{a}_k - \sum_{j = 1}^{k - 1} r_{j k} \boldsymbol{q}_j\), with \(r_{k k}\) chosen so that \(\| \boldsymbol{q}_k \| = 1\)
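These relations translate directly into the classical Gram-Schmidt loop; a minimal sketch assuming \(\boldsymbol{A}\) has full column rank (arbitrary example):

```matlab
% Classical Gram-Schmidt producing A = Q*R
A = [1 1 0; 1 0 1; 0 1 1];
[m, n] = size(A);
Q = zeros(m, n); R = zeros(n, n);
for k = 1:n
    v = A(:, k);
    for j = 1:k-1
        R(j, k) = Q(:, j)'*A(:, k);   % r_jk = q_j^T a_k
        v = v - R(j, k)*Q(:, j);      % remove the q_j component
    end
    R(k, k) = norm(v);
    Q(:, k) = v/R(k, k);              % normalize so ||q_k|| = 1
end
disp(norm(A - Q*R))                   % 0 up to roundoff
```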

Solving \(\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}\) by Gaussian elimination
Gaussian elimination produces a sequence of row-equivalent matrices from \(\boldsymbol{A} \in \mathbb{R}^{m \times m}\)
Step \(k\) produces zeros underneath diagonal position \((k, k)\)
Step \(k\) can be represented as multiplication by the matrix \(\boldsymbol{L}_k\)
All \(m - 1\) steps produce an upper triangular matrix, \(\boldsymbol{U}=\boldsymbol{L}_{m - 1} \cdots \boldsymbol{L}_1 \boldsymbol{A}\), hence \(\boldsymbol{A}=\boldsymbol{L}\boldsymbol{U}\) with \(\boldsymbol{L}=\boldsymbol{L}_1^{- 1} \cdots \boldsymbol{L}_{m - 1}^{- 1}\) lower triangular
With permutations \(\boldsymbol{P}\boldsymbol{A}= \boldsymbol{L}\boldsymbol{U}\) (Matlab [L,U,P]=lu(A), A=P'*L*U)
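The built-in factorization on an arbitrary example:

```matlab
% LU factorization with row permutations
A = [2 1 1; 4 3 3; 8 7 9];
[L, U, P] = lu(A);         % P*A = L*U
disp(norm(P*A - L*U))      % 0 up to roundoff
disp(norm(A - P'*L*U))     % equivalently A = P'*L*U
```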

Matrix formulation of Gaussian elimination
With known \(L U\)-factorization (\(\boldsymbol{P}\boldsymbol{A}=\boldsymbol{L}\boldsymbol{U}\)): \(\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b} \Rightarrow \boldsymbol{P}\boldsymbol{A}\boldsymbol{x}=\boldsymbol{P}\boldsymbol{b} \Rightarrow \boldsymbol{L} (\boldsymbol{U}\boldsymbol{x}) =\boldsymbol{P}\boldsymbol{b}\)
To solve \(\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}\):
Carry out \(L U\)-factorization: \(\boldsymbol{P}^T \boldsymbol{L}\boldsymbol{U}=\boldsymbol{A}\)
Solve \(\boldsymbol{L}\boldsymbol{y}=\boldsymbol{c}=\boldsymbol{P}\boldsymbol{b}\) by forward substitution to find \(\boldsymbol{y}\)
Solve \(\boldsymbol{U}\boldsymbol{x}=\boldsymbol{y}\) by backward substitution
FLOP = floating point operation = one multiplication and one addition
Operation counts: how many FLOPS in each step?
Each step \(\boldsymbol{L}_k \boldsymbol{A}^{(k - 1)}\) costs \((m - k)^2\) FLOPS. Overall, \(\sum_{k = 1}^{m - 1} (m - k)^2 = (m - 1) m (2 m - 1) / 6 \approx m^3 / 3\) FLOPS
Forward substitution step \(k\) costs \(k\) FLOPS; overall \(m (m + 1) / 2 \approx m^2 / 2\)
Backward substitution cost is identical, \(m (m + 1) / 2 \approx m^2 / 2\)
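The same steps in Matlab/Octave (backslash applied to a triangular matrix performs the substitution); arbitrary example system:

```matlab
% Solve A*x = b: factor once, then two triangular substitutions
A = [2 1 1; 4 3 3; 8 7 9]; b = [4; 10; 24];
[L, U, P] = lu(A);     % ~m^3/3 FLOPS
y = L \ (P*b);         % forward substitution,  ~m^2/2 FLOPS
x = U \ y;             % backward substitution, ~m^2/2 FLOPS
disp(norm(A*x - b))    % 0 up to roundoff
```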

Gram-Schmidt as \(\boldsymbol{Q}\boldsymbol{R}\)
Orthonormalization of the columns of \(\boldsymbol{A}\) is also a factorization, \(\boldsymbol{A}=\boldsymbol{Q}\boldsymbol{R}\)
Operation count:
\(r_{j k} =\boldsymbol{q}_j^T \boldsymbol{a}_k\) costs \(m\) FLOPS
There are \(1 + 2 + \cdots + n = n (n + 1) / 2\) components in \(\boldsymbol{R}\); overall cost \(n (n + 1) m / 2 \approx m n^2 / 2\) FLOPS
With permutations \(\boldsymbol{A}\boldsymbol{P}=\boldsymbol{Q}\boldsymbol{R}\) (Matlab [Q,R,P]=qr(A) )

Solving linear systems \(\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}\), \(\boldsymbol{A} \in \mathbb{R}^{m \times m}\), by \(Q R\)
With known \(Q R\)-factorization (\(\boldsymbol{A}=\boldsymbol{Q}\boldsymbol{R}\boldsymbol{P}^T\)): \(\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b} \Rightarrow (\boldsymbol{Q}\boldsymbol{R}\boldsymbol{P}^T) \boldsymbol{x}=\boldsymbol{b} \Rightarrow \boldsymbol{R}\boldsymbol{y}=\boldsymbol{Q}^T \boldsymbol{b}\), with \(\boldsymbol{y}=\boldsymbol{P}^T \boldsymbol{x}\)
To solve \(\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}\):
Carry out \(Q R\)-factorization: \(\boldsymbol{Q}\boldsymbol{R}\boldsymbol{P}^T =\boldsymbol{A}\)
Compute \(\boldsymbol{c}=\boldsymbol{Q}^T \boldsymbol{b}\)
Solve \(\boldsymbol{R}\boldsymbol{y}=\boldsymbol{c}\) by backward substitution
Find \(\boldsymbol{x}=\boldsymbol{P}\boldsymbol{y}\) (undo the permutation; note \(\boldsymbol{y}=\boldsymbol{P}^T \boldsymbol{x}\))
Operation counts: how many FLOPS in each step?
\(Q R\)-factorization \(m^2 (m + 1) / 2 \approx m^3 / 2\)
Compute \(\boldsymbol{c}\), \(m^2\)
Backward substitution \(m (m + 1) / 2 \approx m^2 / 2\)
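The same steps in Matlab/Octave on an arbitrary example system:

```matlab
% Solve A*x = b by QR with column pivoting, A*P = Q*R
A = [2 1 1; 4 3 3; 8 7 9]; b = [4; 10; 24];
[Q, R, P] = qr(A);
c = Q'*b;              % m^2 FLOPS
y = R \ c;             % backward substitution, ~m^2/2 FLOPS
x = P*y;               % undo the column permutation
disp(norm(A*x - b))    % 0 up to roundoff
```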