This assignment is a worksheet of exercises intended as preparation for
the Final Examination. You should:
-
State \(\boldsymbol{P} \in \mathbb{R}^{3 \times 3}\) that permutes
rows (1,2,3) of \(\boldsymbol{A} \in \mathbb{R}^{3 \times 3}\) as
rows (2,3,1) through the product \(\boldsymbol{P}\boldsymbol{A}\).
Solution. Permute rows of the identity matrix to
obtain
\(\displaystyle \boldsymbol{P}= \left[ \begin{array}{lll} 0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \end{array} \right] .\)
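This is easy to check numerically; a minimal sketch in Python (assuming numpy is available):

```python
import numpy as np

# Row i of P @ A should be row sigma(i) of A, with new row order (2, 3, 1).
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
A = np.arange(1, 10).reshape(3, 3)  # any 3x3 test matrix
PA = P @ A
print(PA)
```

Multiplying by a permutation matrix on the left reorders rows; on the right (Ex. 3) it reorders columns.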
-
Find the inverse of matrix \(\boldsymbol{P}\) from Ex. 1.
Solution. \(\boldsymbol{P}\) is an orthogonal
matrix, hence the inverse is given by its transpose
\(\displaystyle \boldsymbol{P}^{- 1} =\boldsymbol{P}^T = \left[ \begin{array}{lll} 0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0 \end{array} \right] .\)
-
State \(\boldsymbol{Q} \in \mathbb{R}^{3 \times 3}\) that permutes
columns (1,2,3) of \(\boldsymbol{A} \in \mathbb{R}^{3 \times 3}\) as
columns (3,1,2) through the product
\(\boldsymbol{A}\boldsymbol{Q}\).
Solution. Permute columns of the identity matrix to
obtain
\(\displaystyle \boldsymbol{Q}= \left[ \begin{array}{lll} 0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \end{array} \right] .\)
-
Find the inverse of matrix \(\boldsymbol{Q}\) from Ex. 3.
Solution. \(\boldsymbol{Q}\) is an orthogonal
matrix, hence the inverse is given by its transpose
\(\displaystyle \boldsymbol{Q}^{- 1} =\boldsymbol{Q}^T = \left[ \begin{array}{lll} 0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0 \end{array} \right] .\)
-
Find the \(L U\) factorization of
\(\displaystyle \boldsymbol{A}= \left[ \begin{array}{lll} 1 & 1 & 1\\ 1 & 2 & 3\\ 1 & 3 & 6 \end{array} \right] .\)
Solution. Stage 1 multiplier matrix operation gives
\(\displaystyle \boldsymbol{L}_1 \boldsymbol{A}= \left[ \begin{array}{lll} 1 & 0 & 0\\ - 1 & 1 & 0\\ - 1 & 0 & 1 \end{array} \right] \left[ \begin{array}{lll} 1 & 1 & 1\\ 1 & 2 & 3\\ 1 & 3 & 6 \end{array} \right] = \left[ \begin{array}{lll} 1 & 1 & 1\\ 0 & 1 & 2\\ 0 & 2 & 5 \end{array} \right]\)
Stage 2 multiplier matrix operation gives
\(\displaystyle \boldsymbol{L}_2 \boldsymbol{L}_1 \boldsymbol{A}= \left[ \begin{array}{lll} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & - 2 & 1 \end{array} \right] \left[ \begin{array}{lll} 1 & 1 & 1\\ 0 & 1 & 2\\ 0 & 2 & 5 \end{array} \right] = \left[ \begin{array}{lll} 1 & 1 & 1\\ 0 & 1 & 2\\ 0 & 0 & 1 \end{array} \right] =\boldsymbol{U}.\)
Multiplication by multiplier matrix inverses:
\(\displaystyle \boldsymbol{A}=\boldsymbol{L}_1^{- 1}
\boldsymbol{L}_2^{- 1}
\boldsymbol{U}=\boldsymbol{L}
\boldsymbol{U}\)
with
\(\displaystyle \boldsymbol{L}=\boldsymbol{L}_1^{- 1} \boldsymbol{L}_2^{- 1} = \left[ \begin{array}{lll} 1 & 0 & 0\\ 1 & 1 & 0\\ 1 & 0 & 1 \end{array} \right] \left[ \begin{array}{lll} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 2 & 1 \end{array} \right] = \left[ \begin{array}{lll} 1 & 0 & 0\\ 1 & 1 & 0\\ 1 & 2 & 1 \end{array} \right] .\)
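The factorization above can be verified by multiplying the factors back together; a sketch in Python (assuming numpy is available):

```python
import numpy as np

# Verify the LU factorization A = L U computed above.
A = np.array([[1., 1., 1.],
              [1., 2., 3.],
              [1., 3., 6.]])
L = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 2., 1.]])
U = np.array([[1., 1., 1.],
              [0., 1., 2.],
              [0., 0., 1.]])
print(np.allclose(L @ U, A))
```

The same check applies to the next exercise with its own L and U.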
-
Find the \(L U\) factorization of
\(\displaystyle \boldsymbol{A}= \left[ \begin{array}{lll} 1 & 1 & 1\\ 1 & 2 & 2\\ 1 & 2 & 3 \end{array} \right] .\)
Solution. Stage 1 multiplier matrix operation gives
\(\displaystyle \boldsymbol{L}_1 \boldsymbol{A}= \left[ \begin{array}{lll} 1 & 0 & 0\\ - 1 & 1 & 0\\ - 1 & 0 & 1 \end{array} \right] \left[ \begin{array}{lll} 1 & 1 & 1\\ 1 & 2 & 2\\ 1 & 2 & 3 \end{array} \right] = \left[ \begin{array}{lll} 1 & 1 & 1\\ 0 & 1 & 1\\ 0 & 1 & 2 \end{array} \right]\)
Stage 2 multiplier matrix operation gives
\(\displaystyle \boldsymbol{L}_2 \boldsymbol{L}_1 \boldsymbol{A}= \left[ \begin{array}{lll} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & - 1 & 1 \end{array} \right] \left[ \begin{array}{lll} 1 & 1 & 1\\ 0 & 1 & 1\\ 0 & 1 & 2 \end{array} \right] = \left[ \begin{array}{lll} 1 & 1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 1 \end{array} \right] =\boldsymbol{U}.\)
Multiplication by multiplier matrix inverses:
\(\displaystyle \boldsymbol{A}=\boldsymbol{L}_1^{- 1}
\boldsymbol{L}_2^{- 1}
\boldsymbol{U}=\boldsymbol{L}
\boldsymbol{U}\)
with
\(\displaystyle \boldsymbol{L}=\boldsymbol{L}_1^{- 1} \boldsymbol{L}_2^{- 1} = \left[ \begin{array}{lll} 1 & 0 & 0\\ 1 & 1 & 0\\ 1 & 0 & 1 \end{array} \right] \left[ \begin{array}{lll} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 1 & 1 \end{array} \right] = \left[ \begin{array}{lll} 1 & 0 & 0\\ 1 & 1 & 0\\ 1 & 1 & 1 \end{array} \right] .\)
-
Prove that permutation matrices \(\boldsymbol{P}, \boldsymbol{Q}\)
from Ex. 1, 3 are orthogonal matrices.
Solution. Verify that \(\boldsymbol{P}\boldsymbol{P}^T =\boldsymbol{I}\)
and \(\boldsymbol{Q}\boldsymbol{Q}^T =\boldsymbol{I}\); since
\(\boldsymbol{Q}=\boldsymbol{P}\), a single computation suffices:
\(\displaystyle \left[ \begin{array}{lll} 0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \end{array} \right] \left[ \begin{array}{lll} 0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0 \end{array} \right] = \left[ \begin{array}{lll} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{array} \right]\)
-
Find the \(Q R\) factorization of
\(\displaystyle \boldsymbol{A}= \left[ \begin{array}{lll} 0 & 5 & 6\\ 0 & 0 & 9\\ 1 & 2 & 3 \end{array} \right] .\)
Solution. A row permutation brings
\(\boldsymbol{A}\) to upper triangular form
\(\displaystyle \boldsymbol{P}\boldsymbol{A}= \left[ \begin{array}{lll} 0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0 \end{array} \right] \left[ \begin{array}{lll} 0 & 5 & 6\\ 0 & 0 & 9\\ 1 & 2 & 3 \end{array} \right] = \left[ \begin{array}{lll} 1 & 2 & 3\\ 0 & 5 & 6\\ 0 & 0 & 9 \end{array} \right] =\boldsymbol{R}\)
Since \(\boldsymbol{P}\) is orthogonal, so is its inverse, and we
obtain
\(\displaystyle \boldsymbol{A}=\boldsymbol{Q}\boldsymbol{R},
\boldsymbol{Q}=\boldsymbol{P}^T .\)
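A numerical check that this permutation-based QR is valid; a sketch in Python (assuming numpy is available):

```python
import numpy as np

# Verify A = Q R with Q = P^T, R = P A upper triangular.
A = np.array([[0., 5., 6.],
              [0., 0., 9.],
              [1., 2., 3.]])
P = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
R = P @ A    # upper triangular by construction
Q = P.T      # orthogonal, since P is a permutation matrix
print(np.allclose(Q @ R, A), np.allclose(np.triu(R), R))
```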
-
Find the eigendecomposition of \(\boldsymbol{R} \in \mathbb{R}^{2
\times 2}\), the matrix of reflection across the first bisector (the
\(x = y\) line).
Solution. A unit vector along the first bisector is
\(\displaystyle \boldsymbol{q}_1 = \frac{1}{\sqrt{2}} \left[
\begin{array}{l}
1\\
1
\end{array} \right],\)
and a unit vector orthogonal to the first bisector is
\(\displaystyle \boldsymbol{q}_2 = \frac{1}{\sqrt{2}} \left[ \begin{array}{l} - 1\\ 1 \end{array} \right] .\)
Reflection across the first bisector leaves \(\boldsymbol{q}_1\)
unchanged and reverses \(\boldsymbol{q}_2\),
\(\displaystyle \boldsymbol{R}\boldsymbol{q}_1 =\boldsymbol{q}_1, \boldsymbol{R}\boldsymbol{q}_2 = -\boldsymbol{q}_2,\)
hence
\(\displaystyle \boldsymbol{R} \left[ \begin{array}{ll} \boldsymbol{q}_1 & \boldsymbol{q}_2 \end{array} \right] = \left[ \begin{array}{ll} \boldsymbol{q}_1 & \boldsymbol{q}_2 \end{array} \right] \left[ \begin{array}{ll} 1 & 0\\ 0 & - 1 \end{array} \right] \Rightarrow \boldsymbol{R}\boldsymbol{Q}=\boldsymbol{Q}\boldsymbol{\Lambda} \Rightarrow \boldsymbol{R}=\boldsymbol{Q}\boldsymbol{\Lambda}\boldsymbol{Q}^T,\)
the requested (orthogonal) eigendecomposition.
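Reassembling \(\boldsymbol{Q}\boldsymbol{\Lambda}\boldsymbol{Q}^T\) gives a quick sanity check; a sketch in Python (assuming numpy is available), using eigenvalues \(1\) along the bisector and \(-1\) across it (a reflection reverses the orthogonal direction):

```python
import numpy as np

# Eigendecomposition of reflection across the line x = y.
q1 = np.array([1., 1.]) / np.sqrt(2)   # on the bisector, eigenvalue +1
q2 = np.array([-1., 1.]) / np.sqrt(2)  # orthogonal to it, eigenvalue -1
Q = np.column_stack([q1, q2])
Lam = np.diag([1., -1.])
R = Q @ Lam @ Q.T
# Reflection across x = y swaps the two coordinates of any vector.
print(np.round(R, 12))
```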
-
Find the SVD of \(\boldsymbol{R} \in \mathbb{R}^{2 \times 2}\), the
rotation by angle \(\theta\) matrix.
Solution. Rotation does not change the norm of a
vector hence in the SVD
\(\boldsymbol{R}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T\)
identify \(\boldsymbol{\Sigma}=\boldsymbol{I}\). Write
\(\displaystyle
\boldsymbol{R}=\boldsymbol{R}\boldsymbol{I}\boldsymbol{I}\)
and identify \(\boldsymbol{U}=\boldsymbol{R}\) (an orthogonal
matrix), \(\boldsymbol{\Sigma}=\boldsymbol{I}\) (a diagonal matrix
with ordered, nonnegative elements), \(\boldsymbol{V}^T
=\boldsymbol{I}\) (an orthogonal matrix), the requested SVD.
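This can be confirmed for any angle; a sketch in Python (assuming numpy is available, with an arbitrarily chosen test angle):

```python
import numpy as np

# SVD of a rotation: U = R, Sigma = I, V = I, since rotation is orthogonal.
theta = 0.7  # arbitrary test angle (an assumption for the check)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
U, S, Vt = R, np.eye(2), np.eye(2)
print(np.allclose(U @ S @ Vt, R))
# numpy's own SVD confirms both singular values equal 1.
print(np.linalg.svd(R, compute_uv=False))
```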
-
Find the coordinates of \(\boldsymbol{b}= \left[ \begin{array}{lll}
6 & 15 & 24
\end{array} \right]^T\) on the \(\mathbb{R}^3\) basis
vectors
\(\displaystyle \left\{ \left[ \begin{array}{l}
1\\
4\\
7
\end{array} \right], \left[ \begin{array}{l}
2\\
5\\
8
\end{array} \right], \left[ \begin{array}{l}
3\\
6\\
9
\end{array} \right] \right\} .\)
Solution. This asks for the solution of the system
\(\displaystyle \boldsymbol{A}\boldsymbol{x}= \left[ \begin{array}{lll} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{array} \right] \boldsymbol{x}=\boldsymbol{b}= \left[ \begin{array}{l} 6\\ 15\\ 24 \end{array} \right] .\)
By inspection, one solution is
\(\displaystyle \boldsymbol{x}= \left[ \begin{array}{l} 1\\ 1\\ 1 \end{array} \right] .\)
(Note that \(\boldsymbol{A}\) is singular, since \(\boldsymbol{a}_1 -
2\boldsymbol{a}_2 +\boldsymbol{a}_3 =\boldsymbol{0}\); the three
given vectors are linearly dependent, so they do not actually form a
basis of \(\mathbb{R}^3\) and the coordinates are not unique.)
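A quick numerical check of this solution; a sketch in Python (assuming numpy is available):

```python
import numpy as np

# Check that x = (1, 1, 1) reproduces b in the given (dependent) vectors.
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
b = np.array([6., 15., 24.])
x = np.array([1., 1., 1.])
print(np.allclose(A @ x, b))
```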
-
Solve the least squares problem \(\min_{\boldsymbol{x}} \|
\boldsymbol{b}-\boldsymbol{A}\boldsymbol{x} \|\) for
\(\displaystyle \boldsymbol{b}= \left[ \begin{array}{l} 1\\ 2\\ 3 \end{array} \right], \boldsymbol{A}= \left[ \begin{array}{ll} 3 & - 5\\ - 11 & 21\\ 0 & 0 \end{array} \right] .\)
Solution. Linear combinations of columns of
\(\boldsymbol{A}= \left[ \begin{array}{ll}
\boldsymbol{a}_1 &
\boldsymbol{a}_2
\end{array} \right]\) lead to a zero component in
the \(x_3\) direction. The best approximation of \(\boldsymbol{b}\)
exactly recovers the first two components
\(\displaystyle x_1 \left[ \begin{array}{l} 3\\ - 11 \end{array} \right] + x_2 \left[ \begin{array}{l} - 5\\ 21 \end{array} \right] = \left[ \begin{array}{l} 1\\ 2 \end{array} \right],\)
with solution \(x_1 = 3.875 = 3 \frac{7}{8}\), \(x_2 = 2.125 = 2
\frac{1}{8}\).
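The least squares solution can be checked directly; a sketch in Python (assuming numpy is available):

```python
import numpy as np

# The zero third row of A means the residual in the x3 direction is
# fixed at b3; the 2x2 system for the first two components is solvable.
A = np.array([[3., -5.],
              [-11., 21.],
              [0., 0.]])
b = np.array([1., 2., 3.])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)
```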
-
Find the line passing closest to points \(\mathcal{D}= \{ (- 2, 3),
(- 1, 1), (0, 1), (1, 3), (3, 7) \}\).
Solution. Form vectors
\(\displaystyle \boldsymbol{x}= \left[ \begin{array}{l} - 2\\ - 1\\ 0\\ 1\\ 3 \end{array} \right], \boldsymbol{y}= \left[ \begin{array}{l} 3\\ 1\\ 1\\ 3\\ 7 \end{array} \right],\)
and solve the least squares problem \(\min_{\boldsymbol{c}} \|
\boldsymbol{A}\boldsymbol{c}-\boldsymbol{y} \|\) to find the line \(y
(x) = c_0 + c_1 x\),
\(\displaystyle \min_{\boldsymbol{c}} \left\| \left[ \begin{array}{ll} \boldsymbol{x}^0 & \boldsymbol{x}^1 \end{array} \right] \left[ \begin{array}{l} c_0\\ c_1 \end{array} \right] -\boldsymbol{y} \right\|, \boldsymbol{A}= \left[ \begin{array}{ll} 1 & - 2\\ 1 & - 1\\ 1 & 0\\ 1 & 1\\ 1 & 3 \end{array} \right] = \left[ \begin{array}{ll} \boldsymbol{a}_1 & \boldsymbol{a}_2 \end{array} \right] .\)
The error vector
\(\boldsymbol{e}=\boldsymbol{y}-\boldsymbol{A}\boldsymbol{c}\) is
minimized when \(\boldsymbol{e}\) is orthogonal to
\(\boldsymbol{a}_1\), \(\boldsymbol{a}_2\),
\(\displaystyle \boldsymbol{A}^T
\boldsymbol{e}=\boldsymbol{A}^T
(\boldsymbol{y}-\boldsymbol{A}\boldsymbol{c})
= 0,\)
leading to the normal system
\(\displaystyle (\boldsymbol{A}^T
\boldsymbol{A})
\boldsymbol{c}=\boldsymbol{M}\boldsymbol{c}=\boldsymbol{A}^T
\boldsymbol{y}=\boldsymbol{d}.\)
Compute
\(\displaystyle \boldsymbol{M}=\boldsymbol{A}^T \boldsymbol{A}= \left[ \begin{array}{lllll} 1 & 1 & 1 & 1 & 1\\ - 2 & - 1 & 0 & 1 & 3 \end{array} \right] \left[ \begin{array}{ll} 1 & - 2\\ 1 & - 1\\ 1 & 0\\ 1 & 1\\ 1 & 3 \end{array} \right] = \left[ \begin{array}{ll} 5 & 1\\ 1 & 15 \end{array} \right], \boldsymbol{d}= \left[ \begin{array}{lllll} 1 & 1 & 1 & 1 & 1\\ - 2 & - 1 & 0 & 1 & 3 \end{array} \right] \left[ \begin{array}{l} 3\\ 1\\ 1\\ 3\\ 7 \end{array} \right] = \left[ \begin{array}{l} 15\\ 17 \end{array} \right]\)
with solution \(c_0 = 104 / 37\), \(c_1 = 35 / 37\). (Note
\(\boldsymbol{M}_{11} = 5\), the number of data points.)
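The normal equations are easy to verify numerically; a sketch in Python (assuming numpy is available):

```python
import numpy as np

# Best-fit line y(x) = c0 + c1 x through the five data points,
# via the normal equations M c = d with M = A^T A, d = A^T y.
x = np.array([-2., -1., 0., 1., 3.])
y = np.array([3., 1., 1., 3., 7.])
A = np.column_stack([np.ones_like(x), x])
M = A.T @ A
d = A.T @ y
c = np.linalg.solve(M, d)
print(M)
print(c)
```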
-
Find an orthonormal basis for \(C (\boldsymbol{A})\) where
\(\displaystyle \boldsymbol{A}= \left[ \begin{array}{ll} 1 & - 2\\ 1 & 0\\ 1 & 1\\ 1 & 3 \end{array} \right] .\)
Solution. With \(\boldsymbol{A}= \left[
\begin{array}{ll}
\boldsymbol{a}_1 & \boldsymbol{a}_2
\end{array}
\right]\), find
\(\displaystyle \boldsymbol{q}_1 =\boldsymbol{a}_1 / \|
\boldsymbol{a}_1 \| = \frac{1}{2}
\left[ \begin{array}{l}
1\\
1\\
1\\
1
\end{array} \right] .\)
Subtract component of \(\boldsymbol{a}_2\) along direction of
\(\boldsymbol{q}_1\)
\(\displaystyle \boldsymbol{v}_2 =\boldsymbol{a}_2 -
(\boldsymbol{q}_1^T \boldsymbol{a}_2)
\boldsymbol{q}_1 = \left[
\begin{array}{l}
- 2\\
0\\
1\\
3
\end{array} \right] -
\frac{1}{2} \left[ \begin{array}{l}
1\\
1\\
1\\
1
\end{array} \right] = \frac{1}{2} \left[ \begin{array}{l}
- 5\\
- 1\\
1\\
5
\end{array} \right]\)
Divide by norm to obtain second orthonormal vector
\(\displaystyle \boldsymbol{q}_2 =\boldsymbol{v}_2 / \|
\boldsymbol{v}_2 \| = \frac{1}{2
\sqrt{13}} \left[
\begin{array}{l}
- 5\\
- 1\\
1\\
5
\end{array} \right] .\)
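The Gram-Schmidt steps above translate directly into code; a sketch in Python (assuming numpy is available):

```python
import numpy as np

# Gram-Schmidt on the two columns of A: normalize a1, then remove the
# component of a2 along q1 and normalize the remainder.
A = np.array([[1., -2.],
              [1., 0.],
              [1., 1.],
              [1., 3.]])
a1, a2 = A[:, 0], A[:, 1]
q1 = a1 / np.linalg.norm(a1)
v2 = a2 - (q1 @ a2) * q1
q2 = v2 / np.linalg.norm(v2)
Q = np.column_stack([q1, q2])
print(np.allclose(Q.T @ Q, np.eye(2)))  # columns are orthonormal
```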
-
With \(\boldsymbol{A}\) from Ex. 4 solve the least squares problem
\(\min_{\boldsymbol{x}} \|
\boldsymbol{b}-\boldsymbol{A}\boldsymbol{x} \|\) where
\(\displaystyle \boldsymbol{b}= \left[ \begin{array}{l} - 4\\ - 3\\ 3\\ 0 \end{array} \right] .\)
Solution. With \(\boldsymbol{Q}= \left[
\begin{array}{ll}
\boldsymbol{q}_1 & \boldsymbol{q}_2
\end{array}
\right]\) computed above, find the projection of \(\boldsymbol{b}\)
onto \(C (\boldsymbol{A})\)
\(\displaystyle \boldsymbol{c}=\boldsymbol{Q}\boldsymbol{Q}^T
\boldsymbol{b}=\boldsymbol{Q}
(\boldsymbol{Q}^T \boldsymbol{b})\)
The vector \(\boldsymbol{c}\) is within \(C (\boldsymbol{A}) = C
(\boldsymbol{Q})\), hence
\(\boldsymbol{c}=\boldsymbol{A}\boldsymbol{x}=\boldsymbol{Q}\boldsymbol{R}\boldsymbol{x}\).
Solve the triangular system
\(\displaystyle \boldsymbol{R}\boldsymbol{x}=\boldsymbol{Q}^T \boldsymbol{b}, \boldsymbol{R}=\boldsymbol{Q}^T \boldsymbol{A}= \left[ \begin{array}{ll} 2 & 1\\ 0 & \sqrt{13} \end{array} \right], \boldsymbol{Q}^T \boldsymbol{b}= \left[ \begin{array}{l} - 2\\ \sqrt{13} \end{array} \right],\)
with solution
\(\displaystyle \boldsymbol{x}= \left[ \begin{array}{l} - 3 / 2\\ 1 \end{array} \right] .\)
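The QR route to the least squares solution can be checked numerically; a sketch in Python (assuming numpy is available; numpy's `qr` may differ from the Gram-Schmidt factors by column signs, which does not affect the solution):

```python
import numpy as np

# Least squares via thin QR: solve R x = Q^T b.
A = np.array([[1., -2.],
              [1., 0.],
              [1., 1.],
              [1., 3.]])
b = np.array([-4., -3., 3., 0.])
Q, R = np.linalg.qr(A)            # thin QR, R is 2x2 upper triangular
x = np.linalg.solve(R, Q.T @ b)
print(x)
```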
-
What is the best approximant \(\boldsymbol{c} \in C
(\boldsymbol{A})\) (\(\boldsymbol{A}\) from Ex. 4) of
\(\boldsymbol{b}\) from Ex. 5?
Solution. See the previous exercise:
\(\displaystyle \boldsymbol{c}=\boldsymbol{Q}\boldsymbol{Q}^T \boldsymbol{b}= \frac{1}{2} \left[ \begin{array}{l} - 7\\ - 3\\ - 1\\ 3 \end{array} \right] .\)
-
Find the eigenvalues and eigenvectors of
\(\displaystyle \boldsymbol{A}= \left[ \begin{array}{ll} 2 & - 1\\ - 1 & 2 \end{array} \right] .\)
Solution. The characteristic polynomial
\(\displaystyle p (\lambda) = \det (\lambda \boldsymbol{I}-\boldsymbol{A}) = \left| \begin{array}{ll} \lambda - 2 & 1\\ 1 & \lambda - 2 \end{array} \right| = \lambda^2 - 4 \lambda + 3 = (\lambda - 3) (\lambda - 1)\)
with roots (the eigenvalues) \(\lambda_1 = 3\), \(\lambda_2 = 1\).
Find eigenvectors \(\boldsymbol{x}_1\), \(\boldsymbol{x}_2\) from
bases of the null spaces of \(\boldsymbol{A}- \lambda_{1, 2}
\boldsymbol{I}\)
\(\displaystyle \left[ \begin{array}{ll} - 1 & - 1\\ - 1 & - 1 \end{array} \right] \sim \left[ \begin{array}{ll} - 1 & - 1\\ 0 & 0 \end{array} \right] \Rightarrow \boldsymbol{x}_1 = \left[ \begin{array}{l} 1\\ - 1 \end{array} \right]\)
\(\displaystyle \left[ \begin{array}{ll} 1 & - 1\\ - 1 & 1 \end{array} \right] \sim \left[ \begin{array}{ll} 1 & - 1\\ 0 & 0 \end{array} \right] \Rightarrow \boldsymbol{x}_2 = \left[ \begin{array}{l} 1\\ 1 \end{array} \right] .\)
-
For \(\boldsymbol{A}\) from Ex. 7 find the eigenvalues and
eigenvectors of \(\boldsymbol{A}^2\), \(\boldsymbol{A}^{- 1}\),
\(\boldsymbol{A}+ 2\boldsymbol{I}\).
Solution. With \(\boldsymbol{A}\boldsymbol{x}=
\lambda \boldsymbol{x}\) compute \(\boldsymbol{A}^2
\boldsymbol{x}=\boldsymbol{A}
(\boldsymbol{A}\boldsymbol{x})
=\boldsymbol{A} (\lambda
\boldsymbol{x}) = \lambda^2 \boldsymbol{x}\), hence
\(\boldsymbol{A}^2\) has the same eigenvectors as \(\boldsymbol{A}\) and
eigenvalues \(\lambda_1^2 = 9\), \(\lambda_2^2 = 1\).
Multiply \(\boldsymbol{A}\boldsymbol{x}= \lambda \boldsymbol{x}\) by
\(\boldsymbol{A}^{- 1}\) to find
\(\displaystyle \boldsymbol{A}^{- 1} \boldsymbol{x}=
\frac{1}{\lambda} \boldsymbol{x},\)
hence \(\boldsymbol{A}^{- 1}\) has the same eigenvectors as
\(\boldsymbol{A}\), with eigenvalues \(1 / \lambda_1 = 1 / 3\),
\(1 / \lambda_2 = 1\).
Compute
\(\displaystyle (\boldsymbol{A}+ 2\boldsymbol{I}) \boldsymbol{x}=
(\lambda + 2) \boldsymbol{x}\)
and find that \(\boldsymbol{A}+ 2\boldsymbol{I}\) has the same
eigenvectors as \(\boldsymbol{A}\) with eigenvalues \(\lambda_1 =
5\), \(\lambda_2 = 3\).
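All three transformations can be verified at once; a sketch in Python (assuming numpy is available; `eigvalsh` returns eigenvalues of a symmetric matrix in ascending order):

```python
import numpy as np

# Eigenvalues of A, A^2, A^{-1}, A + 2I are lam, lam^2, 1/lam, lam + 2,
# all with the same eigenvectors.
A = np.array([[2., -1.],
              [-1., 2.]])
print(np.linalg.eigvalsh(A))                    # ascending: 1, 3
print(np.linalg.eigvalsh(A @ A))                # 1, 9
print(np.linalg.eigvalsh(np.linalg.inv(A)))     # 1/3, 1
print(np.linalg.eigvalsh(A + 2 * np.eye(2)))    # 3, 5
```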
-
Is the following matrix diagonalizable?
\(\displaystyle \boldsymbol{A}= \left[ \begin{array}{lll} 1 & 1 & 0\\ 0 & 1 & 1\\ 0 & 0 & 1 \end{array} \right] .\)
Solution. The characteristic polynomial is
\(\displaystyle p (\lambda) = \det (\lambda \boldsymbol{I}-\boldsymbol{A}) = \left| \begin{array}{lll} \lambda - 1 & - 1 & 0\\ 0 & \lambda - 1 & - 1\\ 0 & 0 & \lambda - 1 \end{array} \right| = (\lambda - 1)^3\)
with the single eigenvalue \(\lambda = 1\), of algebraic multiplicity
3. Find a basis of the null space of \(\boldsymbol{A}- \lambda
\boldsymbol{I}\)
\(\displaystyle \boldsymbol{A}- \lambda \boldsymbol{I}= \left[ \begin{array}{lll} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{array} \right] .\)
From the above, the null space of \(\boldsymbol{A}- \lambda
\boldsymbol{I}\) has dimension 1 (the geometric multiplicity), less
than the algebraic multiplicity, hence \(\boldsymbol{A}\) is not
diagonalizable.
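The geometric multiplicity follows from a rank computation; a sketch in Python (assuming numpy is available):

```python
import numpy as np

# Geometric multiplicity of lambda = 1 is dim null(A - I) = n - rank(A - I).
A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])
n = 3
geometric = n - np.linalg.matrix_rank(A - np.eye(n))
print(geometric)  # less than the algebraic multiplicity 3
```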
-
Find the SVD of
\(\displaystyle \boldsymbol{A}= \left[ \begin{array}{ll}
1 & 2\\
2 & 4
\end{array} \right] .\)
Solution. The matrix \(\boldsymbol{A}= \left[
\begin{array}{ll}
\boldsymbol{a}_1 & \boldsymbol{a}_2
\end{array}
\right]\) has \(\boldsymbol{a}_2 = 2\boldsymbol{a}_1\) and therefore
\(\operatorname{rank} (\boldsymbol{A}) = 1\). Characteristic
polynomial of \(\boldsymbol{A}^T \boldsymbol{A}\) is
\(\displaystyle \boldsymbol{A}^T \boldsymbol{A}= \left[ \begin{array}{ll} 1 & 2\\ 2 & 4 \end{array} \right] \left[ \begin{array}{ll} 1 & 2\\ 2 & 4 \end{array} \right] = \left[ \begin{array}{ll} 5 & 10\\ 10 & 20 \end{array} \right], p (\lambda) = \det (\lambda \boldsymbol{I}-\boldsymbol{A}^T \boldsymbol{A}) = \left| \begin{array}{ll} \lambda - 5 & - 10\\ - 10 & \lambda - 20 \end{array} \right| = \lambda^2 - 25 \lambda\)
has roots \(\lambda_1 = 25\), \(\lambda_2 = 0\) and the singular
values of \(\boldsymbol{A}\) are therefore \(\sigma_1 =
\sqrt{\lambda_1} = 5\), \(\sigma_2 = 0\). Eigenvectors of
\(\boldsymbol{A}^T \boldsymbol{A}\) are given by basis vectors of
null spaces of \(\boldsymbol{A}^T \boldsymbol{A}- \lambda
\boldsymbol{I}\)
\(\displaystyle \left[ \begin{array}{ll} - 20 & 10\\ 10 & - 5 \end{array} \right] \sim \left[ \begin{array}{ll} - 20 & 10\\ 0 & 0 \end{array} \right] \Rightarrow \boldsymbol{v}_1 = \frac{1}{\sqrt{5}} \left[ \begin{array}{l} 1\\ 2 \end{array} \right]\)
\(\displaystyle \left[ \begin{array}{ll} 5 & 10\\ 10 & 20 \end{array} \right] \sim \left[ \begin{array}{ll} 5 & 10\\ 0 & 0 \end{array} \right] \Rightarrow \boldsymbol{v}_2 = \frac{1}{\sqrt{5}} \left[ \begin{array}{l} - 2\\ 1 \end{array} \right]\)
Since \(\boldsymbol{A}\) is symmetric, \(\boldsymbol{A}^T
\boldsymbol{A}=\boldsymbol{A}\boldsymbol{A}^T\) and the same
eigenvectors are obtained. The SVD
\(\boldsymbol{A}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T\)
is given by
\(\displaystyle \boldsymbol{U}=\boldsymbol{V}= \left[ \begin{array}{ll} \boldsymbol{v}_1 & \boldsymbol{v}_2 \end{array} \right], \boldsymbol{\Sigma}= \left[ \begin{array}{ll} 5 & 0\\ 0 & 0 \end{array} \right] .\)
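Reassembling the factors confirms the decomposition; a sketch in Python (assuming numpy is available):

```python
import numpy as np

# SVD of the rank-1 matrix: singular values (5, 0), U = V.
A = np.array([[1., 2.],
              [2., 4.]])
v1 = np.array([1., 2.]) / np.sqrt(5)
v2 = np.array([-2., 1.]) / np.sqrt(5)
U = V = np.column_stack([v1, v2])
S = np.diag([5., 0.])
print(np.allclose(U @ S @ V.T, A))
print(np.linalg.svd(A, compute_uv=False))  # singular values, descending
```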