MATH547: Linear algebra for applications in data science
May 29, 2020
Solve the following problems (5 course points each). Present a brief motivation of your method of solution. Explicitly state any conditions that must be met for the solution procedure to be valid. Organize your computation and writing so the solution you present is readily legible. No credit is awarded for statement of the final answer to a problem without presentation of the solution procedure.
This is an open-book test, and you are free to consult the textbook or
use software to check your solution. Your submission must however
reflect your individual effort with no aid from any other person. If
you studied the course material and understood solutions to the
homework assignments, drafting examination question solutions in
TeXmacs should take about 2 hours. The allotted time is 3:10 hours,
thus also providing flexibility for internet connection interruption.
Draft your solution in TeXmacs in this file. Upload your answer into Sakai both as a TeXmacs file and as a PDF. Allow at least 10 minutes before the submission cut-off time to ensure you can upload your files.
Two matrices $A, B \in \mathbb{R}^{m \times m}$ are said to be similar, denoted as $A \sim B$, if there exists some invertible matrix $P$ such that $B = P^{-1} A P$. Prove that matrix similarity is an equivalence relation.
Reflexivity: $A \sim A$ holds with $P = I$, since $A = I^{-1} A I$.
Symmetry: if $A \sim B$, then $\exists P$ invertible such that $B = P^{-1} A P$. Multiply on the left by $P$, on the right by $P^{-1}$ to obtain $P B P^{-1} = A$, hence with $Q = P^{-1}$, $A = Q^{-1} B Q$ and $B \sim A$.
Transitivity: if $A \sim B$ and $B \sim C$, then $\exists P, Q$ invertible such that $B = P^{-1} A P$ and $C = Q^{-1} B Q$. Replace to obtain
$C = Q^{-1} P^{-1} A P Q = (P Q)^{-1} A (P Q),$
with $R = P Q$ invertible, hence $A \sim C$.
Matrix similarity is verified to be an equivalence relation.
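The symmetry and transitivity identities can also be spot-checked numerically. The following NumPy sketch (illustrative only, outside the required proof and the course's Octave workflow) builds similar matrices from random data and verifies the two change-of-basis formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
A = rng.standard_normal((m, m))
P = rng.standard_normal((m, m))   # generically invertible
Q = rng.standard_normal((m, m))   # generically invertible

# A ~ B via P, and B ~ C via Q
B = np.linalg.inv(P) @ A @ P
C = np.linalg.inv(Q) @ B @ Q

# Symmetry: A = P B P^{-1}, i.e. B ~ A via P^{-1}
assert np.allclose(P @ B @ np.linalg.inv(P), A)

# Transitivity: C = (PQ)^{-1} A (PQ), i.e. A ~ C via R = PQ
R = P @ Q
assert np.allclose(np.linalg.inv(R) @ A @ R, C)
print("similarity checks passed")
```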
A matrix $A \in \mathbb{R}^{n \times n}$ is said to be skew-symmetric if $A^T = -A$.
Are skew-symmetric matrices a vector subspace of the vector space of matrices $\mathbb{R}^{n \times n}$?
Compute $x^T A x$ when $A$ is skew-symmetric.
The zero matrix is skew-symmetric, so the set is nonempty. Verify closure for $C = \alpha A + \beta B$, with $\alpha, \beta$ scalars and $A, B$ skew-symmetric. Compute $C^T = \alpha A^T + \beta B^T = -\alpha A - \beta B = -C$, verified as skew-symmetric, hence a subspace of $\mathbb{R}^{n \times n}$.
Compute the transpose $(x^T A x)^T = x^T A^T x = -x^T A x$. Since $x^T A x$ is a scalar, it equals its own transpose, so $x^T A x = -x^T A x$, hence $x^T A x = 0$.
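An illustrative NumPy check of both results (a skew-symmetric matrix built as $M - M^T$ and its vanishing quadratic form); this is a sanity check, not part of the required solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M - M.T                   # A^T = -A, skew-symmetric by construction
x = rng.standard_normal(n)

assert np.allclose(A.T, -A)   # skew-symmetry holds
q = x @ A @ x                 # the quadratic form
print(q)                      # vanishes up to rounding error
```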
Consider for $u \in \mathbb{R}^m$, $\|u\| = 1$, the matrix $Q = I - 2 u u^T$.
Show that $Q$ is orthogonal.
Can $u$ be chosen such that $Q$ is a projector?
Compute
$Q^T = (I - 2 u u^T)^T = I - 2 u u^T = Q .$
Since $Q^T = Q$, obtain
$Q^T Q = Q^2 = (I - 2 u u^T)(I - 2 u u^T) = I - 4 u u^T + 4 u (u^T u) u^T .$
And since $u^T u = \|u\|^2 = 1$, replacing in the above gives $Q^T Q = I$, hence $Q$ is orthogonal.
For $Q$ to be a projector, $Q^2 = Q$ must hold. From the computation above, $Q^2 = I$, implying $Q = I$ or $u u^T = 0$, contradicting $\|u\| = 1$; hence $Q$ cannot be a projector.
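Assuming the matrix in question is the reflector $Q = I - 2 u u^T$ with $\|u\| = 1$, both properties can be verified numerically. A NumPy sketch (illustrative, outside the required Octave workflow):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
u = rng.standard_normal(m)
u /= np.linalg.norm(u)                    # enforce ||u|| = 1
Q = np.eye(m) - 2.0 * np.outer(u, u)      # the reflector I - 2 u u^T

assert np.allclose(Q.T @ Q, np.eye(m))    # orthogonality: Q^T Q = I
assert np.allclose(Q @ Q, np.eye(m))      # Q^2 = I ...
assert not np.allclose(Q, np.eye(m))      # ... but Q != I, so Q^2 != Q
print("Q is orthogonal but not a projector")
```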
The matrix $A$ contains surveying data (https://math.nist.gov/MatrixMarket/data/Harwell-Boeing/lsq/well1033.html), and its singular value decomposition is $A = U \Sigma V^T$.
octave] cd /home/student/courses/MATH547ML/data/lsq; load well1033
octave] A=Expression1;
Construct a vector $b = U z$, with $z$ a unit vector of random numbers ($\|z\| = 1$).
Solve the least squares problem $\min_x \|A x - b\|$.
What is the error $\|A x - b\|$?
Remember to briefly comment the solution you present. Presentation of Octave commands without explanation of what you are doing does not receive full credit.
Compute SVD and generate random numbers
octave] [U,S,V]=svd(A); [m,n]=size(A); z=rand(m,1); z=z/norm(z); b=U*z; disp([norm(z) norm(b)])
1.0000 1.0000
The least squares solution $x$ satisfies $A x = y$, where $y$ is the projection of $b$ onto the column space $C(A)$. The projector can be computed from the economy-size $QR$-decomposition, as $P = Q Q^T$.
octave] [Q,R]=qr(A,0); y=Q*Q'*b; x=A\y;
The projector can also be computed from the SVD, with $U_r$ the first $r = \operatorname{rank}(A)$ columns of $U$, and $P = U_r U_r^T$.
octave] r=rank(A); u=U(:,1:r)*U(:,1:r)'*b; w=A\u; disp(norm(x-w));
1.2416e-14
Compute the error $\|A x - b\|$
octave] err=norm(A*x-b)
err = 0.81930
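The same two projector constructions (economy-size QR and rank-$r$ SVD) can be reproduced on a small synthetic system. The following NumPy sketch uses made-up random data, not the well1033 matrix, and is for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 40, 10
A = rng.standard_normal((m, n))      # generically of full column rank
b = rng.standard_normal(m)

# projector onto C(A) from the economy-size QR decomposition
Q, R = np.linalg.qr(A)               # Q: m x n, orthonormal columns
y = Q @ (Q.T @ b)                    # projection of b onto C(A)
x = np.linalg.solve(R, Q.T @ b)      # least squares solution from R x = Q^T b

# the same projector from the SVD: U_r U_r^T with r = rank(A) = n here
U, s, Vt = np.linalg.svd(A, full_matrices=False)
u_proj = U @ (U.T @ b)
assert np.allclose(y, u_proj)        # both projectors agree

# the residual equals the distance from b to C(A)
err = np.linalg.norm(A @ x - b)
assert np.isclose(err, np.linalg.norm(b - y))
print("projection checks passed, err =", err)
```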
For the same data as in Problem 4:
Construct a reduced matrix $R \in \mathbb{R}^{k \times k}$ that contains the first $k$ singular modes of $A$, such that $R = C^T A B$, with $C, B$ matrices with orthonormal columns.
Find the projection of the vector $b$ from Problem 4 onto the column space of $C$.
Solve the reduced problem $R u = v$.
Compare $B u$ to the least squares solution $x$ from Problem 4.
Remember to briefly comment the solution you present. Presentation of Octave commands without explanation of what you are doing does not receive full credit.
Extract the singular values and check the ratio $\sigma_k / \sigma_1$ to choose $k$.
octave] s=diag(S); k=310; s(k)/s(1)
ans = 0.073165
Define $C$, $B$, $R$ from the singular vectors of $A$
octave] C=U(:,1:k); B=V(:,1:k); R=C'*A*B;
Since $C$ has orthonormal columns, $C^T C = I$, the coordinates of the projection are the solution of $C^T C v = C^T b$, or $v = C^T b$
octave] v=C'*b;
Find the solution of $R u = v$
octave] u=R\v;
Compute the relative error $\|B u - x\| / \|x\|$
octave] norm(B*u-x)/norm(x)
ans = 0.99360
The above relative error is large. Define a function to carry out the above steps for arbitrary $k$
octave] function [rerr,Aerr]=ReducedModel(k,A,x,b,s,U,V)
  C=U(:,1:k); B=V(:,1:k); R=C'*A*B;
  v=C'*b; u=R\v;
  rerr=norm(B*u-x)/norm(x);
  Aerr=norm(A-U(:,1:k)*diag(s(1:k))*V(:,1:k)');
end
octave] [rerr,Aerr]=ReducedModel(310,A,x,b,s,U,V)
rerr = 0.99360
Aerr = 0.099048
octave] [rerr,Aerr]=ReducedModel(315,A,x,b,s,U,V)
rerr = 0.96736
Aerr = 0.028965
octave] [rerr,Aerr]=ReducedModel(319,A,x,b,s,U,V)
rerr = 0.74059
Aerr = 0.010874
octave] [rerr,Aerr]=ReducedModel(320,A,x,b,s,U,V)
rerr = 1.1217e-14
Aerr = 2.5645e-14
From the above, the reduced model reproduces the least squares solution only at full rank $k = 320 = n$; there is no benefit from model reduction in this case.
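The full-rank behavior of the reduced model can be confirmed on a small synthetic system. A NumPy transcription of the steps above (made-up random data, illustrative only): at $k = n$ the reduced matrix $R = C^T A B$ equals the diagonal of singular values, so $B u$ recovers the least squares solution and the rank-$k$ SVD reconstructs $A$ exactly up to rounding.

```python
import numpy as np

def reduced_model(k, A, x, b, s, U, V):
    """Mirror of the Octave ReducedModel: solve in the rank-k SVD basis."""
    C, B = U[:, :k], V[:, :k]
    R = C.T @ A @ B                       # k x k reduced matrix
    u = np.linalg.solve(R, C.T @ b)       # solve R u = C^T b
    rerr = np.linalg.norm(B @ u - x) / np.linalg.norm(x)
    Aerr = np.linalg.norm(A - (U[:, :k] * s[:k]) @ V[:, :k].T, 2)
    return rerr, Aerr

rng = np.random.default_rng(4)
m, n = 30, 8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x, *_ = np.linalg.lstsq(A, b, rcond=None)       # reference LS solution
U, s, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T

# at full rank k = n the reduced model reproduces the LS solution
rerr, Aerr = reduced_model(n, A, x, b, s, U, V)
print(rerr, Aerr)   # both vanish up to rounding error
```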