Track 1: 1,2,3,6. Track 2: 1-6.
-
Construct a convergence plot in logarithmic coordinates for the
following continued fraction approximation of $e$:
$$ e \cong 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{2}{3 + \cfrac{3}{4 + \ddots}}}} . \quad (1) $$
Identify the terms in the general expression of a continued fraction
$$ f = b_0 + \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 + \ddots}}} . $$
Compare with the additive approximation from a Maclaurin series,
$$ e = \sum_{k=0}^{\infty} \frac{1}{k!} . $$
Estimate the rate and order of convergence for both approximations.
Solution. Rewrite (1) as
$$ e - 2 \cong \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{2}{3 + \ddots}}} $$
and introduce the continued fraction sequence $F_k = b_k + a_k / F_{k+1}$,
evaluated backward from a starting value $F_{n+1}$,
with assumed limit $E_n = 2 + 1/F_1 \to e$.
Writing out the first few terms
leads to identification of the coefficients as $a_k = b_k = k$.
Define a function
to compute $F_k = b_k + a_k / F_{k+1}$ for $k = n, n-1, \ldots, 1$, starting from $F_{n+1} = n + 1$,
and return $E_n = 2 + 1/F_1$.
∴ |
using Printf
function f(n,dbg=false)
  b=n+1; F=1.0*b; E=2+1/F
  for k=n:-1:1
    b=k; a=k; F = b + a/F; E=2+1/F
    if dbg
      @printf("k=%d b_k=%d a_k=%d F_k=%f E_k=%f\n",k,b,a,F,E)
    end
  end
  return E
end; |
Test the function
k=5 b_k=5 a_k=5 F_k=5.833333 E_k=2.171429
k=4 b_k=4 a_k=4 F_k=4.685714 E_k=2.213415
k=3 b_k=3 a_k=3 F_k=3.640244 E_k=2.274707
k=2 b_k=2 a_k=2 F_k=2.549414 E_k=2.392247
k=1 b_k=1 a_k=1 F_k=1.392247 E_k=2.718263
Comparison with the builtin $\exp(1) = 2.718282$
shows 5-digit accuracy for $n = 5$.
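This accuracy claim can be checked with a self-contained re-evaluation of the truncated continued fraction, reusing the coefficients $a_k = b_k = k$ identified above (the name `cfrac_e` is introduced here for illustration):

```julia
# Truncated continued fraction for e with a_k = b_k = k,
# evaluated by backward recursion from F_{n+1} = n+1.
function cfrac_e(n)
    F = float(n + 1)          # starting value F_{n+1}
    for k = n:-1:1
        F = k + k / F         # F_k = b_k + a_k / F_{k+1}
    end
    return 2 + 1 / F          # E_n = 2 + 1/F_1
end

println(cfrac_e(5))           # ≈ 2.718263, matching the session output
println(abs(cfrac_e(5) - exp(1)))
```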
Define a function
to compute the Maclaurin partial sums $M_n = \sum_{k=0}^{n} 1/k!$.
∴ |
function g(n)
M=1.0; fact=1.0
for k=1:n
fact=k*fact
M=M+1/fact
end
return M
end |
Test the function
$$ M_5 = g(5) = 2.716667 . \quad (2) $$
Note that only 3 accurate digits are obtained for $n = 5$.
The convergence behavior of the two approximations
is shown in Fig. 1.
Figure 1. Convergence of continued fraction and additive approximations of $e$.
∴ |
using PyPlot
N=15; n=1:N; e=exp(1.0); x=log.(n); |
∴ |
errE=log.(abs.(f.(n) .- e)); errM=log.(abs.(g.(n) .- e)); |
∴ |
clf(); plot(x,errE,"-o",x,errM,"-x"); xlabel("log(n)"); ylabel("log(err)"); title("e Sequence convergence"); grid("on"); plot([0,1],[-10,-11],"b-"); plot([0,1],[-10,-12],"g-"); legend(["errE","errM","1st","2nd"]); |
∴ |
savefig(homedir()*"/courses/MATH661/images/H01Fig01.eps") |
The definition of order and rate
of convergence
is based upon the assumption of power-law decrease of the error,
$$ e_{n+1} \cong r\, e_n^p , $$
with rate $r$ and order $p$. In log-coordinates this assumption leads to a straight-line
representation
$$ \log e_{n+1} \cong \log r + p \log e_n . $$
The representation in Fig. 1
does not allow direct identification of an order of convergence.
However a plot of $\log e_{n+1}$ against $\log e_n$ is easily constructed (Fig. 2),
and shows $p \cong 1$ for both sequences,
i.e., linear convergence.
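Beyond visual estimation, the order $p$ can be obtained by a least-squares fit of the straight-line model. A sketch using the Maclaurin partial sums, recomputed here so the snippet is self-contained (the helper name `maclaurin` is introduced for illustration):

```julia
# Fit log(e_{n+1}) = log(r) + p*log(e_n) by least squares to estimate
# the order of convergence p of the Maclaurin partial sums of e.
maclaurin(n) = sum(1 / factorial(k) for k = 0:n)
errs = [abs(maclaurin(n) - exp(1)) for n = 1:12]
X = log.(errs[1:end-1]); Y = log.(errs[2:end])
A = [X ones(length(X))]
p, logr = A \ Y               # least-squares slope p and intercept log(r)
println(p)                    # close to 1: linear convergence
```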
Figure 2. Order-of-convergence plot for approximations of $e$.
Visual estimation indicates $p \cong 1$, linear sequence convergence.
∴ |
N=15; n=1:N; e=exp(1.0); x=log.(n); |
∴ |
errE=log.(abs.(f.(n) .- e)); errM=log.(abs.(g.(n) .- e)); |
∴ |
clf(); plot(errE[1:N-1],errE[2:N],"-o",errM[1:N-1],errM[2:N],"-x"); xlabel("log(e_n)"); ylabel("log(e_(n+1))"); title("e Sequence convergence"); grid("on"); plot([-25,-20],[-25,-20],"b-"); plot([-25,-20],[-25,-15],"g-"); legend(["errE","errM","1st","2nd"]); |
∴ |
savefig(homedir()*"/courses/MATH661/images/H01Fig02.eps") |
The PostScript backend does not support
transparency; partially transparent artists will be
rendered opaque.
The PostScript backend does not support
transparency; partially transparent artists will be
rendered opaque.
-
Apply convergence acceleration to both the above approximations of
$e$. Construct the convergence plot of
the accelerated sequences, and estimate the new rate and order of
convergence.
Solution. Since both sequences exhibit linear convergence the Aitken
formula
$$ \hat{x}_n = x_n - \frac{(x_n - x_{n-1})^2}{x_n - 2 x_{n-1} + x_{n-2}} $$
is applicable. The resulting accelerated convergence plot is shown
in Fig. 3. Convergence acceleration to second order is
observed for a small range of errors. For smaller errors, the floating
point system cannot separate the small differences appearing in the
Aitken correction.
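As a sanity check, the Aitken formula applied to a sequence with exactly geometric (linear) error, such as $x_n = 1 + 2^{-n}$, recovers the limit exactly; a minimal sketch:

```julia
# Aitken extrapolation x̂_n = x_n - (Δx_n)^2 / Δ²x_n applied to
# x_n = 1 + 2^(-n), whose error decreases geometrically (linear convergence).
x = [1 + 2.0^(-n) for n = 1:6]
Δx  = x[end] - x[end-1]
Δ2x = x[end] - 2x[end-1] + x[end-2]
accelerated = x[end] - Δx^2 / Δ2x
println(accelerated)          # 1.0, the limit of the sequence
```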
Figure 3. Convergence of Aitken acceleration of approximation sequences.
∴ |
function Aitken(x)
a=copy(x); N=length(x)
for n=3:N
Δx = x[n] - x[n-1]
Δ2x = x[n] - 2*x[n-1] + x[n-2]
a[n] = x[n] - Δx^2/Δ2x
end
return a
end; |
∴ |
N=6; n=1:N; E=f.(n); M=g.(n); aE=Aitken(E); aM=Aitken(M); |
∴ |
erraE=log.(abs.(aE .- e)); erraM=log.(abs.(aM .- e)); |
∴ |
clf(); plot(errE[1:N-1],errE[2:N],"-ob"); plot(errM[1:N-1],errM[2:N],"-xb"); plot(erraE[1:N-1],erraE[2:N],"-og"); plot(erraM[1:N-1],erraM[2:N],"-xg"); xlabel("log(e_n)"); ylabel("log(e_(n+1))"); title("e Sequence convergence"); grid("on"); plot([-10,-5],[-10,-5],"b-"); plot([-10,-5],[-10,0],"g-"); legend(["errE","errM","erraE","erraM","1st","2nd"]); |
∴ |
savefig(homedir()*"/courses/MATH661/images/H01Fig03.eps") |
-
Completely state the mathematical problem of taking the
$n^{\mathrm{th}}$ root of a positive real, $x > 0$.
Find the absolute and relative condition numbers.
Solution. First, assume $n \in \mathbb{N}$
fixed, leading to the problem $f : (0, \infty) \to \mathbb{R}$,
$f(x) = x^{1/n}$.
The absolute condition number is
$$ \hat{\kappa} = |f'(x)| = \frac{1}{n}\, x^{(1-n)/n} . $$
The condition number furnishes a bound for the change in the
solution upon a change in the input,
$$ |\delta f| \leqslant \hat{\kappa}\, |\delta x| . $$
Using the absolute value as the norm for scalars, the relative condition number
is
$$ \kappa = \frac{|f'(x)|\, |x|}{|f(x)|} = \frac{1}{n} . $$
Consider some simple cases: $\hat{\kappa} \to \infty$ as $x \to 0$, while $\kappa = 1/n$ for all $x > 0$.
In general, the conditioning of the
$n^{\mathrm{th}}$ root operation
indicates absolute ill-conditioning for $x \to 0$,
well conditioning for $x$ away from zero,
and an incorrect model for $x < 0$.
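The absolute condition number for fixed $n$ can be verified by finite differences; a sketch (the sample point $x = 2$, $n = 3$ and step $\delta$ are illustrative choices):

```julia
# Compare the analytic absolute condition number |f'(x)| = x^(1/n-1)/n
# of root(x) = x^(1/n) with the observed amplification of a small perturbation.
root(x, n) = x^(1 / n)
x, n, δ = 2.0, 3, 1e-7
analytic = x^(1/n - 1) / n
observed = abs(root(x + δ, n) - root(x, n)) / δ
println((analytic, observed))     # the two values agree closely
# The blow-up as x -> 0 is visible directly:
println((1e-6)^(1/3 - 1) / 3)     # large absolute condition number near zero
```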
Consider now what happens when $n$ is
also allowed to vary. The definition of the condition number cannot
be directly applied since the limit process is not defined for the discrete variable $n \in \mathbb{N}$. One
can however consider the problem $g : (0, \infty) \times (0, \infty) \to \mathbb{R}$,
$$ g(x, \nu) = x^{1/\nu}, $$
and notice that $f(x) = g(x, n)$
is a restriction of $g$. The condition
number of $f$ can be inferred from that
of $g$, based upon the partial derivatives
$$ \frac{\partial g}{\partial x} = \frac{1}{\nu}\, x^{(1-\nu)/\nu}, \qquad
\frac{\partial g}{\partial \nu} = -\frac{x^{1/\nu} \ln x}{\nu^2} . $$
Use the $\infty$-norm for the perturbation $\delta = (\delta x, \delta \nu)$,
$\|\delta\|_{\infty} = \max(|\delta x|, |\delta \nu|)$. When approaching zero
perturbation along paths above the
first bisector ($|\delta \nu| > |\delta x|$), the $\delta \nu$ contribution dominates and
$$ \hat{\kappa} = \frac{x^{1/\nu} |\ln x|}{\nu^2} . $$
Conversely, for paths below the
first bisector ($|\delta x| > |\delta \nu|$),
$$ \hat{\kappa} = \frac{1}{\nu}\, x^{(1-\nu)/\nu} . $$
In general, for some arbitrary path to approach zero,
$$ \hat{\kappa} = \max\left( \frac{1}{\nu\, x^{(\nu-1)/\nu}},\ \frac{x^{1/\nu} |\ln x|}{\nu^2} \right) . $$
When restricted to $\nu = n \in \mathbb{N}$, this is the condition number plotted in Fig. 4.
Figure 4. Conditioning of the $n^{\mathrm{th}}$ root with perturbations allowed in $n$.
∴ |
κ(x,n)=max(1/n/x^((n-1)/n),x^(1/n)*log(x)/n^2); clf(); |
∴ |
x=0.001:0.2:11; κ2=log.(κ.(x,2)); κ3=log.(κ.(x,3)); |
∴ |
κ4=log.(κ.(x,4)); plot(x,κ2,x,κ3,x,κ4); grid("on"); |
∴ |
xlabel("x"); ylabel("ln κ"); title("Condition of nth root"); |
∴ |
savefig(homedir()*"/courses/MATH661/images/H01Fig04.eps") |
-
Completely state the mathematical problem of solving the initial
value problem for an ordinary differential equation of first order.
Find the absolute and relative condition numbers.
Solution. The IVP
$$ y' = f(t, y), \quad y(t_0) = y_0, \quad t \in [t_0, T], $$
is formulated as the mathematical problem
$$ S : \mathcal{L} \times \mathbb{R} \to \mathcal{C}, \quad y = S(f, y_0), $$
that when evaluated for some slope function $f$
and initial condition $y_0$
gives the integral curve $y(t)$,
the solution of the differential equation. In the above
$\mathcal{C}$ is the space of continuous
functions and $\mathcal{L}$
is the space of Lipschitz continuous functions.
As in the $n^{\mathrm{th}}$-root
problem there are two inputs to $S$, and
it is useful to start with the case in which the slope function
$f$ is fixed and only $y_0$
varies. The Lyapunov exponent $\lambda$ is
defined as
$$ \lambda = \lim_{t \to \infty} \frac{1}{t} \ln \frac{\| \delta y(t) \|}{\| \delta y_0 \|} $$
to characterize this case, and the condition number is simply
$$ \hat{\kappa} = e^{\lambda (t - t_0)}, $$
i.e., the condition number and the Lyapunov exponent express the
same concept.
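For the model problem $y' = \lambda y$ the perturbation of the initial condition grows as $e^{\lambda t}$, so the Lyapunov exponent is $\lambda$ itself. A forward Euler sketch (the model problem, step count, and perturbation size are illustrative choices, not part of the original session):

```julia
# Two forward Euler solutions of y' = λ y from nearby initial conditions:
# the separation grows as e^(λ T), matching the condition number e^(λ T).
function euler_growth(λ, T, m, δ)
    h = T / m
    y1, y2 = 1.0, 1.0 + δ
    for _ = 1:m
        y1 += h * λ * y1          # forward Euler step for each solution
        y2 += h * λ * y2
    end
    return (y2 - y1) / δ          # observed amplification of δy0
end
λ, T = 0.5, 2.0
println((euler_growth(λ, T, 100_000, 1e-6), exp(λ * T)))
```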
The condition number for the case of variation of the slope function
requires a concept of taking a derivative with respect to a function
$f$, a generalization of the calculus
concept of taking a derivative with respect to a variable. This is
known as a functional derivative, and can be defined in the
Fréchet sense for normed spaces and in the Gâteaux sense for more
general topological vector spaces.
-
Completely state the mathematical problem of finding the roots of a
cubic polynomial. Find the absolute and relative condition numbers.
Solution. Consider the cubic
$$ p(x) = x^3 + a x^2 + b x + c $$
with roots $x_1, x_2, x_3$,
related to the polynomial coefficients by the Vieta relations
$$ a = -(x_1 + x_2 + x_3), \quad b = x_1 x_2 + x_1 x_3 + x_2 x_3, \quad c = -x_1 x_2 x_3 . $$
The mathematical problem of finding the roots of the cubic is
$$ F : \mathbb{R}^3 \to \mathbb{C}^3, \quad (x_1, x_2, x_3) = F(a, b, c) . $$
Consider the effect of small changes $(\delta a, \delta b, \delta c)$
upon the roots by taking differentials
$$ \begin{aligned}
\delta a &= -(\delta x_1 + \delta x_2 + \delta x_3), \\
\delta b &= (x_2 + x_3)\, \delta x_1 + (x_1 + x_3)\, \delta x_2 + (x_1 + x_2)\, \delta x_3, \\
\delta c &= -(x_2 x_3\, \delta x_1 + x_1 x_3\, \delta x_2 + x_1 x_2\, \delta x_3).
\end{aligned} $$
This is a linear system for $(\delta x_1, \delta x_2, \delta x_3)$
with matrix
$$ A = \begin{pmatrix} -1 & -1 & -1 \\ x_2 + x_3 & x_1 + x_3 & x_1 + x_2 \\ -x_2 x_3 & -x_1 x_3 & -x_1 x_2 \end{pmatrix} . $$
The condition number for $F$ is the
maximal amplification by the matrix $A^{-1}$,
$$ \hat{\kappa} = \| A^{-1} \| . $$
Consider some specific cases:
-
For repeated roots, e.g., $x_1 = x_2$, two columns of $A$ coincide, the matrix becomes singular, and the root-finding problem is ill-conditioned.
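The sensitivity predicted by the matrix of differentials is readily seen numerically; a sketch building the Jacobian of the Vieta map (the sample roots are hypothetical choices):

```julia
using LinearAlgebra

# Jacobian of the Vieta map (x1,x2,x3) -> (a,b,c) for
# p(x) = x^3 + a x^2 + b x + c. Two columns coincide when two roots
# coincide, so the matrix becomes singular and root finding is
# ill-conditioned near multiple roots.
vieta_jacobian(x1, x2, x3) = [ -1.0      -1.0      -1.0
                                x2 + x3   x1 + x3   x1 + x2
                               -x2 * x3  -x1 * x3  -x1 * x2 ]
println(cond(vieta_jacobian(1.0, 2.0, 3.0)))         # well separated roots
println(cond(vieta_jacobian(1.0, 1.0 + 1e-6, 3.0)))  # nearly repeated roots
```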
-
Numerically compare the approximation of $b(t) = t (\pi - t) (2 \pi - t)$, $t \in [0, 2\pi]$,
by linear combination of $\{\sin(k t)\}_{k=1}^{n}$ with that of linear combinations
of $\{e^{(k-1) t}\}_{k=1}^{n}$. Present a study of the
approximation error as the number of terms in the linear combination
increases. Estimate the order of convergence in both cases.
Solution. The approximation is stated as
$$ b(t) \cong \sum_{k=1}^{n} x_k\, \varphi_k(t), $$
with the basis functions chosen either as $\varphi_k(t) = \sin(k t)$
or $\varphi_k(t) = e^{(k-1) t}$.
The approximation error $e(t) = b(t) - \sum_{k=1}^{n} x_k \varphi_k(t)$ can be
measured in various norms, e.g., the 2-norm
$$ \| e \|_2 = \left( \int_0^{2 \pi} e(t)^2\, \mathrm{d} t \right)^{1/2}, $$
approximated here from the sampled values $e(t_j)$.
Based upon the code from Fig. 1 in
L04, define a function that returns the approximation error for a
given basis set $\{\varphi_k\}$,
number of terms $n$, and $m$ evaluation
points $t_j = j h$, $h = 2 \pi / m$.
∴ |
using LinearAlgebra
function err(m,n,a,dbg=false)
  h=2.0*π/m; j=1:m; t=h*j;
  A = a.(1,t)
  for k=2:n
    A = [A a.(k,t)]
  end
  if dbg
    return A
  end
  bt=t.*(π.-t).*(2*π.-t)
  x=A\bt; b=A*x;
  return norm(b-bt)*(2*pi)/m
end |
Define the two basis sets of
interest
∴ |
s(k,t) = sin(k*t); e(k,t)=exp((k-1)*t); |
Verify the err function constructs
the expected matrix. Note that the basis function is passed as an
argument.
The numerical values within the matrix
are hard to interpret for large $m$,
hence plot the columns in Fig. 5. The basis function
plots already indicate that the exponential family is likely to lead
to bad approximations since, with respect to the largest column $e^{(n-1) t}$,
all $e^{(k-1) t}$
for $k < n$
are negligibly small over most of the interval, hence $A$ is
likely to have only one numerically independent column vector.
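This near-dependence of the columns can be checked directly; a sketch with `m = 256`, `n = 5` as in the session above (for this moderate $n$ the ill-conditioning appears as an enormous condition number; for the larger $n$ used below, the numerical rank collapses):

```julia
using LinearAlgebra

# Build both basis matrices and compare their conditioning: the sine
# basis is well conditioned, while the exponential basis has nearly
# dependent columns and an enormous condition number.
m, n = 256, 5
h = 2π / m; t = h * (1:m)
As = [sin(k * tj) for tj in t, k in 1:n]
Ae = [exp((k - 1) * tj) for tj in t, k in 1:n]
println((rank(As), cond(As)))   # full rank, small condition number
println((rank(Ae), cond(Ae)))   # severely ill conditioned
```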
Figure 5. Left: Sine basis functions. Right: Exponential basis functions.
∴ |
m=256; n=5; As = err(m,n,s,true); Ae = err(m,n,e,true); |
∴ |
clf(); h=2.0*π/m; j=1:m; t=h*j; |
∴ |
for k=1:n
plot(t,As[:,k])
end |
∴ |
grid("on"); xlabel("t"); ylabel("sin(k*t)"); |
∴ |
title("Sine basis functions"); |
∴ |
cd(homedir()*"/courses/MATH661/images"); |
∴ |
savefig("H01Fig05a.eps"); |
∴ |
clf(); grid("on"); xlabel("t"); ylabel("exp(k*t)"); |
∴ |
title("Exp basis functions"); |
∴ |
for k=1:n
plot(t,Ae[:,k])
end |
∴ |
savefig("H01Fig05b.eps"); |
Test the err function for larger $m$
values (more samples).
The convergence behavior is shown in Fig. 6. As
expected, as the number of terms in the linear combination increases
the sine approximation converges, while that for the exponential
basis has a constant error (numerically, the rank of the matrix
remains 1).
Figure 6. Convergence of sine, exp basis approximation of $b(t)$ with increasing number of terms.
∴ |
m=1000; n=10:10:100; xn = log.(1.0*n); |
∴ |
errs = err.(m,n,s); lgerrs = log.(errs); |
∴ |
erre = err.(m,n,e); lgerre = log.(erre); |
∴ |
clf(); plot(xn,lgerrs,"ok",xn,lgerre,"rx"); |
∴ |
xlabel("log(n)"); ylabel("log(err)"); grid("on") |
∴ |
title("Convergence behavior for sine, exp basis"); |
∴ |
legend(["sine","exp"]); |
∴ |
cd(homedir()*"/courses/MATH661/images"); |
∴ |
savefig("H01Fig06.eps"); |