Posted: 08/23/21
Due: 08/30/21, 11:55PM
Construct a convergence plot in logarithmic coordinates for the Leibniz series approximation of $\pi/4$. Estimate the rate and order of convergence.
Solution. The convergence of the Leibniz series partial sum sequence
$$ L_n = \sum_{k=0}^{n} \frac{(-1)^k}{2k+1} \rightarrow \frac{\pi}{4} $$
is shown in Fig. 1, and can be visually estimated to converge with order $p \cong 1$, i.e., the error decreases as $\varepsilon_n \sim n^{-1}$ (slope $-1$ in logarithmic coordinates). The rate of convergence is estimated from the error data computed below.
∴ |
N=200; n=20:20:N; |
∴ |
function Leibniz(n)
  k = 0:n
  sum((-1).^k ./ (2*k .+ 1))
end;
∴ |
err=log.(abs.(Leibniz.(n) .- π/4)); |
∴ |
clf(); plot(log.(n),err,"-o"); xlabel("log(n)"); ylabel("log(err)"); title("Leibniz series convergence"); grid("on"); plot([3,4],[-5,-7],"-"); plot([3,4],[-5,-6],"-"); |
∴ |
savefig(homedir()*"/courses/MATH661/images/H01Fig01.eps") |
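The visual slope estimate can be cross-checked by a least-squares fit of the log-log error data. A self-contained sketch (the fit itself is an addition, not part of the assigned session; `Leibniz` is redefined here so the snippet runs on its own):

```julia
# Estimate the order of convergence by least-squares fitting
# log(err) ≈ p*log(n) + c; expect p ≈ -1, i.e., err ~ 1/n.
Leibniz(n) = sum((-1).^(0:n) ./ (2*(0:n) .+ 1))
n = 20:20:200
err = log.(abs.(Leibniz.(n) .- π/4))
A = [log.(n) ones(length(n))]   # design matrix for the linear fit
p, c = A \ err                  # least-squares solution
println("estimated slope p = ", p)
```

The fitted slope close to $-1$ confirms the first-order algebraic convergence read off Fig. 1.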
∴ |
Apply Aitken acceleration to the Leibniz partial sums sequence. Construct the convergence plot. Estimate the rate and order of convergence.
Solution. The Aitken-accelerated sequence has general term
$$ A_n = L_n - \frac{(\Delta L_n)^2}{\Delta^2 L_n}, \quad \Delta L_n = L_n - L_{n-1}, \quad \Delta^2 L_n = L_n - 2 L_{n-1} + L_{n-2}, $$
with convergence plot shown in Fig. 2, indicating faster than quadratic convergence.
∴ |
N=40; n=1:N; L=Leibniz.(n); A=copy(L); |
∴ |
for n=3:N
  ΔL  = L[n] - L[n-1]
  Δ2L = L[n] - 2*L[n-1] + L[n-2]
  A[n] = L[n] - ΔL^2/Δ2L
end
∴ |
errL=log.(abs.(L .- π/4)); errA=log.(abs.(A .- π/4)); |
∴ |
clf(); plot(log.(n),errL,"-o"); plot(log.(n),errA,"-x"); xlabel("log(n)"); ylabel("log(err)"); title("Aitken-accelerated Leibniz series convergence"); grid("on"); plot([2,4],[-6,-8],"-"); plot([2,4],[-6,-10],"-"); |
∴ |
savefig(homedir()*"/courses/MATH661/images/H01Fig02.eps") |
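The acceleration can also be checked numerically, without the plot, by comparing raw and accelerated errors at the last index. A self-contained sketch repeating the definitions above (an illustrative addition):

```julia
# Aitken Δ² acceleration of the Leibniz partial sums; the accelerated
# error should lie orders of magnitude below the original O(1/n) error.
Leibniz(n) = sum((-1).^(0:n) ./ (2*(0:n) .+ 1))
N = 40
L = Leibniz.(1:N)
A = copy(L)
for n = 3:N
  ΔL  = L[n] - L[n-1]
  Δ2L = L[n] - 2*L[n-1] + L[n-2]
  A[n] = L[n] - ΔL^2/Δ2L
end
errL = abs(L[N] - π/4)   # raw partial-sum error
errA = abs(A[N] - π/4)   # Aitken-accelerated error
println(errL, "  ", errA)
```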
∴ |
Construct a convergence plot for the Ramanujan series approximation of $1/\pi$. Estimate the rate and order of convergence.
Solution. The general term
$$ a_k = \frac{(4k)!}{(k!)^4} \, \frac{1103 + 26390 k}{396^{4k}} $$
in the Ramanujan series
$$ \frac{1}{\pi} = \frac{2 \sqrt{2}}{9801} \sum_{k=0}^{\infty} \frac{(4k)! \, (1103 + 26390 k)}{(k!)^4 \, 396^{4k}} $$
decreases rapidly. Applying Stirling's formula, $n! \cong \sqrt{2 \pi n} \, (n/e)^n$ (valid for large $n$), gives $(4k)!/(k!)^4 \cong 4^{4k}$ up to algebraic factors in $k$, and since $396^{4k} = 4^{4k} \, 99^{4k}$,
$$ a_k \cong \tilde{r}(k) = 99^{-4k} \, (1103 + 26390 k) . $$
Since $a_0 \cong 10^3$, terms smaller than $10^3 \varepsilon$ ($\varepsilon$ is machine epsilon) would not appreciably contribute to the sum
∴ |
ε=eps(Float64); kmax = log(10^3*ε)/(-4*log(99)) |
∴ |
ř(k)=99.0^(-4*k)*(1103+26390*k) |
∴ |
ř.(0:4) |
(1)
∴ |
The above computation indicates that 2 terms of the sum should already give an approximation within machine precision. Floating point evaluation of $a_k$ should avoid intermediate computations close to overflow or underflow. For example, the following evaluation is accurate in floating point arithmetic
∴ |
r(k) = (factorial(4*k)/(factorial(k))^4)*(1103.0+26390*k)/396.0^(4*k); |
∴ |
r.(0:4) |
(2)
∴ |
In contrast, a different order of operations leads to incorrect results due to overflow of the 64-bit integer intermediates, notably $396^{4k}$ for $k \geqslant 2$.
∴ |
rbad(k)=factorial(4*k)*(1103+26390*k)/(factorial(k))^4/396^(4*k); |
∴ |
rbad.(0:4) |
(3)
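A minimal sketch making the overflow explicit (the `rbig` helper is an illustrative addition using `BigInt` arithmetic, not part of the original session):

```julia
# The incorrect rbad values stem from 64-bit integer overflow:
# 396^(4k) exceeds typemax(Int64) already for k = 2 and wraps around.
println(typemax(Int64))   # 9223372036854775807
println(396^8)            # wrapped (negative) Int64 result
println(big(396)^8)       # exact value using BigInt
# The same operation order is correct in BigInt arithmetic:
rbig(k) = factorial(big(4k)) * (1103 + 26390k) / factorial(big(k))^4 / big(396)^(4k)
println(Float64(rbig(2)))
```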
∴ |
Since only three distinct partial sums can be formed in floating point arithmetic, it is improper in this case to define an order of convergence based on the concept of a limit.
∴ |
R(n) = 2*sqrt(2.)/9801*sum(r.(0:n)); |
∴ |
R.(0:2) |
(4)
∴ |
(R.(0:2) .- 1/π)*π |
(5)
∴ |
eps(Float64) |
∴ |
From the above computations, machine precision is achieved at $n = 1$ (recall that relative errors below machine epsilon $\varepsilon$ are indistinguishable from zero).
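The roughly eight decimal digits contributed per term (consecutive terms shrink by $\cong 99^{-4} \cong 10^{-8}$) can be verified in extended precision. A sketch recomputing the partial sums in 256-bit `BigFloat` arithmetic (an addition to the original session):

```julia
# Check digits-per-term of the Ramanujan series in BigFloat arithmetic:
# each successive partial sum should gain ≈ 8 correct decimal digits.
setprecision(BigFloat, 256)
r(k) = factorial(big(4k)) / factorial(big(k))^4 * (1103 + 26390k) / big(396)^(4k)
R(n) = 2*sqrt(big(2))/9801 * sum(r(k) for k in 0:n)
errs = [Float64(abs(R(n) - 1/big(π))) for n in 0:4]
println(errs)
```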
Construct a convergence plot for the continued fraction approximation of $\pi$. Estimate the rate and order of convergence.
Solution. The continued fraction
$$ \pi + 3 = 6 + \cfrac{1^2}{6 + \cfrac{3^2}{6 + \cfrac{5^2}{6 + \ddots}}}, $$
with partial numerators $a_k = (2k-1)^2$ and partial denominators $b_k = 6$, can be evaluated through multiple techniques:
Iteration. The sequence of fractions is explicitly unrolled into a loop.
∴ |
function πIter(n)
  a(k) = (2*k-1)^2; b(k) = 6.0
  F = b(n)
  for k = n:-1:1
    F = b(k) + a(k)/F
  end
  return F
end;
∴ |
πIter.(0:5) |
(6)
∴ |
Recursion. The sequence of fractions is evaluated by a self-referential function. Note in the implementation the use of multi-method functions:
πRecur(k,n) is the recursive function with two arguments
πRecur(n) is a single-argument function that initializes the recursion
∴ |
function πRecur(k,n)
  a(k) = (2*k-1)^2; b(k) = 6.0
  if (k==n)
    return b(n)
  else
    return b(k+1) + a(k+1)/πRecur(k+1,n)
  end
end;
∴ |
πRecur(n) = πRecur(0,n); |
∴ |
πRecur.(0:5) |
(7)
∴ |
Composition. The continued fraction is evaluated through composition of functions
∴ |
function t(k,z)
  a(k) = (2*k-1)^2; b(k) = 6.0
  if (k==0)
    return b(0) + z
  else
    return a(k)/(b(k)+z)
  end
end;
∴ |
function T(n,z)
  if (n==0)
    return t(0,z)
  else
    return T(n-1,t(n,z))
  end
end;
∴ |
πComp(n) = T(n,0); |
∴ |
n=5; [πIter(n) πRecur(n) πComp(n)] |
(8)
∴ |
From the convergence study in Fig. 3, the continued fraction order of approximation is $p \cong 3$. This is often encountered: continued fraction approximations frequently converge at higher than second order.
∴ |
N=200; n=20:20:N; |
∴ |
err=log.(abs.(πIter.(n) .- (π+3))); |
∴ |
clf(); plot(log.(n),err,"-o"); xlabel("log(n)"); ylabel("log(err)"); title("Continued fraction convergence"); grid("on"); plot([3,4],[-12,-13],"-"); plot([3,4],[-12,-14],"-"); plot([3,4],[-12,-15],"-"); |
∴ |
savefig(homedir()*"/courses/MATH661/images/H01Fig03.eps") |
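The order read off Fig. 3 can be cross-checked with a least-squares fit of the log-log error data. A self-contained sketch (the fit is an addition; `πIter` is repeated from above):

```julia
# Least-squares estimate of the continued fraction convergence order:
# fit log(err) ≈ p*log(n) + c; the guide lines in Fig. 3 suggest p ≈ -3.
function πIter(n)
  a(k) = (2*k-1)^2; b(k) = 6.0
  F = b(n)
  for k = n:-1:1
    F = b(k) + a(k)/F
  end
  return F
end
n = 20:20:200
err = log.(abs.(πIter.(n) .- (π + 3)))
A = [log.(n) ones(length(n))]
p, c = A \ err
println("estimated slope p = ", p)
```

A slope steeper than $-2$ confirms the higher than second-order convergence noted above.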
∴ |
Completely state the mathematical problem of computing the absolute value $y = |x|$.
Find the absolute and relative condition numbers.
Solution. The mathematical problem is
$$ f: \mathbb{R} \rightarrow \mathbb{R}, \quad y = f(x) = |x| . $$
From the overview of mathematical problem conditioning [1], the absolute condition number introduced in [2] is given by
$$ \hat{\kappa} = |f'(x)| = 1, \quad x \neq 0, $$
where $f$ is differentiable. The relative condition number is also equal to one,
$$ \kappa = \frac{|f'(x)| \, |x|}{|f(x)|} = \frac{|x|}{|x|} = 1, \quad x \neq 0 . $$
The only remaining point is behavior at $x = 0$. By definition
$$ \hat{\kappa} = \lim_{\delta \rightarrow 0} \frac{|f(0+\delta) - f(0)|}{|\delta|} = \lim_{\delta \rightarrow 0} \frac{|\delta|}{|\delta|} = 1 . $$
The relative condition number is
$$ \kappa = \lim_{x \rightarrow 0} \frac{|f'(x)| \, |x|}{|f(x)|} = 1 . $$
Taking the absolute value is a well-conditioned mathematical problem.
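A finite-difference sketch confirming $\hat{\kappa} = 1$ numerically (an illustrative addition; `κ` is a hypothetical helper name for the difference quotient):

```julia
# Finite-difference check of the conditioning of f(x) = |x|: the ratio
# |f(x+δ) - f(x)|/|δ| stays at 1 for positive, negative, and zero x.
f(x) = abs(x)
κ(x, δ) = abs(f(x + δ) - f(x)) / abs(δ)
println(κ(2.0, 1e-8), "  ", κ(-3.0, 1e-8), "  ", κ(0.0, 1e-8))
```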
Completely state the mathematical problem of computing a difference.
Find the absolute and relative condition numbers.
Solution. The mathematical problem is
$$ f: \mathbb{R}^2 \rightarrow \mathbb{R}, \quad y = f(\boldsymbol{x}) = x_1 - x_2, $$
with absolute condition number
$$ \hat{\kappa} = \| \boldsymbol{J}(\boldsymbol{x}) \|, \quad \boldsymbol{J} = \begin{bmatrix} 1 & -1 \end{bmatrix} . $$
Choosing the 1-norm,
$$ \hat{\kappa} = \| \boldsymbol{J} \|_1 = \max_j \sum_i |J_{ij}| = 1 . $$
The relative condition number is
$$ \kappa = \frac{\| \boldsymbol{J} \|_1 \, \| \boldsymbol{x} \|_1}{|f(\boldsymbol{x})|} = \frac{|x_1| + |x_2|}{|x_1 - x_2|} . $$
Note that the relative condition number becomes arbitrarily large close to the bisector $x_1 = x_2$. Interpret the above to signify:
When $x_1 \cong x_2$, $y = x_1 - x_2 \cong 0$, and an absolute condition number of $\hat{\kappa} = 1$ indicates that errors in the input are not amplified;
However, relative to the expected result $y \cong 0$, relative errors are not bounded.
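The unbounded relative conditioning is visible directly in floating point arithmetic as catastrophic cancellation. A minimal sketch with nearly equal inputs (the particular values are chosen for illustration):

```julia
# Near the bisector x₁ ≈ x₂ the absolute error of subtraction stays at
# rounding level, while the relative error is amplified enormously.
x1 = 1.0 + 1.0e-12      # nearly equal inputs
x2 = 1.0
exact = 1.0e-12         # intended exact difference
y = x1 - x2             # floating point difference
abserr = abs(y - exact)
relerr = abserr / exact
println("absolute error = ", abserr, ",  relative error = ", relerr)
```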