Exercise. Consider the sequence $a_n = 1/\ln n$, $n \geq 2$, which converges to $0$.
Can you estimate the rate and order of convergence?
Solution. Since $\lim_{n \to \infty} a_n = 0$, note that the distance to the limit is simply $d_n = |a_n - 0| = 1/\ln n$ for $n \geq 2$. For large $n$, the convergence behavior is given by $d_{n+1} \cong s\, d_n^q$, leading to $\log_{10} d_{n+1} \cong \log_{10} s + q \log_{10} d_n$ upon taking decimal logarithms. In practical computation, $n \in \{M, \ldots, N\}$ is chosen between some upper bound $N$ dictated by allowed computational effort and a lower bound $M$ needed to start observing convergence.
The computed terms allow estimation of $s$ and $q$ by linear regression (least squares fitting) expressed as
\[ A x \cong y, \quad A = \begin{bmatrix} 1 & \log_{10} d_M \\ \vdots & \vdots \\ 1 & \log_{10} d_{N-1} \end{bmatrix}, \quad x = \begin{bmatrix} \log_{10} s \\ q \end{bmatrix}, \quad y = \begin{bmatrix} \log_{10} d_{M+1} \\ \vdots \\ \log_{10} d_N \end{bmatrix} . \]
The Julia instruction `x=A\y` carries out the linear regression by least squares (also a feature of Matlab/Octave).
∴ M=20; N=100; n=M:N; a=1 ./ log.(n); d=a; lgd=log10.(d);
∴ m=N-M; y=lgd[2:m+1]; A=ones(m,2); A[:,2]=lgd[1:m];
∴ x=A\y; s=10^x[1]; q=x[2]; [s q]
(1)
From the above, obtain order of convergence $q \cong 1$ (sublinear convergence), and rate of convergence $s \cong 1$. The convergence plot highlights this sublinear convergence behavior.
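As a quick cross-check (our addition, continuing the session above), the successive ratios $d_{n+1}/d_n = \ln n / \ln(n+1)$ can be computed directly; their approach to $1$ from below is the signature of sublinear convergence:

∴ ratio = log.(n[1:end-1]) ./ log.(n[2:end]); [minimum(ratio) maximum(ratio)]

Both values lie just below $1$, consistent with the fitted $s \cong 1$, $q \cong 1$.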
∴ x=n; y=d; clf(); semilogy(x,y,"."); grid("on");
∴ xlabel(L"$n$"); ylabel(L"$d_n$"); title("Convergence plot");
∴ savefig(homedir()*"/courses/MATH661/images/E02Fig01.eps");
Exercise. Consider the sequence $a_n = e^{-n}$, which converges to $0$.
Can you estimate the rate and order of convergence?
Solution. Again, $\lim_{n \to \infty} a_n = 0$, hence $d_n = a_n = e^{-n}$ for $n \in \mathbb{N}$. Repeat the above computation for the new sequence, choosing lower values for $M, N$ since $e^{-n}$ is expected to converge to zero faster than $1/\ln n$.
∴ M=2; N=20; n=M:N; a=exp.(-n); d=a; lgd=log10.(d);
∴ m=N-M; y=lgd[2:m+1]; A=ones(m,2); A[:,2]=lgd[1:m];
∴ x=A\y; s=10^x[1]; q=x[2]; [s q]
(2)
Linear convergence is obtained, $q \cong 1$, and the rate of convergence is $s = e^{-1} \cong 0.368$, exact here since $d_{n+1} = e^{-1} d_n$. The order of convergence is only slightly better than that of the previous sequence; the faster convergence arises from the marked difference in rate of convergence.
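Here the fit can be compared against an exact value (our addition, continuing the session): since $d_{n+1}/d_n = e^{-(n+1)}/e^{-n} = e^{-1}$ for every $n$, the computed ratios reproduce the rate to roundoff:

∴ maximum(abs.(d[2:end] ./ d[1:end-1] .- exp(-1)))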
∴ x=n; y=d; clf(); semilogy(x,y,"."); grid("on");
∴ xlabel(L"$n$"); ylabel(L"$d_n$"); title("Convergence plot");
∴ savefig(homedir()*"/courses/MATH661/images/E02Fig02.eps");
Exercise. Consider the sequence $x_n = 1/n$, which converges to $0$. Can you estimate the rate and order of convergence?
Solution. Proceed as above with $d_n = x_n = 1/n$, for $n \geq 1$. Repeat the above computations, changing notation for the new problem (the sequence is now named `x`, so the regression solution is stored in `b`).
∴ M=20; N=100; n=M:N; x=1 ./ n; d=x; lgd=log10.(d);
∴ m=N-M; y=lgd[2:m+1]; A=ones(m,2); A[:,2]=lgd[1:m];
∴ b=A\y; s=10^b[1]; q=b[2]; [s q]
(3)
Slightly sublinear convergence is obtained, $q \cong 1$, and the rate of convergence $s \cong 1$, consistent with $d_{n+1}/d_n = n/(n+1) \to 1$.
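To make the practical difference between these slowly converging sequences concrete (an illustrative aside, not part of the original solution): reaching $d_n < 10^{-k}$ with $d_n = 1/n$ requires $n > 10^k$, while $d_n = 1/\ln n$ requires $n > \exp(10^k)$, i.e., $\log_{10} n > 10^k / \ln 10$:

∴ k=1:4; [k (10.0 .^ k)./log(10)]  # log10 of n needed for k digits: 1/n vs 1/log(n)

Each additional digit costs tenfold more terms for $1/n$, but raises the required number of terms to the tenth power for $1/\ln n$.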
∴ x=n; y=d; clf(); semilogy(x,y,"."); grid("on");
∴ xlabel(L"$n$"); ylabel(L"$d_n$"); title("Convergence plot");
∴ savefig(homedir()*"/courses/MATH661/images/E02Fig03.eps");
Exercise. Assuming the sequence $(x_n)$ converges linearly to $x$, derive an extrapolation formula estimating $x$ from three successive terms, and investigate its conditioning.
Solution. Assuming that $x_n \to x$ linearly, Aitken acceleration considers two successive steps
\[ x - x_{n+1} \cong s (x - x_n), \qquad x - x_{n+2} \cong s (x - x_{n+1}) . \]
Eliminating the rate $s$ between the two relations gives $(x - x_{n+1})^2 \cong (x - x_n)(x - x_{n+2})$, and solving for $x$,
\[ x \cong \frac{x_{n+2} x_n - x_{n+1}^2}{x_{n+2} - 2 x_{n+1} + x_n} . \tag{4} \]
The above states that from data $(x_n, x_{n+1}, x_{n+2})$, i.e., three successive sequence terms, an estimate of the limit as $n \to \infty$ is obtained. Since this predicts the future, the estimate is called an extrapolation. Good extrapolations are notoriously hard to obtain, so (4) should not be assumed to be a precise estimate of the limit, but rather the object of further numerical experimentation and mathematical analysis. The approximation (4) can be rewritten as
\[ x \cong x_{n+2} - \frac{(x_{n+2} - x_{n+1})^2}{(x_{n+2} - x_{n+1}) - (x_{n+1} - x_n)} . \]
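As a numerical illustration (a minimal sketch, our addition; the helper name `aitken` is not from the original), applying (4) to three successive terms of $x_n = 1/n$ shows only a modest gain: for this sequence the extrapolated value works out to exactly $1/(2(n+1))$, roughly halving the error rather than eliminating it, in line with the caution above.

∴ aitken(u,v,w) = w - (w-v)^2/((w-v)-(v-u));  # extrapolation (4) from three successive terms
∴ xs = 1 ./ (10:12); [xs[3] aitken(xs...)]  # raw term 1/12 vs extrapolated estimate 1/22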
The difficulties that arise in extrapolation can be highlighted by computation of the condition number of the problem, namely evaluation of the function
\[ f(u, v, w) = \frac{u w - v^2}{u - 2 v + w}, \qquad x \cong f(x_n, x_{n+1}, x_{n+2}) . \]
The appearance of a difference of nearly equal quantities in the denominator hints at ill-conditioning, and since $f$ is differentiable the condition number can be defined as the norm of the gradient $\| \nabla f \|$. Consider one of the gradient components,
\[ \frac{\partial f}{\partial v} = \frac{2 (v - u)(v - w)}{(u - 2 v + w)^2}, \]
with shifted indices $u = x_{n-1}$, $v = x_n$, $w = x_{n+1}$; introducing the differences $b_n = x_n - x_{n-1}$, $c_n = x_{n+1} - x_n$ this becomes
\[ \frac{\partial f}{\partial x_n} = \frac{-2\, b_n c_n}{(c_n - b_n)^2}, \]
and, in the spirit of experimentation, evaluate this component for the sequence $x_n = 1/n$.
∴ M=20; N=100; n=M:N; x=1 ./ n;
∴ b=x[2:end-1]-x[1:end-2]; c=x[3:end]-x[2:end-1];  # b[i]=x_n-x_(n-1), c[i]=x_(n+1)-x_n at interior points
∴ d=abs.(2*b.*c./(c-b).^2); lgd=log10.(d);  # magnitude of the gradient component
∴ m=length(lgd)-1; y=lgd[2:m+1]; A=ones(m,2); A[:,2]=lgd[1:m];
∴ r=A\y; s=10^r[1]; q=r[2]; [s q]  # result in r, not x, to keep the sequence intact
(5)
The regression now tracks a growing quantity: substituting $x_n = 1/n$ gives the exact component $\partial f / \partial x_n = -(n^2 - 1)/2$, which increases without bound as $n$ grows. The extrapolation problem therefore becomes increasingly ill-conditioned, confirming the warning raised by the difference of nearly equal quantities in the denominator. The plot below highlights this growth.
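As a check (our addition, continuing the session), the computed component can be compared with the closed form $-(n^2-1)/2$, obtained by substituting $b_n = -1/(n(n-1))$ and $c_n = -1/(n(n+1))$ into the gradient expression:

∴ maximum(abs.(d ./ ((n[2:end-1].^2 .- 1)/2) .- 1))  # relative error vs (n^2-1)/2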
∴ x=n[2:end-1]; y=d; clf(); semilogy(x,y,"."); grid("on");  # interior n values match the length of d
∴ xlabel(L"$n$"); ylabel(L"$d_n$"); title("Gradient component growth");
∴ savefig(homedir()*"/courses/MATH661/images/E02Fig04.eps");  # new file name, so E02Fig03 is not overwritten