Lecture 20: Quadrature
1. Numerical integration
Numerical integration proceeds along the same lines as numerical differentiation, with a different operator
\[
I(f) = \int_a^b f(x)\, \mathrm{d} x,
\]
whose value is unchanged when $f$ is modified on a set of measure zero, e.g.,
\[
\int_{[a,b]} f(x)\, \mathrm{d} x = \int_{(a,b)} f(x)\, \mathrm{d} x
\]
for a function continuous at all points in $(a, b)$. It is often useful to explicitly identify a weight function $w(x) \geqslant 0$ that can attribute higher significance to subsets of $[a, b]$,
\[
I(f) = \int_a^b w(x) f(x)\, \mathrm{d} x .
\]
In the integration case, the approximation basis choice can be combined with decomposition of the domain into subintervals such that
\[
I(f) = \sum_{i=1}^n \int_{x_{i-1}}^{x_i} w(x) f(x)\, \mathrm{d} x,
\]
with $a = x_0 < x_1 < \cdots < x_n = b$.
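As a quick numerical illustration of the decomposition identity above, the following sketch (assuming, for concreteness, $w(x) = 1$ and the illustrative integrand $f(x) = e^x$ on $[0, 1]$, evaluated with SciPy's general-purpose quad routine) checks that the integral over the whole interval equals the sum of the subinterval contributions.

    # Minimal sketch: the integral over [a, b] equals the sum over a partition of
    # subintervals (w(x) = 1 and f(x) = exp(x) are illustrative choices).
    import numpy as np
    from scipy.integrate import quad

    f = np.exp
    a, b = 0.0, 1.0
    breakpoints = np.linspace(a, b, 5)                    # a = x_0 < ... < x_4 = b
    whole, _ = quad(f, a, b)
    pieces = sum(quad(f, breakpoints[i], breakpoints[i + 1])[0] for i in range(4))
    print(abs(whole - pieces))                            # ~1e-16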
Monomial basis.
As for numerical differentiation, an integration operator can be obtained from the polynomial interpolant of $f$ based upon data $\mathcal{D} = \{ (x_i, y_i = f(x_i)),\ i = 0, \ldots, n \}$,
\[
p_n(x) = c_0 + c_1 x + \cdots + c_n x^n .
\]
Integration of $f$ for $x \in [a, b]$ is approximated as
\[
\int_a^b f(x)\, \mathrm{d} x \approx \int_a^b p_n(x)\, \mathrm{d} x = \sum_{k=0}^n \frac{c_k}{k + 1} (b^{k+1} - a^{k+1}) .
\]
As for numerical differentiation, the computational effort in the above formulation can be reduced through alternative basis choices.
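A minimal sketch of the monomial-basis formula above, assuming illustrative data sampled from $f(x) = e^x$ at three hypothetical nodes: the interpolant coefficients are found from a Vandermonde system and the interpolant is then integrated term by term.

    # Sketch: integrate the monomial-form interpolant p_n(x) = c_0 + ... + c_n x^n
    # term by term; the nodes and f(x) = exp(x) are illustrative choices.
    import numpy as np

    def integrate_monomial_interpolant(xs, ys, a, b):
        n = len(xs) - 1
        V = np.vander(xs, increasing=True)     # Vandermonde system V c = y
        c = np.linalg.solve(V, ys)
        k = np.arange(n + 1)
        return np.sum(c / (k + 1) * (b**(k + 1) - a**(k + 1)))

    xs = np.array([0.0, 0.5, 1.0])
    ys = np.exp(xs)
    print(integrate_monomial_interpolant(xs, ys, 0.0, 1.0))   # ~1.7189, vs e - 1 ~ 1.7183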
Lagrange basis.
Assume $y_i = f(x_i)$ in the data set $\mathcal{D} = \{ (x_i, y_i),\ i = 0, \ldots, n \}$, and construct an interpolant of degree $d$ in each subinterval $[x_i, x_{i+d}]$. As highlighted by the approximation of the Runge function, the degree $d$ should be small, typically $d \in \{ 1, 2 \}$.
- Trapezoid formula
For $d = 1$, a linear approximant is defined over each subinterval $[x_{i-1}, x_i]$, stated in Lagrange form as
\[
p_1(x) = y_{i-1} \frac{x - x_i}{x_{i-1} - x_i} + y_i \frac{x - x_{i-1}}{x_i - x_{i-1}} .
\]
The resulting integral approximation is
\[
\int_{x_{i-1}}^{x_i} f(x)\, \mathrm{d} x \approx \int_{x_{i-1}}^{x_i} p_1(x)\, \mathrm{d} x = \frac{h_i}{2} (y_{i-1} + y_i), \quad h_i = x_i - x_{i-1} .
\]
The approximation error results from the known polynomial interpolation error formula
\[
f(x) - p_1(x) = \frac{f''(\xi_i(x))}{2!} (x - x_{i-1}) (x - x_i),
\]
that gives
\[
\int_{x_{i-1}}^{x_i} [f(x) - p_1(x)]\, \mathrm{d} x = - \frac{f''(\xi_i)}{12} h_i^3,
\]
since $(x - x_{i-1})(x - x_i)$ does not change sign over the subinterval. Using an equidistant partition $x_i = a + i h$, $h = (b - a) / n$, $i = 0, \ldots, n$, over the entire interval $[a, b]$ gives the composite trapezoid formula
\[
\int_a^b f(x)\, \mathrm{d} x \approx \frac{h}{2} (y_0 + 2 y_1 + \cdots + 2 y_{n-1} + y_n) .
\]
The overall approximation error is bounded by the sum of the errors over each subinterval,
\[
\left| \int_a^b f(x)\, \mathrm{d} x - \frac{h}{2} (y_0 + 2 y_1 + \cdots + 2 y_{n-1} + y_n) \right| \leqslant \sum_{i=1}^n \frac{|f''(\xi_i)|}{12} h^3 \leqslant \frac{M_2 (b - a)}{12} h^2, \quad M_2 = \max_{x \in [a, b]} |f''(x)|,
\]
and the trapezoid formula exhibits quadratic convergence, $\mathcal{O}(h^2)$.
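A short sketch of the composite trapezoid formula, assuming the illustrative test integrand $f(x) = e^x$ on $[0, 1]$; halving $h$ should reduce the error by roughly a factor of four, consistent with the $\mathcal{O}(h^2)$ bound.

    # Sketch: composite trapezoid rule; f(x) = exp(x) on [0, 1] is an illustrative test case.
    import numpy as np

    def trapezoid(f, a, b, n):
        x = np.linspace(a, b, n + 1)            # equidistant partition, h = (b - a)/n
        y = f(x)
        h = (b - a) / n
        return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

    exact = np.e - 1.0                          # exact value of the integral of exp over [0, 1]
    for n in [8, 16, 32, 64]:
        print(n, abs(trapezoid(np.exp, 0.0, 1.0, n) - exact))   # error drops ~4x per halving of h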
- Simpson formula
A more accurate, quadratic approximant is obtained for $d = 2$ using sample points $x_{i-1}, x_i, x_{i+1}$,
\[
p_2(x) = y_{i-1} \frac{(x - x_i)(x - x_{i+1})}{(x_{i-1} - x_i)(x_{i-1} - x_{i+1})} + y_i \frac{(x - x_{i-1})(x - x_{i+1})}{(x_i - x_{i-1})(x_i - x_{i+1})} + y_{i+1} \frac{(x - x_{i-1})(x - x_i)}{(x_{i+1} - x_{i-1})(x_{i+1} - x_i)} .
\]
Assuming equidistant sample points, $x_i - x_{i-1} = x_{i+1} - x_i = h$, over a subinterval $[x_{i-1}, x_{i+1}]$ the Simpson approximation is
\[
\int_{x_{i-1}}^{x_{i+1}} f(x)\, \mathrm{d} x \approx \int_{x_{i-1}}^{x_{i+1}} p_2(x)\, \mathrm{d} x = \frac{h}{3} (y_{i-1} + 4 y_i + y_{i+1}) .
\]
Integration of the interpolation error
\[
f(x) - p_2(x) = \frac{f'''(\xi(x))}{3!} (x - x_{i-1}) (x - x_i) (x - x_{i+1})
\]
leads to calculation of
\[
\int_{x_{i-1}}^{x_{i+1}} (x - x_{i-1}) (x - x_i) (x - x_{i+1})\, \mathrm{d} x = 0 . \qquad (1)
\]
This null result is a feature of even-degree interpolants on equidistant sample points.
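The null result (1) can be verified symbolically; the sketch below (using SymPy, with $c$ standing in for the midpoint $x_i$) integrates the cubic nodal polynomial over the symmetric subinterval.

    # Sketch: symbolic check of (1) for equidistant nodes x_{i-1} = c - h, x_i = c, x_{i+1} = c + h.
    from sympy import symbols, integrate, simplify

    x, c, h = symbols('x c h')
    nodal = (x - (c - h)) * (x - c) * (x - (c + h))
    print(simplify(integrate(nodal, (x, c - h, c + h))))   # -> 0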
Note that the interpolation error formula contains an evaluation of $f'''$ at some point $\xi(x)$ that depends on $x$, so the integral
\[
\int_{x_{i-1}}^{x_{i+1}} \frac{f'''(\xi(x))}{3!} (x - x_{i-1}) (x - x_i) (x - x_{i+1})\, \mathrm{d} x
\]
is not necessarily equal to zero. To obtain an error estimate, rewrite the interpolating polynomial in Newton form
\[
p_2(x) = f[x_{i-1}] + f[x_{i-1}, x_i] (x - x_{i-1}) + f[x_{i-1}, x_i, x_{i+1}] (x - x_{i-1}) (x - x_i) .
\]
The next higher degree interpolant would be
\[
p_3(x) = p_2(x) + f[x_{i-1}, x_i, x_{i+1}, t] (x - x_{i-1}) (x - x_i) (x - x_{i+1}),
\]
and (1) implies that the integral of $p_3$ is equal to that of $p_2$,
\[
\int_{x_{i-1}}^{x_{i+1}} p_3(x)\, \mathrm{d} x = \int_{x_{i-1}}^{x_{i+1}} p_2(x)\, \mathrm{d} x,
\]
hence the Simpson formula based on a quadratic interpolation is as accurate as that based on a cubic interpolation. The error can now be estimated using the cubic interpolation error
\[
f(x) - p_3(x) = \frac{f^{(4)}(\zeta(x))}{4!} (x - x_{i-1}) (x - x_i) (x - t) (x - x_{i+1}),
\]
where $t$ is some additional interpolation point within $[x_{i-1}, x_{i+1}]$. It is convenient to choose $t = x_i$, which corresponds to a Hermite interpolation condition at the midpoint. This is simply for the purpose of obtaining an error estimate, and does not affect the Simpson estimate of the integral. Carrying out the calculations gives
\[
\int_{x_{i-1}}^{x_{i+1}} [f(x) - p_3(x)]\, \mathrm{d} x = - \frac{f^{(4)}(\zeta_i)}{90} h^5 .
\]
When applied to the overall interval $[a, b]$ with $h = (b - a) / 2$, the Simpson formula is stated as
\[
\int_a^b f(x)\, \mathrm{d} x \approx \frac{h}{3} \left[ f(a) + 4 f \left( \frac{a + b}{2} \right) + f(b) \right],
\]
with error
\[
- \frac{f^{(4)}(\zeta)}{90} h^5 = - \frac{(b - a)^5}{2880} f^{(4)}(\zeta) .
\]
The composite Simpson formula is obtained over an equidistant partition $x_i = a + i h$, $h = (b - a) / n$, with an even number $n$ of subintervals,
\[
\int_a^b f(x)\, \mathrm{d} x \approx \frac{h}{3} (y_0 + 4 y_1 + 2 y_2 + 4 y_3 + \cdots + 2 y_{n-2} + 4 y_{n-1} + y_n),
\]
with an overall error bound
\[
\frac{M_4 (b - a)}{180} h^4, \quad M_4 = \max_{x \in [a, b]} |f^{(4)}(x)|,
\]
that shows fourth-order convergence, $\mathcal{O}(h^4)$.
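An analogous sketch of the composite Simpson formula, again assuming the illustrative integrand $f(x) = e^x$ on $[0, 1]$ and an even number of subintervals; halving $h$ should reduce the error by roughly a factor of sixteen.

    # Sketch: composite Simpson rule (n even); f(x) = exp(x) on [0, 1] is an illustrative test case.
    import numpy as np

    def simpson(f, a, b, n):
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

    exact = np.e - 1.0
    for n in [8, 16, 32, 64]:
        print(n, abs(simpson(np.exp, 0.0, 1.0, n) - exact))   # error drops ~16x per halving of h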
Moment method.
As in numerical differentiation, an alternative derivation is available. Numerical integration is stated as a quadrature formula
\[
\int_a^b w(x) f(x)\, \mathrm{d} x \approx \sum_{i=0}^n w_i f(x_i),
\]
using data $\mathcal{D} = \{ (x_i, y_i = f(x_i)),\ i = 0, \ldots, n \}$. Once the sampling points $x_i$ are chosen, there remain $n + 1$ weights $w_0, w_1, \ldots, w_n$ to be determined. One approach is to enforce exact quadrature for the first $n + 1$ monomials,
\[
\sum_{i=0}^n w_i x_i^k = \int_a^b w(x) x^k\, \mathrm{d} x, \quad k = 0, 1, \ldots, n .
\]
A Vandermonde system results whose solution furnishes the appropriate weights. Since the Vandermonde matrix is ill-conditioned, a solution is sought in exact arithmetic, using rational numbers as opposed to floating point approximations.
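A minimal sketch of the moment method in exact rational arithmetic (using SymPy; the nodes $0, 1/2, 1$ on $[0, 1]$ with $w(x) = 1$ are illustrative choices), which recovers the Simpson weights.

    # Sketch: method of moments in exact rational arithmetic; w(x) = 1 and nodes
    # 0, 1/2, 1 on [0, 1] are illustrative, recovering the Simpson weights 1/6, 2/3, 1/6.
    from sympy import Rational, Matrix

    nodes = [Rational(0), Rational(1, 2), Rational(1)]
    n = len(nodes) - 1
    V = Matrix([[xi**k for xi in nodes] for k in range(n + 1)])   # Vandermonde (moment) system
    mu = Matrix([Rational(1, k + 1) for k in range(n + 1)])       # moments of x^k over [0, 1]
    print(V.solve(mu))                                            # [1/6, 2/3, 1/6]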
The principal utility of the moment method is construction of quadrature formulas for singular integrands. For example, in an integral such as
\[
\int_0^1 x^{-1/2} f(x)\, \mathrm{d} x,
\]
the factor $x^{-1/2}$ is an integrable singularity, and accurate quadrature rules can be constructed by the method of moments for the weight function $w(x) = x^{-1/2}$ with $f$ continuous on $[0, 1]$.
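The same computation handles such a singular weight; the sketch below assumes the illustrative weight $w(x) = x^{-1/2}$ on $[0, 1]$ and the hypothetical nodes $0, 1/2, 1$, with the exact moments $\int_0^1 x^{k - 1/2}\, \mathrm{d} x = 2 / (2 k + 1)$ absorbing the singularity into the weights.

    # Sketch: moment-method weights for the illustrative singular weight w(x) = x^(-1/2)
    # on [0, 1]; the singularity enters only through the exact moments.
    from sympy import Rational, Matrix

    nodes = [Rational(0), Rational(1, 2), Rational(1)]
    n = len(nodes) - 1
    V = Matrix([[xi**k for xi in nodes] for k in range(n + 1)])
    mu = Matrix([Rational(2, 2 * k + 1) for k in range(n + 1)])   # exact moments of x^(k - 1/2)
    w = V.solve(mu)                                               # [4/5, 16/15, 2/15]
    f = lambda t: t**2                                            # test integrand, exact value 2/5
    print(sum(wi * f(xi) for wi, xi in zip(w, nodes)))            # -> 2/5 (exact for degree <= 2)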
2. Gauss quadrature
Recall that the method of moments approach to numerical integration based upon sampling points $x_0, x_1, \ldots, x_n$,
\[
\int_a^b w(x) f(x)\, \mathrm{d} x \approx \sum_{i=0}^n w_i f(x_i),
\]
imposes exact results for a finite number of members of a basis set $\{ b_0(x), b_1(x), b_2(x), \ldots \}$. The trapezoid and Simpson formulas arise from the monomial basis set, in which case $b_k(x) = x^k$, but any basis set can be chosen. Instead of prescribing the sampling points a priori, which typically limits the attainable accuracy, the sampling points themselves can be chosen to minimize the error. For the monomial basis this leads to a system of $2 n + 2$ equations,
\[
\sum_{i=0}^n w_i x_i^k = \int_a^b w(x) x^k\, \mathrm{d} x, \quad k = 0, 1, \ldots, 2 n + 1,
\]
for the $n + 1$ unknown quadrature weights $w_0, \ldots, w_n$ and the $n + 1$ sampling points $x_0, \ldots, x_n$.
The system is nonlinear, but can be solved in an insightful manner by exploiting the properties of orthogonal polynomials, an approach known as Gauss quadrature.
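To make the nonlinear system concrete, the sketch below (assuming $n = 1$, $w(x) = 1$ on $[-1, 1]$, and SciPy's general-purpose fsolve) solves the four moment equations directly and recovers the two-point Gauss nodes $\pm 1 / \sqrt{3}$ with unit weights; Gauss quadrature obtains the same result without a nonlinear solve.

    # Sketch: direct solution of the nonlinear moment system for n = 1 (two nodes),
    # w(x) = 1 on [-1, 1]; illustrative only, Gauss quadrature avoids this nonlinear solve.
    from scipy.optimize import fsolve

    def moment_equations(u):
        w0, w1, x0, x1 = u
        # exact moments of x^k over [-1, 1] for k = 0, 1, 2, 3: 2, 0, 2/3, 0
        return [w0 + w1 - 2.0,
                w0 * x0 + w1 * x1,
                w0 * x0**2 + w1 * x1**2 - 2.0 / 3.0,
                w0 * x0**3 + w1 * x1**3]

    print(fsolve(moment_equations, [1.0, 1.0, -0.5, 0.5]))   # ~[1, 1, -0.5774, 0.5774]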
The basic idea is to consider a Hilbert function space with the scalar product
\[
(f, g) = \int_a^b w(x) f(x) g(x)\, \mathrm{d} x,
\]
and orthonormal basis set $\{ q_0, q_1, q_2, \ldots \}$, $(q_j, q_k) = \delta_{jk}$. Assume that $q_k$ are polynomials of degree $k$. A polynomial $p_{2n+1}$ of degree $2 n + 1$ can be factored as
\[
p_{2n+1}(x) = q_{n+1}(x)\, c_n(x) + r_n(x), \qquad (2)
\]
where $c_n$ is the quotient polynomial of degree $n$, and $r_n$ is the remainder polynomial of degree $n$. The weighted integral of $p_{2n+1}$ is therefore
\[
\int_a^b w(x) p_{2n+1}(x)\, \mathrm{d} x = \int_a^b w(x) q_{n+1}(x) c_n(x)\, \mathrm{d} x + \int_a^b w(x) r_n(x)\, \mathrm{d} x .
\]
Since $\{ q_k \}$ is an orthonormal set and $c_n$ is of degree $n < n + 1$, $(q_{n+1}, c_n) = 0$, and the integral becomes
\[
\int_a^b w(x) p_{2n+1}(x)\, \mathrm{d} x = \int_a^b w(x) r_n(x)\, \mathrm{d} x .
\]
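The cancellation of the quotient term can be checked symbolically; the sketch below assumes the Legendre case $w(x) = 1$ on $[-1, 1]$ with $n = 2$ and an arbitrary illustrative degree-5 polynomial.

    # Sketch: verify that the quotient term in factorization (2) integrates to zero
    # (Legendre case w(x) = 1 on [-1, 1], n = 2); the degree-5 polynomial is arbitrary.
    from sympy import symbols, legendre, div, integrate

    x = symbols('x')
    p5 = 3*x**5 - 2*x**4 + x**3 + 7*x**2 - x + 4    # stands in for p_{2n+1}
    q3 = legendre(3, x)                             # stands in for q_{n+1}
    c2, r2 = div(p5, q3, x)                         # p5 = q3*c2 + r2, deg r2 <= 2
    print(integrate(q3 * c2, (x, -1, 1)))           # -> 0 by orthogonality
    print(integrate(p5, (x, -1, 1)) - integrate(r2, (x, -1, 1)))   # -> 0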
The integral of the remainder polynomial can be exactly evaluated through an $(n + 1)$-point quadrature
\[
\int_a^b w(x) r_n(x)\, \mathrm{d} x = \sum_{i=0}^n w_i r_n(x_i),
\]
that, however, evaluates $r_n$ rather than the original integrand $p_{2n+1}$. Evaluation of the factorization (2) at the roots of $q_{n+1}$,
\[
q_{n+1}(x_i) = 0, \quad i = 0, 1, \ldots, n,
\]
gives
\[
p_{2n+1}(x_i) = q_{n+1}(x_i)\, c_n(x_i) + r_n(x_i) = r_n(x_i),
\]
stating that the values of the remainder at these nodes are the same as those of the degree $2 n + 1$ polynomial. This implies that
\[
\int_a^b w(x) p_{2n+1}(x)\, \mathrm{d} x = \sum_{i=0}^n w_i p_{2n+1}(x_i)
\]
is an exact quadrature of order $2 n + 1$, using only the $n + 1$ nodes $x_0, \ldots, x_n$ that are the roots of $q_{n+1}$. The weights $w_i$ can be determined through any of the previously outlined methods, e.g., the method of moments
\[
\sum_{i=0}^n w_i x_i^k = \int_a^b w(x) x^k\, \mathrm{d} x, \quad k = 0, 1, \ldots, n,
\]
which is now a linear system that can be readily solved. Alternatively, the weights are also directly given as integrals of the Lagrange polynomials based upon the nodes that are roots of $q_{n+1}$,
\[
w_i = \int_a^b w(x) \ell_i(x)\, \mathrm{d} x, \quad \ell_i(x) = \prod_{j = 0,\, j \neq i}^n \frac{x - x_j}{x_i - x_j} .
\]
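As a closing check, the sketch below assumes the Legendre case $w(x) = 1$ on $[-1, 1]$ and uses NumPy's leggauss routine (which returns the roots of $q_{n+1}$ and the corresponding weights) to confirm exact integration of all monomials up to degree $2 n + 1$.

    # Sketch: (n + 1)-point Gauss-Legendre quadrature integrates monomials exactly
    # up to degree 2n + 1 (w(x) = 1 on [-1, 1]).
    import numpy as np

    n = 4                                           # n + 1 = 5 nodes
    x, w = np.polynomial.legendre.leggauss(n + 1)   # roots of q_{n+1} and weights
    for k in range(2 * n + 2):                      # degrees 0, ..., 2n + 1
        exact = 2.0 / (k + 1) if k % 2 == 0 else 0.0   # exact integral of x^k over [-1, 1]
        print(k, abs(np.dot(w, x**k) - exact))      # ~1e-16 for every k <= 2n + 1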