Interpolation of data $\mathcal{D} = \{ (x_i, y_i), i = 0, \ldots, n \}$ by an approximant $p$ corresponds to the minimization problem
\[
\min_p \| \boldsymbol{y} - p(\boldsymbol{x}) \|_1 = \min_p \sum_{i=0}^{n} | y_i - p(x_i) |
\]
in the discrete one-norm at the sample points, the minimal value of zero being attained by the interpolant.
Different approximants are obtained upon changing the norm.
The argument underlying the above theorem is based upon constructing the closed and bounded subset of the approximation subspace $M$,
\[
K = \{ p \in M : \| f - p \| \leq \| f \| \} .
\]
Since $M$ is finite dimensional, $K$ is compact, and the continuous mapping $p \mapsto \| f - p \|$ attains its extrema on $K$.
The two main classes of approximants of real functions that arise are:

1. Discrete approximants. The vectors $\boldsymbol{y} = [ f(x_0) \ \ldots \ f(x_n) ]^T$, $\boldsymbol{p} = [ p(x_0) \ \ldots \ p(x_n) ]^T$ are constructed at sample points and the best approximant solves the problem
\[
\min_{p \in M} \| \boldsymbol{y} - \boldsymbol{p} \| .
\]
Note that the minimization is carried out over the members of the subset $M$, not over the vectors $\boldsymbol{p}$ (a least-squares sketch of this case follows the list). The norm can include information on derivatives as in the norm
\[
\| \boldsymbol{y} - \boldsymbol{p} \| = \left( \sum_{i=0}^{n} [ y_i - p(x_i) ]^2 + [ y_i' - p'(x_i) ]^2 \right)^{1/2}
\]
arising in Hermite interpolation.

2. Continuous approximants. The norm is now expressed through an integral such as the $q$-norms
\[
\| f - p \|_q = \left( \int_a^b | f(x) - p(x) |^q \, \mathrm{d} x \right)^{1/q} .
\]
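As an illustration of the discrete case, the following minimal sketch computes the best two-norm approximant from a monomial basis at given sample points by least squares. It assumes NumPy is available; the sampled function $e^x$, the sample points, and the basis dimension are illustrative choices, not prescribed by the notes.

```python
import numpy as np

# Sample points and data (illustrative choices)
x = np.linspace(0.0, 1.0, 21)
y = np.exp(x)

# Monomial basis evaluated at the sample points: columns x_i^k, k = 0,...,m-1
m = 4
B = np.vander(x, m, increasing=True)

# Best two-norm approximant: minimize ||y - B a||_2 over the coefficients a
a, *_ = np.linalg.lstsq(B, y, rcond=None)
print("coefficients:", a)
print("two-norm of residual at sample points:", np.linalg.norm(y - B @ a))
```

The minimization is over the coefficients defining members of $M$, in agreement with the remark above.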
In general, the best approximant in a normed space is not unique. However, the best approximant is unique in a Hilbert space, and is further characterized by orthogonality of the residual to the approximation subspace.
Denoting by $p^* \in M$ the best approximant of $f \in H$, the residual $r = f - p^*$ is orthogonal to $M$,
\[
(f - p^*, q) = 0, \quad \forall q \in M .
\]
Note that orthogonality of the residual implies
\[
(p^*, q) = (f, q), \quad \forall q \in M,
\]
or that the best approximant $p^*$ is the projection of $f$ onto $M$.
For Hilbert spaces, with a norm induced by the scalar product,
\[
\| f \| = (f, f)^{1/2},
\]
finding the best approximant reduces to a problem within $\mathbb{R}^m$ (or $\mathbb{C}^m$). Introduce a basis $\{ b_1, b_2, \ldots \}$ for $H$ such that any $f \in H$ has an expansion
\[
f = \sum_{k=1}^{\infty} c_k b_k .
\]
Since $M$ is finite dimensional, say $\dim M = m$, an approximant $p \in M$ has expansion
\[
p = \sum_{k=1}^{m} a_k b_{\sigma(k)} .
\]
Note that the approximation may lie in an arbitrary finite-dimensional subspace of $H$. Choosing the appropriate subset of basis functions through the selection function $\sigma : \{ 1, \ldots, m \} \rightarrow \mathbb{N}$ is an interesting problem in itself, leading to the goal of selecting those basis functions that capture the largest components of $f$, i.e., the solution of
\[
\max_{\sigma} \sum_{k=1}^{m} ( f, b_{\sigma(k)} )^2 .
\]
Approximate solutions of the basis component selection problem are obtained by processes such as greedy approximation or clustering algorithms. The approach typically adopted is to exploit the Bessel inequality (stated here for an orthonormal basis)
\[
\sum_{k=1}^{\infty} (f, b_k)^2 \leq \| f \|^2,
\]
and select
\[
\sigma(1) = \arg \max_k | (f, b_k) |,
\]
eliminate $b_{\sigma(1)}$ from the search set, and search again. The $j$-step is
\[
\sigma(j) = \arg \max_{k \notin \{ \sigma(1), \ldots, \sigma(j-1) \}} | (f_{j-1}, b_k) |,
\]
with $f_j = f_{j-1} - (f_{j-1}, b_{\sigma(j)}) \, b_{\sigma(j)}$, $f_0 = f$.
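A minimal sketch of this greedy selection, assuming NumPy and an orthonormal basis stored as the columns of a matrix; the discrete cosine basis and the sampled function are illustrative assumptions, not part of the notes.

```python
import numpy as np

def greedy_select(f, B, m):
    """Select m columns of the orthonormal basis B capturing the
    largest components of f, removing each captured component."""
    r = f.copy()                       # current residual, f_0 = f
    chosen = []
    for _ in range(m):
        c = B.T @ r                    # components (f_j, b_k)
        c[chosen] = 0.0                # eliminate already-selected indices
        k = int(np.argmax(np.abs(c)))  # largest remaining component
        chosen.append(k)
        r = r - c[k] * B[:, k]         # f_{j+1} = f_j - (f_j, b_k) b_k
    return chosen, f - r               # selected indices and the approximant

# Example: orthonormal DCT-II basis on n points (illustrative choice)
n = 64
j = np.arange(n)
B = np.cos(np.pi * np.outer(j + 0.5, j) / n) * np.sqrt(2.0 / n)
B[:, 0] /= np.sqrt(2.0)
f = np.exp(np.linspace(0.0, 1.0, n))
idx, p = greedy_select(f, B, 5)
print(idx, np.linalg.norm(f - p))
```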
Assuming $p = \sum_{k=1}^{m} a_k b_k$ (relabeling the selected basis functions as $b_1, \ldots, b_m$), the orthogonality relation leads to a linear system
\[
\sum_{k=1}^{m} (b_k, b_l) \, a_k = (f, b_l), \quad l = 1, \ldots, m .
\]
If the basis is orthonormal, then $(b_k, b_l) = \delta_{kl}$, and the best approximant is simply given by the projection of $f$ onto the basis elements, $a_k = (f, b_k)$. Note that the scalar product need not be the Euclidean discrete or continuous versions
\[
(\boldsymbol{u}, \boldsymbol{v}) = \sum_i u_i v_i, \quad (f, g) = \int_a^b f(x) \, g(x) \, \mathrm{d} x .
\]
A weighting function may be present as in
\[
(\boldsymbol{u}, \boldsymbol{v}) = \sum_i w_i u_i v_i, \quad (f, g) = \int_a^b w(x) \, f(x) \, g(x) \, \mathrm{d} x,
\]
discrete and continuous versions, respectively. In essence, the measure $\mathrm{d} \mu(x) = w(x) \, \mathrm{d} x$ appropriate for some specific problem arises and might not be the Euclidean measure $\mathrm{d} x$.
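The Gram system above can be assembled and solved directly under a weighted discrete scalar product. In the following minimal sketch (assuming NumPy) the Chebyshev-distributed points, the weight $w(x) = (1 - x^2)^{-1/2}$, and the monomial basis are illustrative choices.

```python
import numpy as np

# Weighted discrete scalar product (u, v) = sum_i w_i u_i v_i
n = 41
x = np.cos(np.pi * (np.arange(n) + 0.5) / n)  # interior points of (-1, 1)
w = 1.0 / np.sqrt(1.0 - x**2)                 # Chebyshev weight
f = np.abs(x)

# Non-orthogonal basis: monomials 1, x, ..., x^{m-1} at the sample points
m = 5
B = np.vander(x, m, increasing=True)

# Gram system G a = c with G_lk = (b_k, b_l), c_l = (f, b_l)
G = B.T @ (w[:, None] * B)
c = B.T @ (w * f)
a = np.linalg.solve(G, c)
print("coefficients:", a)
```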
In the vector space of continuous functions $C(X)$ defined on a topological space $X$ (e.g., a closed and bounded set in $\mathbb{R}$), a norm can be defined by
\[
\| f \|_{\infty} = \max_{x \in X} | f(x) |,
\]
and the best approximant is found by solving the min-max problem
\[
\min_{p \in M} \max_{x \in X} | f(x) - p(x) | .
\]
The fact that $p^*$ is the best approximant of $f$ can be restated as $0$ being the best approximant of the residual $r = f - p^*$ since
\[
\min_{q \in M} \| r - q \|_{\infty} = \min_{p \in M} \| f - p \|_{\infty} = \| f - p^* \|_{\infty} = \| r - 0 \|_{\infty} .
\]
A key role is played by the points where $| r(x) | = \| r \|_{\infty}$, leading to the definition of a critical set as
\[
\mathcal{C} = \{ x \in X : | f(x) - p^*(x) | = \| f - p^* \|_{\infty} \} .
\]
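For example, for $M = \mathcal{P}_0$, the space of constants, the best inf-norm approximant of $f \in C[a, b]$ is $p^* = \frac{1}{2} \left( \max_x f + \min_x f \right)$, and the critical set contains the points where $f$ attains its maximum and minimum.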
When $M = \mathcal{P}_n$, the space of polynomials of degree at most $n$, with $f \in C[a, b]$, the best approximant can be characterized by the number of sign changes of the residual $r = f - p^*$.
Theorem (Chebyshev alternation). $p^* \in \mathcal{P}_n$ is the best inf-norm approximant of $f \in C[a, b]$ if and only if there exist $n + 2$ points $a \leq x_0 < x_1 < \cdots < x_{n+1} \leq b$ such that
\[
r(x_{i+1}) = - r(x_i), \quad | r(x_i) | = \| r \|_{\infty}, \quad i = 0, \ldots, n,
\]
where $r = f - p^*$.
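As a simple instance, the best approximant of $f(x) = x^2$ on $[0, 1]$ from $\mathcal{P}_1$ is $p^*(x) = x - \frac{1}{8}$: the residual $r(x) = x^2 - x + \frac{1}{8}$ takes the values $\frac{1}{8}, -\frac{1}{8}, \frac{1}{8}$ at the $n + 2 = 3$ points $x = 0, \frac{1}{2}, 1$, exhibiting the required alternation with $\| r \|_{\infty} = \frac{1}{8}$.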
Recall that choosing the sample points $x_i = \cos \left( \frac{(2 i + 1) \pi}{2 (n + 1)} \right)$, $i = 0, \ldots, n$, the roots of the Chebyshev polynomial $T_{n+1}(x) = \cos [ (n + 1) \arccos x ]$ on $x \in [-1, 1]$, leads to the optimal error bound in polynomial interpolation
\[
\| f - p_n \|_{\infty} \leq \frac{1}{2^n \, (n+1)!} \, \| f^{(n+1)} \|_{\infty} .
\]
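The key fact behind this bound is that the monic node polynomial $\prod_{i=0}^{n} (x - x_i) = 2^{-n} T_{n+1}(x)$ has inf-norm $2^{-n}$ on $[-1, 1]$, which the following check confirms numerically (a minimal sketch assuming NumPy; the degree is an arbitrary choice).

```python
import numpy as np

n = 7                                              # interpolant degree (arbitrary)
i = np.arange(n + 1)
xi = np.cos((2 * i + 1) * np.pi / (2 * (n + 1)))   # roots of T_{n+1}

# Inf-norm of the monic node polynomial prod_i (x - x_i) on a fine grid
x = np.linspace(-1.0, 1.0, 100001)
node_poly = np.prod(x[:, None] - xi[None, :], axis=1)
print(np.max(np.abs(node_poly)), 2.0 ** (-n))      # both approximately 2^{-n}
```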
The error bound came about from consideration of the alternation of signs of $T_{n+1}$ at the extrema of the Chebyshev polynomial, $\xi_j = \cos [ j \pi / (n + 1) ]$, $j = 0, \ldots, n+1$, with $2^{-n} T_{n+1}$ of least inf-norm among monic polynomials. The Chebyshev alternation theorem generalizes this observation and allows the formulation of a general approach to finding the best inf-norm approximant known as the Remez algorithm. The idea is that rather than seeking to satisfy the interpolation conditions
\[
p(x_i) = f(x_i), \quad i = 0, \ldots, n,
\]
in the monomial basis,
\[
\sum_{j=0}^{n} a_j x_i^j = f(x_i), \quad i = 0, \ldots, n,
\]
attempt to find alternating-sign extrema points by considering the basis set
\[
\{ 1, x, x^2, \ldots, x^n, s(x) \},
\]
with $s(x_i) = (-1)^i$.
Algorithm (Remez)

Initialize $x_0, \ldots, x_{n+1}$ to the Chebyshev extrema on the interval, $x_i = \cos [ i \pi / (n + 1) ]$
Solve $\sum_{j=0}^{n} a_j x_i^j + (-1)^i E = f(x_i)$, $i = 0, \ldots, n+1$, for $a_0, \ldots, a_n, E$
Find the extrema $z_0, \ldots, z_{n+1}$ of $r(x) = f(x) - p(x)$, with $p(x) = \sum_{j=0}^{n} a_j x^j$
If $r(z_0), \ldots, r(z_{n+1})$ are approximately equal in absolute value and of opposite signs, return $p$
Otherwise set $x_i = z_i$, $i = 0, \ldots, n+1$, repeat
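A minimal sketch of the algorithm, assuming NumPy; the dense-grid extrema search, the convergence test, and the sample application to $f(x) = e^x$ are illustrative simplifications rather than a robust implementation.

```python
import numpy as np

def remez(f, n, a=-1.0, b=1.0, tol=1e-10, maxit=50):
    """Best inf-norm polynomial approximant of degree n (illustrative sketch)."""
    # Initialize x_0,...,x_{n+1} to the Chebyshev extrema mapped to [a, b]
    k = np.arange(n + 2)
    x = np.sort(0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * k / (n + 1)))
    z = np.linspace(a, b, 10001)               # dense grid for extrema search
    for _ in range(maxit):
        # Solve sum_j a_j x_i^j + (-1)^i E = f(x_i), i = 0,...,n+1
        A = np.hstack([np.vander(x, n + 1, increasing=True),
                       ((-1.0) ** k)[:, None]])
        sol = np.linalg.solve(A, f(x))
        coef, E = sol[:-1], sol[-1]
        r = f(z) - np.polyval(coef[::-1], z)   # residual on the dense grid
        # Candidate extrema: endpoints plus interior sign changes of r'
        dr = np.diff(r)
        idx = np.where(dr[:-1] * dr[1:] < 0)[0] + 1
        cand = np.concatenate(([0], idx, [len(z) - 1]))
        if len(cand) > n + 2:                  # keep the n+2 largest |r|
            cand = np.sort(cand[np.argsort(-np.abs(r[cand]))[:n + 2]])
        # Stop when the extremal residuals have levelled to the common value |E|
        if np.max(np.abs(r[cand])) - abs(E) <= tol * max(1.0, abs(E)):
            return coef, E
        x = z[cand]                            # exchange the reference points
    return coef, E

coef, E = remez(np.exp, 5)                     # e^x on [-1, 1], degree 5
print("coefficients:", coef)
print("estimated minimax error:", abs(E))
```

Each pass levels the residual at the reference points to the common value $E$, and exchanging the reference with the new extrema drives $|E|$ up toward the minimax error.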