In general, mathematical problems can be thought of as mappings from some set of inputs $X$ to some set of outputs $Y$. The mapping is often carried out through a function $f: X \to Y$, i.e., a procedure that associates a single output $y = f(x) \in Y$ to some input $x \in X$.
Examples:
Compute the square of a real number, $f: \mathbb{R} \to \mathbb{R}$, $f(x) = x^2$:
∴ f(x)=x^2
f
∴ f(2)
4
Find the solution $x$ of the linear equation $ax + b = c$ for given $a, b, c \in \mathbb{R}$, $a \neq 0$. The inputs to this problem are the coefficients $(a, b, c)$ and the output is the solution $x = (c - b)/a$.
∴ function f(a,b,c) if (a!=0) return (c-b)/a end end
f
∴ f(1,2,3)
1.0
Compute the inner product of two vectors $\boldsymbol{u}, \boldsymbol{v} \in \mathbb{R}^n$:
$$ f(\boldsymbol{u}, \boldsymbol{v}) = \boldsymbol{u} \cdot \boldsymbol{v} = \sum_{i=1}^{n} u_i v_i, $$
with $u_i, v_i$ the components of $\boldsymbol{u}, \boldsymbol{v}$. Note that the input set is the Cartesian product $\mathbb{R}^n \times \mathbb{R}^n$ of sets of vectors and the output set is the reals $\mathbb{R}$. Such functions defined from sets of vectors (more accurately vector spaces) to reals (more accurately scalars) are called functionals.
∴ function f(u,v) sum(u.*v) end
f
∴ f([1 2 3],[1 2 3])
14
Compute the definite integral
$$ (f, g) = \int_a^b f(x)\, g(x)\, \mathrm{d}x, $$
with $f, g$ arbitrary continuous functions, denoted by $f, g \in C[a, b]$. Again, this is an example of a functional.
The functional is now defined for function arguments, and evaluation can be carried out either symbolically or numerically. Symbolic evaluation in the public-domain Maxima system for $f, g \in \{1, \cos x, \cos 2x, \cos 3x, \sin x, \sin 2x, \sin 3x\}$ over the interval $[-\pi, \pi]$ demonstrates orthogonality of the trigonometric functions.
(%i1) scprod(f,g,x,a,b):=integrate(f*g,x,a,b)
(%i5) scprod(sin(x),sin(x),x,-%pi,%pi)
(%o5) %pi
(%i30) s:makelist(sin(k*x),k,1,3)
(%i31) c:makelist(cos(k*x),k,0,3)
(%i32) sc:join(c,s)
(%i28) trigsp(f,g):=integrate(f*g,x,-%pi,%pi)
(%i34) tsp:outermap(trigsp,sc,sc)
Compute the derivative of a function $f \in C^k(\mathbb{R})$, with $C^k(\mathbb{R})$ the space of functions defined on $\mathbb{R}$ that are differentiable $k$ times: $D: C^k(\mathbb{R}) \to C^{k-1}(\mathbb{R})$, $D f = f'$, $f'(x) = \lim_{h \to 0} [f(x+h) - f(x)]/h$. Note that in this case the input and output sets are sets of functions, in which case the mapping $D$ is referred to as an operator.
As above, the differentiation operator can be evaluated symbolically, and is predefined in all symbolic computation packages, including Maxima.
(%i43) diff(sin(x),x)
(%o43) cos(x)
(%i44) diff(sin(cos(x))+cos(sin(x)),x)
(%o44) (-cos(x)*sin(sin(x)))-sin(x)*cos(cos(x))
Find the roots of a polynomial $p_n(x) = a_0 + a_1 x + \dots + a_n x^n$. The input is the polynomial, specified by the vector of coefficients $\boldsymbol{a} = [a_0\ a_1\ \dots\ a_n]$. The output is another vector $\boldsymbol{r} = [r_1\ r_2\ \dots\ r_n]$ whose components are the roots, $p_n(r_i) = 0$, $i = 1, \dots, n$.
The root-finding function $\boldsymbol{r} = f(\boldsymbol{a})$ cannot be written explicitly for general polynomials of degree $n \geq 5$ (a corollary of the Abel-Ruffini theorem), but approximations of the root-finding function can be implemented.
Most software systems include facilities for working with polynomials, including root finding.
∴ import Pkg; Pkg.add("Polynomials");
∴ using Polynomials
∴ p=Polynomial([-6,11,-6,1])
-6 + 11*x - 6*x^2 + x^3
∴ roots(p)
The computed roots approximate the exact values $1, 2, 3$, as expected from the factorization $p(x) = (x-1)(x-2)(x-3)$.
Note that the specification of a mathematical problem requires definition of the triplet $(X, Y, f)$: the input set, the output set, and the mapping between them.
Once a problem is specified, the natural question is to ascertain whether a solution is possible. Generally, affirmation of the existence of a solution is the objective of some field of mathematics (e.g., analysis, functional analysis). From the point of view of science, the essential question is not only existence but also:
how does the output $y = f(x)$ change if the input $x$ changes?
what are the constructive methods to approximate the solution $y = f(x)$?
The above general definition of a mathematical problem must be refined in order to assess the magnitude of changes in inputs or outputs. A first step is to introduce some structure in the input and output sets $X, Y$. Using these sets, vector spaces are constructed, each consisting of a set of vectors $V$, a set of scalars $S$, an addition operation $+$, and a scaling operation $\cdot$. The vector space is often referred to simply by its set of vectors $V$, when the set of scalars, addition operation, and scaling operation are self-evident in context.
Formally, a vector space $\mathcal{V}$ is defined by a set $V$ whose elements satisfy certain scaling and addition properties, denoted all together by the 4-tuple $\mathcal{V} = (V, S, +, \cdot)$. The first element $V$ of the 4-tuple is a set whose elements are called vectors. The second element $S$ is a set of scalars, and the third $+$ is the vector addition operation. The last $\cdot$ is the scaling operation, seen as multiplication of a vector by a scalar. The vector addition and scaling operations must satisfy rules suggested by positions or forces in three-dimensional space, which are listed in Table 1. In particular, a vector space requires definition of two distinguished elements: the zero vector $\boldsymbol{0} \in V$, and the identity scalar element $1 \in S$.
Table 1. Vector space properties for $\mathcal{V} = (V, S, +, \cdot)$.

Addition rules, for all $\boldsymbol{a}, \boldsymbol{b}, \boldsymbol{c} \in V$:
  Closure:           $\boldsymbol{a} + \boldsymbol{b} \in V$
  Associativity:     $\boldsymbol{a} + (\boldsymbol{b} + \boldsymbol{c}) = (\boldsymbol{a} + \boldsymbol{b}) + \boldsymbol{c}$
  Commutativity:     $\boldsymbol{a} + \boldsymbol{b} = \boldsymbol{b} + \boldsymbol{a}$
  Zero vector:       $\boldsymbol{a} + \boldsymbol{0} = \boldsymbol{a}$
  Additive inverse:  $\boldsymbol{a} + (-\boldsymbol{a}) = \boldsymbol{0}$

Scaling rules, for all $\boldsymbol{a}, \boldsymbol{b} \in V$ and $x, y \in S$:
  Closure:           $x \boldsymbol{a} \in V$
  Distributivity:    $x (\boldsymbol{a} + \boldsymbol{b}) = x \boldsymbol{a} + x \boldsymbol{b}$
  Distributivity:    $(x + y) \boldsymbol{a} = x \boldsymbol{a} + y \boldsymbol{a}$
  Composition:       $x (y \boldsymbol{a}) = (x y) \boldsymbol{a}$
  Scalar identity:   $1 \boldsymbol{a} = \boldsymbol{a}$
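The rules of Table 1 can be spot-checked numerically. The following brief Julia sketch verifies several of the rules for vectors in $\mathbb{R}^3$ with real scalars; the specific vectors and scalars are arbitrary choices for illustration, not part of the definition.

a = [1.0, 2.0, 3.0]; b = [4.0, 5.0, 6.0]; c = [7.0, 8.0, 9.0]   # arbitrary vectors in R^3
x = 2.0; y = -3.0                                               # arbitrary scalars
a + b == b + a                  # commutativity of vector addition
(a + b) + c == a + (b + c)      # associativity of vector addition
x * (a + b) == x * a + x * b    # distributivity of scaling over vector addition
(x + y) * a == x * a + y * a    # distributivity over scalar addition
x * (y * a) == (x * y) * a      # composition of scalings
1 * a == a                      # scalar identity

For these particular values all operations are exact in floating-point arithmetic, so each comparison evaluates to true; in general, floating-point checks of such identities should allow a small tolerance.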
A first step is quantification of the changes in input or output, each assumed to have the structure of a vector space, through a norm: a function $\|\cdot\|: V \to [0, \infty)$ satisfying $\|x\| \geq 0$, with $\|x\| = 0$ if and only if $x = 0$; $\|\alpha x\| = |\alpha|\, \|x\|$ for any scalar $\alpha$; and $\|x + y\| \leq \|x\| + \|y\|$ (the triangle inequality).
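As a brief illustration, the Julia standard library LinearAlgebra provides the function norm, which evaluates the commonly used p-norms on $\mathbb{R}^n$ (the vector below is an arbitrary example).

using LinearAlgebra           # provides norm()
x = [3.0, 4.0]                # arbitrary example vector
norm(x)                       # Euclidean (2-)norm: sqrt(3^2 + 4^2) = 5.0
norm(x, 1)                    # 1-norm: |3| + |4| = 7.0
norm(x, Inf)                  # infinity norm: max(|3|, |4|) = 4.0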
The ratio of the norm of the change in output to the norm of the change in input, in the limit of small input changes, is the absolute condition number of a problem,
$$ \hat{\kappa} = \lim_{\delta \to 0} \sup_{\|\delta x\| \leq \delta} \frac{\|f(x + \delta x) - f(x)\|}{\|\delta x\|}. $$
To avoid the influence of the choice of reference unit, the relative condition number is also introduced,
$$ \kappa = \lim_{\delta \to 0} \sup_{\|\delta x\| \leq \delta} \frac{\|f(x + \delta x) - f(x)\| / \|f(x)\|}{\|\delta x\| / \|x\|}. $$
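The limits above can be approximated numerically by a single small finite perturbation. The following Julia sketch does so for the squaring problem $f(x) = x^2$ at $x = 3$; the function, evaluation point, and perturbation size are illustrative choices, not prescribed values.

f(x) = x^2                                    # example problem: squaring a real number
x  = 3.0                                      # point at which conditioning is examined
dx = 1.0e-6                                   # small input perturbation
kappa_abs = abs(f(x + dx) - f(x)) / abs(dx)   # estimates the absolute condition number, |f'(x)| = 6
kappa_rel = kappa_abs * abs(x) / abs(f(x))    # estimates the relative condition number, |x f'(x)/f(x)| = 2

For differentiable scalar problems these estimates approach $\hat{\kappa} = |f'(x)|$ and $\kappa = |x f'(x) / f(x)|$ as the perturbation is reduced.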
In scientific computation, the mathematical problem $f: X \to Y$ is approximated by an algorithm $\tilde{f}: \tilde{X} \to \tilde{Y}$, in which $\tilde{f}$ is assumed to be computable, and $\tilde{X}, \tilde{Y}$ are vector spaces that approximate $X, Y$. As a first step in characterizing how well the algorithm $\tilde{f}$ approximates the problem $f$, consider that $\tilde{X} = X$ and $\tilde{Y} = Y$, i.e., there is no error in the representation of the domain and codomain. The accuracy of the algorithm is then assessed through the absolute error $\|\tilde{f}(x) - f(x)\|$ and the relative error $\|\tilde{f}(x) - f(x)\| / \|f(x)\|$, and an accurate algorithm produces a relative error of the order of the machine epsilon. The above condition is also denoted as
$$ \frac{\|\tilde{f}(x) - f(x)\|}{\|f(x)\|} = O(\varepsilon_{\mathrm{mach}}). $$
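A minimal Julia sketch of computing these errors follows; treating single-precision evaluation as the computable algorithm and double-precision evaluation as the reference problem is an illustrative assumption, as are the chosen function and input.

g(x) = x^2 - 1.0                         # example problem, prone to cancellation near x = 1
x = 1.000001                             # input near the cancellation point
y_ref = g(x)                             # double-precision result, taken as the reference f(x)
y_alg = Float64(Float32(x)^2 - 1.0f0)    # single-precision evaluation, playing the role of the algorithm output
abs_err = abs(y_alg - y_ref)             # absolute error
rel_err = abs_err / abs(y_ref)           # relative error, much larger than the single-precision epsilon

The relative error greatly exceeds the single-precision machine epsilon because the subtraction cancels nearly equal quantities, a preview of the stability discussion that follows.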
Algorithms should not catastrophically amplify input errors. This is quantified in the concept of stability. An algorithm $\tilde{f}$ is stable if an input perturbation of relative size of the order of the machine epsilon,
$$ \frac{\|\tilde{x} - x\|}{\|x\|} = O(\varepsilon_{\mathrm{mach}}), $$
leads to an output with relative error also of the order of the machine epsilon,
$$ \frac{\|\tilde{f}(\tilde{x}) - f(x)\|}{\|f(x)\|} = O(\varepsilon_{\mathrm{mach}}). $$
The above states that the relative error in the output should be of the order of the machine epsilon if the relative error in the input is of the order of the machine epsilon. Note that the constants implied in the two order statements are usually different from one another.
Backward stability asserts that the result of the algorithm on exact input data is the same as the solution of the mathematical problem for nearby data (with distance of the order of the machine epsilon): $\tilde{f}(x) = f(\tilde{x})$ for some $\tilde{x}$ with $\|\tilde{x} - x\| / \|x\| = O(\varepsilon_{\mathrm{mach}})$.
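As an illustrative sketch (this particular example, evaluating $\sqrt{x+1} - \sqrt{x}$ for large $x$, is a standard demonstration chosen here rather than taken from the discussion above), an algebraic rearrangement of the same mathematical problem can replace an inaccurate evaluation by an accurate one:

x = 1.0e12                                             # large argument makes the two square roots nearly equal
direct    = sqrt(x + 1) - sqrt(x)                      # cancellation-prone formulation
rewritten = 1 / (sqrt(x + 1) + sqrt(x))                # algebraically equivalent, cancellation-free formulation
rel_diff  = abs(direct - rewritten) / abs(rewritten)   # many orders of magnitude above eps()

The rewritten form keeps the relative error near the machine epsilon, while the direct difference amplifies the rounding of the two nearly equal square roots.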
Mathematical problems are stated as functions $f: X \to Y$ from a set of inputs $X$ to a set of outputs $Y$.
The difficulty of a mathematical problem is assessed by measuring the effect of changes in the input upon the output.
To quantify changes in inputs and outputs, the framework of a normed vector space is introduced.
The ratio of the norm of the output change to the norm of the input change is the absolute condition number of a problem.
Algorithms are constructive approximations $\tilde{f}$ of mathematical problems $f$. The accuracy of an algorithm is assessed by comparison of the algorithm output to that of the mathematical problem through the absolute error $\|\tilde{f}(x) - f(x)\|$ and the relative error $\|\tilde{f}(x) - f(x)\| / \|f(x)\|$.
The tendency of an algorithm to amplify perturbations of the input is assessed by the concept of stability.
Algorithms that do not amplify relative changes in input of the size of the machine precision are forward stable.
Algorithms that compute the exact result of a mathematical problem for changes in input of the size of the machine precision are backward stable.