A vector space has been introduced as a 4-tuple with specific behavior of the vector addition and scaling operations. Arithmetic operations between scalars were implicitly assumed to be similar to those of the real numbers, but they too must be specified to obtain a complete definition of a vector space. Algebra defines various structures that specify the behavior of operations on objects. Knowledge of these structures is useful not only in linear algebra, but also in other mathematical approaches to data analysis such as topology or geometry.
Groups.
A group is a 2-tuple $(S, +)$ containing a set $S$ and an operation $+$ with properties from Table 2. If $a + b = b + a$ for all $a, b \in S$, the group is said to be commutative. Besides the familiar example of integers under addition $(\mathbb{Z}, +)$, symmetry groups that specify spatial or functional relations are of particular interest. The rotations by $0, \pi/2, \pi, 3\pi/2$ of the vertices of a square form a group.
Closure: $a + b \in S$ for all $a, b \in S$
Associativity: $(a + b) + c = a + (b + c)$
Identity element: $\exists e \in S$ such that $a + e = e + a = a$
Inverse element: $\forall a \in S$, $\exists (-a) \in S$ such that $a + (-a) = e$
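For a finite group the properties from Table 2 can be checked exhaustively. The following is a minimal Python sketch (not part of the original Octave material), assuming the four rotations of a square are encoded as rotation counts composed modulo 4:

```python
# Illustrative check of the group axioms for rotations of a square,
# encoded as rotation counts {0, 1, 2, 3} composed modulo 4.
S = [0, 1, 2, 3]                  # rotations by 0, pi/2, pi, 3*pi/2
op = lambda a, b: (a + b) % 4     # composition of two rotations

closure = all(op(a, b) in S for a in S for b in S)
assoc = all(op(op(a, b), c) == op(a, op(b, c))
            for a in S for b in S for c in S)
identity = all(op(a, 0) == a and op(0, a) == a for a in S)
inverses = all(any(op(a, b) == 0 for b in S) for a in S)
commutative = all(op(a, b) == op(b, a) for a in S for b in S)

print(closure, assoc, identity, inverses, commutative)  # True True True True True
```

All five checks pass, so this rotation group is in fact a commutative group.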
Rings.
A ring is a 3-tuple $(S, +, \times)$ containing a set $S$ and two operations $+, \times$ with properties from Table 2. As is often the case, a ring is a more complex structure built up from simpler algebraic structures. With respect to addition a ring has the properties of a commutative group. Only associativity and existence of an identity element are imposed for multiplication. Square matrix addition and multiplication have the structure of a ring.
Addition: $(S, +)$ is a commutative group
Closure of multiplication: $a \times b \in S$ for all $a, b \in S$
Associativity of multiplication: $(a \times b) \times c = a \times (b \times c)$
Multiplicative identity: $\exists 1 \in S$ such that $a \times 1 = 1 \times a = a$
Distributivity: $a \times (b + c) = a \times b + a \times c$ and $(a + b) \times c = a \times c + b \times c$
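The ring structure of square matrices can be illustrated with a small Python sketch (the helper functions `madd`, `mmul` are ad hoc, written here for illustration): multiplication is associative, has an identity, and distributes over addition, yet fails to commute.

```python
# Illustrative check that 2x2 integer matrices behave as a ring.
def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [1, 1]]
I = [[1, 0], [0, 1]]

print(mmul(mmul(A, B), C) == mmul(A, mmul(B, C)))           # associativity: True
print(mmul(A, I) == A and mmul(I, A) == A)                  # identity: True
print(mmul(A, madd(B, C)) == madd(mmul(A, B), mmul(A, C)))  # distributivity: True
print(mmul(A, B) == mmul(B, A))                             # commutativity: False
```

The last line shows why commutativity of multiplication is not among the ring properties.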
Fields.
A field is a 3-tuple $(S, +, \times)$ containing a set $S$ and two operations $+, \times$, each with the properties of a commutative group, but with special behavior for the null element, which has no multiplicative inverse. The multiplicative inverse of $a \neq 0$ is denoted as $a^{-1}$. Scalars in the definition of a vector space must satisfy the properties of a field. Since the operations are often understood from context, a field might be referred to as the full 3-tuple $(S, +, \times)$, or, more concisely, just through the set of elements $S$, as in the definition of a vector space.
Addition: $(S, +)$ is a commutative group
Multiplication: $(S \setminus \{0\}, \times)$ is a commutative group
Distributivity: $a \times (b + c) = a \times b + a \times c$
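The rationals $(\mathbb{Q}, +, \times)$ are a familiar field; a minimal Python sketch using the standard library `fractions` module (exact rational arithmetic) shows the additive and multiplicative inverses, and the special behavior of the null element:

```python
# Illustrative field properties of the rationals via exact arithmetic.
from fractions import Fraction

a = Fraction(3, 7)
inv = 1 / a                      # multiplicative inverse a**(-1)
print(a * inv == 1)              # True: a * a**(-1) gives the identity
print(a + (-a) == 0)             # True: additive inverse gives the null element

try:
    1 / Fraction(0)              # the null element has no multiplicative inverse
except ZeroDivisionError:
    print("0 has no multiplicative inverse")
```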
Using the above definitions, a vector space can be described as a commutative group $(V, +)$ combined with a field $(S, +, \times)$ that satisfies the scaling properties $a(u + v) = au + av$, $(a + b)u = au + bu$, $a(bu) = (ab)u$, $1u = u$, $0u = 0$, for $a, b \in S$, $u, v \in V$.
A central interest in data science is to seek simple descriptions of complex objects. A typical situation is that many instances of some object of interest are initially given as an $m$-tuple with large $m$. Assuming that addition and scaling of such objects can cogently be defined, a vector space is obtained, say over the field of reals with the Euclidean distance, $\mathbb{R}^m$. Examples include recordings of medical data (electroencephalograms, electrocardiograms), sound recordings, or images, for which $m$ can easily reach into the millions. A natural question to ask is whether all $m$ real numbers are actually needed to describe the observed objects, or whether there is some intrinsic description that requires a much smaller number of descriptive parameters while still preserving the useful idea of linear combination. The mathematical transcription of this idea is a vector subspace.
The above states that a vector subspace must be closed under linear combination, and have the same vector addition and scaling operations as the enclosing vector space. The simplest vector subspace of a vector space is the null subspace that only contains the null element, $U = \{0\}$. In fact any subspace must contain the null element $0$, or otherwise closure would not be verified for the particular linear combination $0u + 0v = 0$. If $U \subset V$ and $U \neq V$, then $U$ is said to be a proper subspace of $V$.
Setting $p$ components equal to zero in the real space $\mathbb{R}^m$ defines a proper subspace whose elements can be placed into a one-to-one correspondence with the vectors within $\mathbb{R}^{m-p}$. For example, setting component $m$ of $x \in \mathbb{R}^m$ equal to zero gives $x = [x_1\ \ldots\ x_{m-1}\ 0]^T$ that, while not a member of $\mathbb{R}^{m-1}$, is in a one-to-one relation with $x' = [x_1\ \ldots\ x_{m-1}]^T \in \mathbb{R}^{m-1}$. Dropping the last component of $y = [y_1\ \ldots\ y_{m-1}\ y_m]^T \in \mathbb{R}^m$ gives the vector $y' = [y_1\ \ldots\ y_{m-1}]^T$, but this is no longer a one-to-one correspondence since, for some given $y'$, the last component $y_m$ could take any value.
octave] m=3; x=[1; 2; 0]; xp=x(1:2); disp(xp)
1
2
octave] y=[1; 2; 3]; yp=y(1:2); disp(yp)
1
2
octave]
Vector subspaces arise in decomposition of a vector space. The converse, composition of vector spaces, is also defined in terms of linear combination. A vector $x = [1\ 2\ 3]^T \in \mathbb{R}^3$ can be obtained as the linear combination
$x = [1\ 0\ 0]^T + [0\ 2\ 3]^T$,
but also as
$x = [0\ \ 1+t\ \ 3]^T + [1\ \ 1-t\ \ 0]^T$
for some arbitrary $t \in \mathbb{R}$. In the first case, $x$ is obtained as a unique linear combination of a vector from the set $U = \{[a\ 0\ 0]^T \mid a \in \mathbb{R}\}$ with a vector from $V = \{[0\ b\ c]^T \mid b, c \in \mathbb{R}\}$. In the second case, there is an infinity of linear combinations of a vector from $V$ with another from $W = \{[d\ e\ 0]^T \mid d, e \in \mathbb{R}\}$ to obtain the vector $x$. This is captured by a pair of definitions to describe vector space composition.
Since the same scalar field, vector addition, and scaling are used, it is more convenient to refer to vector space sums simply by the sum of the vector sets, $U + V$, or $U \oplus V$, instead of specifying the full tuple for each space. This shall be adopted henceforth to simplify the notation.
octave] u=[1; 0; 0]; v=[0; 2; 3]; vp=[0; 1; 3]; w=[1; 1; 0]; disp([u+v vp+w])
1 1
2 2
3 3
octave]
In the previous example, the essential difference between the two ways to express $x$ is that $U \cap V = \{0\}$, but $V \cap W \neq \{0\}$, and in general if the zero vector is the only common element of two vector spaces then the sum of the vector spaces becomes a direct sum, $U \oplus V$. In practice, the most important procedure to construct direct sums, or to check when an intersection of two vector subspaces reduces to the zero vector, is through an inner product.
octave] disp([u'*v vp'*w])
0 1
octave]
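The distinction between the unique and non-unique decompositions can also be sketched in plain Python (the vectors mirror `u`, `v`, `vp`, `w` from the Octave session above; the parameter `t` is introduced here for illustration):

```python
# Decompose x = [1, 2, 3] over the x1-axis U and the x2x3-plane V
# (a direct sum, unique decomposition) versus two planes V and W that
# share a line (infinitely many decompositions, parametrized by t).
x = [1, 2, 3]

# U + V with U and V intersecting only at the zero vector: unique
u, v = [x[0], 0, 0], [0, x[1], x[2]]
print([u[i] + v[i] for i in range(3)] == x)        # True

# V + W where V (x2x3-plane) and W (x1x2-plane) share the x2-axis:
# every choice of t gives a valid decomposition of the same x
for t in (0, 1, -2):
    vp = [0, 1 + t, 3]
    w = [1, 1 - t, 0]
    print([vp[i] + w[i] for i in range(3)] == x)   # True for every t
```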
The above concept of orthogonality can be extended to other vector subspaces, such as spaces of functions. It can also be extended to other choices of an inner product, in which case the term conjugate vector spaces is sometimes used.
The concepts of sum and direct sum of vector spaces used linear combinations of the form $u + v$. This notion can be extended to arbitrary linear combinations.
Note that for real vector spaces a member $b$ of the span of the vectors $a_1, a_2, \ldots, a_n \in \mathbb{R}^m$ is the vector obtained from the matrix-vector multiplication
$b = Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n$, with $A = [a_1\ a_2\ \ldots\ a_n]$.
From the above, the span is a subset of the co-domain of the linear mapping $f(x) = Ax$.
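A minimal sketch of this identity, computing $Ax$ as a weighted sum of the columns of $A$ (pure Python, with an illustrative matrix):

```python
# b = A x is the linear combination x1*a1 + x2*a2 of the columns of A,
# so b lies in the span of the columns.
A = [[1, 0],
     [0, 1],
     [0, 0]]          # columns a1, a2 span the x1x2-plane in R^3
x = [2, 5]

b = [sum(A[i][j] * x[j] for j in range(2)) for i in range(3)]
print(b)              # [2, 5, 0], i.e., 2*a1 + 5*a2
```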
The wide-ranging utility of linear algebra essentially results from a complete characterization of the behavior of a linear mapping $f : U \to V$ between vector spaces $U$, $V$. For some given linear mapping $f$ the questions that arise are:
Can any vector within $V$ be obtained by evaluation of $f$?
Is there a single way that a vector within $V$ can be obtained by evaluation of $f$?
Linear mappings between real vector spaces, $f : \mathbb{R}^n \to \mathbb{R}^m$, have been seen to be completely specified by a matrix $A \in \mathbb{R}^{m \times n}$. It is common to frame the above questions about the behavior of the linear mapping $f(x) = Ax$ through sets associated with the matrix $A$. To frame an answer to the first question, a set of reachable vectors is first defined: the column space $C(A) = \{b \in \mathbb{R}^m \mid \exists x \in \mathbb{R}^n,\ b = Ax\}$.
By definition, the column space is included in the co-domain of the function $f(x) = Ax$, $C(A) \subseteq \mathbb{R}^m$, and is readily seen to be a vector subspace of $\mathbb{R}^m$. The question that arises is whether the column space is the entire co-domain, $C(A) = \mathbb{R}^m$, which would signify that any vector $b \in \mathbb{R}^m$ can be reached by linear combination. If this is not the case, then the column space is a proper subset, $C(A) \subset \mathbb{R}^m$, and the question is to determine what part of the co-domain cannot be reached by linear combination of the columns of $A$. Consider the orthogonal complement of $C(A)$ defined as the set of vectors orthogonal to all of the column vectors of $A$, expressed through inner products as
$a_1^T y = 0,\ a_2^T y = 0,\ \ldots,\ a_n^T y = 0$.
This can be expressed more concisely through the transpose operation
$A^T y = 0$,
and leads to the definition of the left null space, the set of vectors for which
$N(A^T) = \{ y \in \mathbb{R}^m \mid A^T y = 0 \}$.
Note that the left null space is also a vector subspace of the co-domain of $f(x) = Ax$, $N(A^T) \subseteq \mathbb{R}^m$. The above definitions suggest that both the matrix $A$ and its transpose $A^T$ play a role in characterizing the behavior of the linear mapping, so analogous sets are defined for the transpose: the row space $C(A^T)$ and the null space $N(A)$.
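Membership in the left null space can be tested directly from the definition $A^T y = 0$, i.e., by checking that $y$ is orthogonal to every column of $A$. A small Python sketch with an illustrative matrix:

```python
# y is in the left null space N(A^T) exactly when A^T y = 0,
# i.e., when y is orthogonal to every column of A.
A = [[1, 0],
     [0, 1],
     [0, 0]]          # columns span the x1x2-plane in R^3
y = [0, 0, 5]         # a vector along the x3-axis

# components of A^T y: inner products of y with each column of A
ATy = [sum(A[i][j] * y[i] for i in range(3)) for j in range(2)]
print(ATy == [0, 0])  # True: y is in the left null space of A
```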
Examples. Consider a linear mapping between real spaces, $f : \mathbb{R}^n \to \mathbb{R}^m$, defined by $f(x) = Ax$, with $A \in \mathbb{R}^{m \times n}$. For $m = 3$, $n = 1$,
$A = [1\ 0\ 0]^T$,
the column space $C(A)$ is the $x_1$-axis,
and the left null space $N(A^T)$
is the $x_2 x_3$-plane.
Vectors that span these spaces are returned by the Octave orth and null functions.
octave] A=[1; 0; 0]; disp(orth(A)); disp('-----'); disp(null(A'))
-1
-0
-0
-----
0 0
1 0
0 1
octave]
For $m = 3$, $n = 2$, $A = [a_1\ a_2]$ with $a_1 = [1\ 0\ 0]^T$, $a_2 = [-1\ 0\ 0]^T$, the columns of $A$ are colinear, $a_2 = -a_1$, the column space $C(A)$ is still the $x_1$-axis, and the left null space $N(A^T)$ is the $x_2 x_3$-plane, as before.
octave] A=[1 -1; 0 0; 0 0]; disp(orth(A)); disp('-----'); disp(null(A'))
-1.00000
-0.00000
-0.00000
-----
0 0
1 0
0 1
octave]
For $m = 3$, $n = 2$, $A = [a_1\ a_2]$ with $a_1 = [1\ 0\ 0]^T$, $a_2 = [0\ 1\ 0]^T$, the column space $C(A)$ is the $x_1 x_2$-plane, and the left null space $N(A^T)$ is the $x_3$-axis.
octave] A=[1 0; 0 1; 0 0]; disp(orth(A)); disp('-----'); disp(null(A'))
-1 -0
-0 -1
-0 -0
-----
0
0
1
octave]
For $m = 3$, $n = 2$, $A = [a_1\ a_2]$ with $a_1 = [1\ 1\ 0]^T$, $a_2 = [1\ -1\ 0]^T$, the same $C(A)$, $N(A^T)$ are obtained, albeit with a different set of spanning vectors returned by orth.
octave] A=[1 1; 1 -1; 0 0]; disp(orth(A)); disp('-----'); disp(null(A'))
0.70711 0.70711
0.70711 -0.70711
-0.00000 -0.00000
-----
0
0
1
octave]
For $m = 3$, $n = 3$, $A = [a_1\ a_2\ a_3]$ with $a_1 = [1\ 1\ 1]^T$, $a_2 = [1\ -1\ 1]^T$, $a_3 = [3\ -1\ 3]^T$, since $a_3 = a_1 + 2 a_2$, the orthogonality condition $A^T y = 0$ is satisfied by vectors of the form $y = a\,[1\ 0\ -1]^T$, $a \in \mathbb{R}$.
octave] A=[1 1 3; 1 -1 -1; 1 1 3]; disp(orth(A)); disp('-----'); disp(null(A'))
0.69157 0.14741
-0.20847 0.97803
0.69157 0.14741
-----
0.70711
0.00000
-0.70711
octave]
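The colinearity relation $a_3 = a_1 + 2a_2$ and the direction of the null vector found by the Octave session above (up to normalization) can be confirmed in plain Python:

```python
# For A = [1 1 3; 1 -1 -1; 1 1 3], the third column equals a1 + 2*a2,
# and y = [1, 0, -1] is orthogonal to all columns, i.e., A^T y = 0.
A = [[1, 1, 3],
     [1, -1, -1],
     [1, 1, 3]]
a1 = [row[0] for row in A]
a2 = [row[1] for row in A]
a3 = [row[2] for row in A]

print(all(a3[i] == a1[i] + 2 * a2[i] for i in range(3)))   # True

y = [1, 0, -1]
ATy = [sum(A[i][j] * y[i] for i in range(3)) for j in range(3)]
print(ATy == [0, 0, 0])   # True: y spans the left null space
```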
The above low-dimensional examples are useful to gain initial insight into the significance of the spaces $C(A)$, $N(A^T)$. Further appreciation can be gained by applying the same concepts to the processing of images. A gray-scale image of size $p$ by $q$ pixels can be represented as a vector with $m = pq$ components, $b \in \mathbb{R}^m$. Even for a small image with equal numbers of pixels along each direction, $q = p$, the vector would have $m = p^2$ components. An image can be specified as a linear combination of the columns of the identity matrix,
$b = I x = x_1 e_1 + x_2 e_2 + \cdots + x_m e_m$,
with $x_i$ the gray-level intensity in pixel $i$. Similar to the inclined plane example from §1, an alternative description as a linear combination of another set of vectors might be more relevant. One choice of greater utility for image processing mimics the behavior of the set that extends the second example in §1, would be for