A central interest in data science is to seek simple descriptions of complex objects. A typical situation is that many instances of some object of interest are initially given as an m-tuple with large m. Assuming that addition and scaling of such objects can be cogently defined, a vector space is obtained, say over the field of reals ℝ with the Euclidean distance. Examples include recordings of medical data (electroencephalograms, electrocardiograms), sound recordings, or images, for which m can easily reach into the millions. A natural question to ask is whether all m real numbers are actually needed to describe the observed objects. Perhaps instead of specifying all m components of a vector b, it might be possible to state that b is a linear combination of a smaller number of vectors, the columns of a matrix A, b = Ax.
It is then natural to formally define the set of all vectors that could thus be expressed, meaning that they can be reached by linear combination of the columns of A.
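The idea of reaching a vector by linear combination of columns can be sketched numerically; the matrix and coefficients below are hypothetical choices for illustration.

```julia
using LinearAlgebra

# Hypothetical A with n = 2 columns in R^3 (m = 3)
A = [1.0 0.0;
     0.0 2.0;
     1.0 1.0]
x = [3.0; -1.0]        # coefficients of the linear combination
b = A*x                # b = 3*a1 - 1*a2, a vector reached from the columns of A
```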
What now is the relationship between this set of reachable vectors and the entire vector space? The mathematical transcription of this question leads to a consideration of another algebraic structure.
The above states that a vector subspace must be closed under linear combination, and must have the same vector addition and scaling operations as the enclosing vector space. The simplest vector subspace of a vector space V is the null subspace {0} that only contains the null element 0. In fact, any subspace must contain the null element 0, or otherwise closure would not be verified for the particular linear combination 0·u + 0·v = 0. One can think of {0} as the smallest subspace of a vector space. By the above definition, V is also a subspace of itself, intuitively the largest subspace. If a subspace U differs from V, then U is said to be a proper subspace of V, denoted by U < V.
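As a small sketch of closure (with a hypothetical direction vector), a line through the origin contains the null element and any linear combination of its members:

```julia
# Hypothetical line U = { t*d : t in R } through the origin of R^3
d = [1.0; 2.0; 3.0]
u = 2.0*d                  # an element of U
v = -0.5*d                 # another element of U
w = 3.0*u + 4.0*v          # linear combination: w = 4.0*d, still in U
z = 0.0*u + 0.0*v          # the null element, contained in every subspace
```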
Setting the last component equal to zero in the real space ℝ^m defines a proper subspace whose elements can be placed into a one-to-one correspondence with the vectors of ℝ^(m−1). For example, setting the third component of x ∈ ℝ³ equal to zero gives x′ = (x₁, x₂, 0) that, while not a member of ℝ², is in a one-to-one relation with (x₁, x₂) ∈ ℝ². Dropping the last component of y ∈ ℝ³ gives the vector y′ = (y₁, y₂) ∈ ℝ², but this is no longer a one-to-one correspondence since, for some given y′, the last component y₃ could take any value.
∴ |
m=3; x=[1; 2; 0]; xp=x[1:2] |
(1)
∴ |
y=[1; 2; 3]; yp=y[1:2] |
(2)
∴ |
[xp==yp x==y] |
(3)
∴ |
Vector subspaces arise from the decomposition of a vector space, the idea of breaking up a complex object into component parts. The converse, composition of vector spaces, is also defined in terms of linear combination. A vector b = (1, 2, 3) ∈ ℝ³ can be obtained as the linear combination

b = u + v, u = (1, 0, 0), v = (0, 2, 3),

but also as

b = v′ + w, v′ = (0, 2−t, 3), w = (1, t, 0),

for some arbitrary t ∈ ℝ. In the first case, b is obtained as a unique linear combination of a vector from the set U = {(x₁, 0, 0)} with a vector from V = {(0, x₂, x₃)}. In the second case, there is an infinity of linear combinations of a vector from U′ = {(x₁, x₂, 0)} with another from V = {(0, x₂, x₃)} to obtain the vector b. This is captured by a pair of definitions to describe the two types of vector space composition.
Since the same scalar field, vector addition, and scaling are used, it is convenient to refer to vector space sums simply by the sum of the vector sets, U + V or U ⊕ V, instead of specifying the full 4-tuple for each space. This shall be adopted henceforth to simplify the notation.
∴ |
u=[1; 0; 0]; v=[0; 2; 3]; vp=[0; 1; 3]; w=[1; 1; 0]; [u+v vp+w] |
(4)
∴ |
In the above computational example, the essential difference between the two ways to express b is that U ∩ V = {0}, but U′ ∩ V = {(0, x₂, 0)} ≠ {0}, and in general if the zero vector is the only common element of two vector spaces then the sum of the vector spaces becomes a direct sum, U ⊕ V. In general, the set of common elements of two vector subspaces, their intersection, can also be defined.
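One way to compute common elements numerically (a sketch, with hypothetical subspaces) uses the observation that A*x = B*y exactly when [A -B]*[x; y] = 0, so the intersection of two column spans follows from a null space computation:

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 1.0; 0.0 0.0]   # spans the x1x2-plane
B = [0.0 0.0; 1.0 0.0; 0.0 1.0]   # spans the x2x3-plane
Z = nullspace([A -B])             # solutions of A*x = B*y
common = A*Z[1:2, 1]              # a common element: lies on the x2-axis
```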
In practice, the most important procedure to construct direct sums or to check when an intersection of two vector subspaces reduces to the zero vector is through an inner product.
Continuing the above computational example in which the same vector b was obtained through two different linear combinations, b = u + v = v′ + w, the essential difference between the two is that u is orthogonal to v, whereas v′ is not orthogonal to w.
∴ |
[u'*v vp'*w] |
(5)
∴ |
The wide-ranging utility of linear algebra essentially results from a complete characterization of the behavior of a linear mapping f : U → V between vector spaces U, V. For some given linear mapping f, the questions that arise are:
Can any vector within V be obtained by evaluation of f?
Is there a single way that a vector within V can be obtained by evaluation of f?
Linear mappings between real vector spaces ℝⁿ, ℝᵐ have been seen to be completely specified by a matrix A ∈ ℝ^(m×n). It is common to frame the above questions about the behavior of the linear mapping f(x) = Ax through sets associated with the matrix A. To frame an answer to the first question, a set of reachable vectors is first defined, the column space C(A) = {b ∈ ℝᵐ : b = Ax, x ∈ ℝⁿ}.
By definition, the column space is included in the co-domain of the function f, C(A) ⊆ ℝᵐ, and is readily seen to be a vector subspace of ℝᵐ. Having defined the set of vectors reachable by linear combination, two questions arise:
Is the column space the entire co-domain, C(A) = ℝᵐ? This would signify that any vector b ∈ ℝᵐ can be reached by linear combination of the columns of A.
What co-domain vectors are not reachable by linear combination of the columns of A?
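Whether the column space fills the entire co-domain can be checked from the rank: C(A) = ℝᵐ exactly when rank(A) = m. A sketch with hypothetical matrices:

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 1.0; 0.0 0.0]   # rank 2 < m = 3: part of R^3 is unreachable
B = [1.0 0.0 1.0; 0.0 1.0 1.0]    # rank 2 = m = 2: all of R^2 is reachable
fullA = rank(A) == size(A, 1)
fullB = rank(B) == size(B, 1)
```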
Consider the orthogonal complement of C(A), defined as the set of vectors orthogonal to all of the column vectors of A, expressed through inner products as

a₁ᵀy = 0, a₂ᵀy = 0, …, aₙᵀy = 0.

This can be expressed more concisely through the transpose operation,

Aᵀy = 0,

and leads to the definition of a set of vectors, the left null space, for which

N(Aᵀ) = {y ∈ ℝᵐ : Aᵀy = 0}.

Note that the left null space N(Aᵀ) is also a vector subspace of the co-domain of f(x) = Ax, namely ℝᵐ. The above definitions suggest that both the matrix A and its transpose Aᵀ play a role in characterizing the behavior of the linear mapping, so analogous sets are defined for the transpose: the row space C(Aᵀ) and the null space N(A) = {x ∈ ℝⁿ : Ax = 0}.
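The orthogonality between the columns of a matrix and its left null space can be verified numerically; a sketch with a hypothetical matrix:

```julia
using LinearAlgebra

# Columns of A span C(A); nullspace(A') spans the left null space N(A').
A = [1.0 1.0;
     1.0 -1.0;
     0.0 0.0]
Y = nullspace(A')       # basis of N(A'), here the x3-axis
R = A'*Y                # inner products of columns of A with the basis of N(A')
```

Each entry of R is an inner product of a column of A with a left null space basis vector, hence numerically zero.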
The concepts of Euclidean geometry are widely used to characterize subspaces of a vector space. Consider the familiar example of the Euclidean 2-space, or plane,

ℝ² = {(x₁, x₂) : x₁, x₂ ∈ ℝ}.

As is common, the vector space representing the plane is referred to either by its full name or in shorthand form as ℝ². The trivial subspaces of ℝ² are the zero vector space {0}, and ℝ² itself. Any line passing through the origin is a non-trivial subspace with

{t(d₁, d₂) : t ∈ ℝ}, (d₁, d₂) ≠ (0, 0).

In particular, the axes are

{(x₁, 0) : x₁ ∈ ℝ}, {(0, x₂) : x₂ ∈ ℝ},

respectively. In ℝ³, the non-trivial subspaces are lines passing through the origin,

{t d : t ∈ ℝ}, d ∈ ℝ³, d ≠ 0,

and planes passing through the origin,

{x ∈ ℝ³ : nᵀx = 0},

where n is the normal vector of the plane. An intuitive understanding of subspace geometry is essential and is built up from instructive computational examples.
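Membership of such geometric subspaces reduces to simple arithmetic; for a plane through the origin with normal n (hypothetical values below), the condition is nᵀx = 0:

```julia
n = [1.0; 1.0; 1.0]      # normal of the plane x1 + x2 + x3 = 0
x = [1.0; -2.0; 1.0]     # 1 - 2 + 1 = 0: lies in the plane
y = [1.0; 0.0; 0.0]      # 1 != 0: does not lie in the plane
in_x = (n'*x == 0)
in_y = (n'*y == 0)
```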
Examples. Consider a linear mapping between real spaces, f : ℝⁿ → ℝᵐ, defined by f(x) = Ax, with A ∈ ℝ^(m×n). Julia provides the nullspace function to return a set of vectors that span a null space. A function colspace to provide a set of vectors spanning the column space is not yet in the general libraries, but can be readily defined, together with a function to display numerical results to a default precision of 6 digits.
∴ |
using LinearAlgebra; function colspace(A,p=6) return round.(Matrix(qr(A).Q)[:,1:rank(A)],digits=p) end; |
∴ |
short(x) = round(x,digits=6); |
∴ |
short(pi) |
∴ |
With these functions defined, the following examples provide great insight into the significance of column and null spaces and their associated spanning sets. For these small-dimensional, simple examples, geometric insight is sufficient to understand what column and null spaces represent. Computational procedures can be devised for much higher numbers of components, and the geometric insights gained here carry over.
For

A = [1; 0; 0],

the column space C(A) is the x₁-axis, and the left null space N(Aᵀ) is the x₂x₃-plane, since the condition Aᵀy = 0 reduces to y₁ = 0. Spanning vector sets for C(A) and N(Aᵀ) can be computed as follows, confirming the previous geometric descriptions. Note that combining the two leads to the identity matrix, an observation whose significance will soon become apparent.
∴ |
A=[1; 0; 0]; colspace(A) |
(6)
∴ |
nullspace(A') |
(7)
∴ |
[colspace(A) nullspace(A')] |
(8)
∴ |
For

A = [1 -1; 0 0; 0 0],

the columns of A are collinear, a₂ = −a₁, the column space is the x₁-axis, and the left null space is the x₂x₃-plane, as before.
∴ |
A=[1 -1; 0 0; 0 0]; CA=colspace(A) |
(9)
∴ |
NAt=short.(nullspace(A')) |
(10)
∴ |
[CA NAt] |
(11)
∴ |
For

A = [1 0; 0 1; 0 0],

the column space is the x₁x₂-plane, and the left null space is the x₃-axis.
∴ |
A=[1 0; 0 1; 0 0]; CA=colspace(A) |
(12)
∴ |
NAt=short.(nullspace(A')) |
(13)
∴ |
[CA NAt] |
(14)
∴ |
For

A = [1 1; 1 -1; 0 0],

the same C(A), N(Aᵀ) are obtained, albeit with a different set of spanning vectors returned by colspace.
∴ |
A=[1 1; 1 -1; 0 0]; CA=colspace(A) |
(15)
∴ |
NAt=short.(nullspace(A')) |
(16)
∴ |
[CA NAt] |
(17)
∴ |
For

A = [1 1 3; 1 -1 -1; 1 1 3],

the third column is a linear combination of the first two, a₃ = a₁ + 2a₂, so the column space is two-dimensional. Since Aᵀy = 0 reduces to y₁ + y₂ + y₃ = 0 and y₁ − y₂ + y₃ = 0, the orthogonality condition is satisfied by vectors of form (a, 0, −a), a ∈ ℝ.
∴ |
A=[1 1 3; 1 -1 -1; 1 1 3]; CA=colspace(A) |
(18)
∴ |
NAt=short.(nullspace(A')) |
(19)
∴ |
[CA NAt] |
(20)
∴ |