
Section 9.3 Abstract Vector Spaces

Subsection 9.3.1 Abstract and Euclidean Space

In Subsection 9.1.1, I defined the abstract structure of a group: an idea that ties together patterns of transformations. Here, I want to do much the same. Instead of talking about groups, I want to talk much more holistically about what we’ve accomplished so far in this course. This course began with \(\RR^n\text{,}\) real euclidean space, and I presented it all as geometry: I wanted to understand euclidean space and its structures. What structures have I found? Several, but notably these: addition of vectors; scalar multiplication; linear combinations; linear independence; spans; subspaces; and transformations.
Now I want to abstract these structures. The same way a group was an abstraction to talk about collections of invertible transformations, I want an abstraction to talk about these linear algebra definitions. What is the pattern that uses these terms, these definitions? How can I use these ideas as broadly as possible?
This is the main strength of an abstraction: its broad application. An abstraction is difficult to understand, being removed from its source and turned into just a pattern, but the abstraction applies more broadly than its source. If I abstract a structure, I can use it in many places, even if it is harder to understand than just a concrete instance.
I’m looking to abstract euclidean space, the space of vectors. What other mathematical things have the same kind of structure? It all leads to this definition.

Definition 9.3.1.

A set \(V\) is an abstract vector space (over the real numbers) if it satisfies three properties.
  1. There is an addition operation on \(V\text{.}\) For \(u,v \in V\text{,}\) I can perform \(u + v\) and get some other element of \(V\text{.}\) This addition is associative and commutative.
  2. There is a scalar multiplication operation on \(V\text{.}\) For \(u \in V\) and for any real number \(a \in \RR\text{,}\) there is a way to calculate \(au\) and get some new element of \(V\text{.}\)
  3. The scalar multiplication is distributive over the addition. That is, for any real number \(a \in \RR\) and \(u,v \in V\text{,}\) it is true that \(a(u + v) = au + av\text{.}\)
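For orientation, euclidean space itself fits this pattern: in \(\RR^n\text{,}\) the familiar componentwise addition and scalar multiplication supply all three properties, with distributivity inherited from the arithmetic of real numbers.
\begin{equation*} (u_1, \ldots, u_n) + (v_1, \ldots, v_n) = (u_1 + v_1, \ldots, u_n + v_n) \qquad a(u_1, \ldots, u_n) = (au_1, \ldots, au_n) \end{equation*}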
This is how I extend the idea of a vector space. The things in \(V\) are usually no longer vectors. But since they have the linear operations of addition and scalar multiplication, they act enough like vectors to use the language of linear algebra. Just these properties are enough to talk about linear combinations, linear independence, spans, bases, transformations, images, kernels, and many other terms from the course. This adds great value to everything we’ve done so far in this course, because suddenly there are all these new environments for the use of these terms and ideas. In many ways, this definition justifies the important place that linear algebra has in academic mathematics. Vectors and their structures are important, but linear algebra is so central, in great part, because its terms are used so ubiquitously in other settings.
Before I get to examples, I must also note that something is lost. In any abstraction process, abstracting a structure leaves some particulars behind. What is lost here?
  • Most obviously, the things in an abstract vector space are no longer vectors: that is, they need not be finite lists of numbers representing positions in space.
  • Losing explicit vectors means losing geometric interpretation, for the most part. The elements of an abstract vector space may no longer have the geometric meaning of vectors. In particular, the whole structure of loci may be gone, or at least radically changed.
  • The matrix interpretation of transformations came from the matrix action on vectors. For other objects, that action might not work any more, or at least not as easily. Linear transformations may no longer be matrices.
  • Since both matrices and geometry are lost, ideas like impact on volume and orientation probably don’t make sense anymore, so determinants are also lost.
  • Finally and most interestingly, the finite dimensional nature of \(\RR^n\) may also be lost. Many abstract vector spaces are infinite dimensional, as I’ll show shortly in the examples.

Subsection 9.3.2 Examples of Abstract Vector Spaces

Example 9.3.2.

The set \(P(\RR)\) consists of all polynomials in the variable \(x\) with real coefficients.
  • I can add polynomials: if \(p(x)\) and \(q(x)\) are polynomials, then \(p(x) + q(x)\) is still a polynomial.
  • I can multiply polynomials by scalars: if \(a \in \RR\) is a real number and \(p(x)\) is a polynomial, then \(ap(x)\) is still a polynomial.
  • Since all the operations here are inherited from the ordinary arithmetic of numbers and variables, the distributive law of multiplication over addition applies to scalar multiplication and polynomial addition.
Having satisfied the three criteria, this set of polynomials is an abstract vector space. Therefore, I can make claims like the following.
  • The polynomial \(4x^2 - 4\) is in \(\Span \{x+4, x^2 + x + 3 \}\text{.}\) The span is all linear combinations, so I can justify this claim by a demonstration.
    \begin{equation*} 4x^2 - 4 = 4(x^2 + x + 3) - 4(x + 4) \end{equation*}
    This equation shows that \(4x^2 - 4\) is a linear combination of the spanning vectors, therefore a member of the span.
  • The set of polynomials \(\{ x^3 - 4, x^5 - x^2, 3x^4 \}\) is linearly independent. To prove this, I would write the equation \(a(x^3 - 4) + b(x^5 - x^2) + c(3x^4) = 0\) and prove that the only solution is \(a = b = c = 0\text{,}\) as I sketch just after this list.
  • The set \(\Span \{x^3, x^7, x^{10}, x^{12}\}\) has dimension four. To prove this, I would have to prove that the four polynomials are a basis, which I would do by proving that they are linearly independent.
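Here is that sketch for the second claim. Collecting the left side of the equation by degree gives
\begin{equation*} a(x^3 - 4) + b(x^5 - x^2) + c(3x^4) = bx^5 + 3cx^4 + ax^3 - bx^2 - 4a \end{equation*}
A polynomial is the zero polynomial only when every coefficient is zero, so \(b = 0\text{,}\) \(3c = 0\text{,}\) \(a = 0\text{,}\) \(-b = 0\) and \(-4a = 0\text{,}\) which forces \(a = b = c = 0\text{.}\)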
As I have already mentioned, one of the stranger things about abstract vector spaces is that they can be infinite dimensional. This space of polynomials is the first example of an infinite dimensional vector space. What does that mean? Well, the definition of dimension came from a basis: dimension is the number of elements in a basis. So, what is a basis for the polynomials?
A basis is a spanning set, so I need some core elements such that any other polynomial can be built out of them. One clear choice is the monomials: \(\{1, x, x^2, x^3, x^4, x^5, \ldots \}\text{.}\) Is this a linearly independent set? Yes, it is. There is no way to get a monomial as a linear combination of monomials of different degrees. (For example, there is no way to write \(x^4\) as \(a + bx + cx^2 + dx^3 + ex^5\) and so on. The only way to get the power \(x^4\) from some other monomial is to multiply or divide by the variable, and linear combinations only allow multiplication by scalars and addition.) This is a linearly independent set.
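The spanning claim can be illustrated with one concrete polynomial (chosen purely as an example):
\begin{equation*} 2 - 7x + 5x^3 = 2(1) + (-7)x + 5(x^3) \end{equation*}
Any polynomial decomposes in exactly the same way, since a polynomial is, by definition, a finite linear combination of monomials.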
So I have this linearly independent set \(\{ 1, x, x^2, x^3, \ldots \}\text{.}\) By writing it this way, I imply that the set never stops, that it includes \(x^n\) for all \(n \in \NN\text{.}\) I imply that it is an infinite set. Does it need to be? Well, if it were finite, then there would be a highest monomial, some \(x^k\text{.}\) Even if \(k\) is a very, very large number, I can still ask about the polynomial \(p(x) = x^{k+1}\text{.}\) There is no limit on the degree of a polynomial. Therefore, no finite subset works. All of these infinitely many monomials are necessary to build all the polynomials. Therefore, they are a basis. Since the basis is infinite, the abstract vector space is infinite dimensional.

Example 9.3.3.

The set \(C^\infty(\RR)\) is the set of all convergent power series in the variable \(x\) with domain \(\RR\text{.}\)
  • I can add power series: if \(\sum_{n=0}^\infty f_n x^n\) and \(\sum_{n=0}^\infty g_n x^n\) are power series, then their sum \(\sum_{n=0}^\infty (f_n + g_n) x^n\) is still a power series which converges on all of \(\RR\text{.}\)
  • I can multiply power series by scalars: if \(a \in \RR\) is a real number and \(\sum_{n=0}^\infty f_n x^n\) is a convergent power series, then \(a\sum_{n=0}^\infty f_n x^n = \sum_{n=0}^\infty af_n x^n\) is still a convergent power series.
  • Since all the operations here are inherited from the ordinary arithmetic of numbers and variables, the distributive law of multiplication over addition applies to scalar multiplication and power series addition.
Having satisfied the three criteria, this set of convergent power series is an abstract vector space. As with polynomials, I could make linear combinations and spans, ask about linear independence and bases, and otherwise use the tools of linear algebra to talk about series.
Like the polynomials, this is also an infinite dimensional abstract vector space. Justifying this is not too difficult, since any polynomial is also a power series: just let all the higher coefficients be zero. Therefore \(\Span \{ 1, x, x^2, x^3, \ldots \}\) is already contained within the power series. Since it contains an infinite dimensional subspace, the whole space must itself be infinite dimensional. However, actually finding a basis for the power series is tricky. First and foremost, why doesn’t \(\Span\{ 1, x, x^2, x^3, \ldots \}\) produce all the power series? A series is, after all, a sum \(f(x) = f_0 + f_1x + f_2x^2 + f_3x^3 + f_4x^4 + \ldots \text{.}\) This looks like something I can get from the span of monomials. Can’t I?
I can’t, but the reason is a strange subtlety, one that I haven’t had to deal with yet. In all of the sums of linear algebra, such as the sums that make linear combinations, the sums must be finite. I never had to worry about this with actual vectors, since I never had to consider infinite sums of vectors. But here, an infinite series is defined by an infinite sum. Therefore, the monomials are not a basis: series are not (finite) linear combinations of monomials.
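One concrete example makes the point (the exponential series is my choice here; any series with infinitely many nonzero coefficients would do). The series
\begin{equation*} e^x = \sum_{n=0}^\infty \frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \ldots \end{equation*}
converges for all of \(\RR\text{,}\) but it has infinitely many nonzero coefficients. Any finite linear combination of monomials is a polynomial, so this series lies outside \(\Span \{1, x, x^2, x^3, \ldots \}\text{.}\)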
So what is a basis for the infinite series? It would have to be some infinite set of linearly independent infinite series such that any series could be written as a (finite!) linear combination of the basis elements. Such a basis would be very difficult to find.

Example 9.3.4.

The set \(C^0(\RR)\) is the set of all continuous functions with domain \(\RR\text{.}\)
  • I can add continuous functions: if \(f(x) \) and \(g(x)\) are two continuous functions defined for all real numbers, then \(f(x) + g(x)\) is still a function, still has domain \(\RR\text{,}\) and is still continuous.
  • I can multiply continuous functions by scalars: if \(a \in \RR\) is a real number and \(f(x)\) is a continuous function, then \(af(x)\) is also a continuous function defined for all real numbers.
  • Since all the operations here are inherited from the ordinary arithmetic of numbers and variables, the distributive law of multiplication over addition applies to scalar multiplication and the addition of continuous functions.
Having satisfied the three criteria, this set of continuous functions is an abstract vector space. As with polynomials and series above, I could make linear combinations and spans, ask about linear independence and bases, and otherwise use the tools of linear algebra to talk about continuous functions.
Like the polynomials and infinite series, this is also an infinite dimensional abstract vector space. Again, I can justify this claim simply by the fact that polynomials themselves are continuous functions defined on all real numbers. Since there is a subset which is already infinite dimensional, the whole set must also be infinite dimensional.
The basis question here is extremely fraught, even more so than for infinite series. There are many, many, many continuous functions: any path that I can draw in \(\RR^2\) that extends to infinity in both directions and satisfies a vertical line test is the graph of a continuous function. The variety here is, literally, endless. It’s difficult to even consider the idea of a basis. If I were to investigate this question, I would find that this abstract vector space is, in fact, uncountably infinite dimensional (for those who know what that means). Whether such an abstract vector space even has a basis is not at all clear: this relates to some very deep and important questions at the very core of mathematics. For more information, ask me sometime about the Axiom of Choice and its implications.
Even though this abstract vector space of continuous functions is unbelievably enormous, considering it as an abstract vector space is still very useful. In particular, the solutions to differential equations are often continuous functions. The space of solutions (since there are often infinitely many solutions) is a set of functions. In many instances, this set can be described as a span of finitely many functions: it has a dimension. To describe these solutions, I need a basis: I need to find linearly independent functions to span this solution space. In this way, the language of linear algebra gets used to describe solutions to differential equations, and solutions to differential equations benefit greatly from this description.
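A standard illustration (drawn from differential equations rather than from anything earlier in this section): every solution of the equation \(y'' + y = 0\) on \(\RR\) has the form
\begin{equation*} y(x) = a \cos x + b \sin x \qquad a, b \in \RR \end{equation*}
so the solution space is exactly \(\Span \{ \cos x, \sin x \}\text{,}\) a two dimensional subspace of the continuous functions with basis \(\{\cos x, \sin x\}\text{.}\)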