
Section 3.4 Proofs - Linear Subspaces

Subsection 3.4.1 Proving Some Properties of Linear Subspaces

Throughout the course, I want to keep developing the idea of mathematical proof. I’ll do two proofs using the material from this section of the course.

Proposition. If \(L_1\) and \(L_2\) are linear subspaces of \(\RR^n\text{,}\) then the intersection \(L_1 \cap L_2\) is also a linear subspace.

Proof.

A linear subspace is a subset of \(\RR^n\) with two properties: closed under vector addition and closed under scalar multiplication. If \(L_1\) and \(L_2\) are subspaces, I have to check that both properties hold for the intersection \(L_1 \cap L_2\text{.}\)
If two vectors \(u\) and \(v\) are in the intersection, then they are contained in both \(L_1\) and \(L_2\text{.}\) Since they are both in \(L_1\) and \(L_1\) is a subspace (meaning closed under vector addition), the sum \(u + v\) is also in \(L_1\text{.}\) The same is true for \(L_2\text{.}\) Therefore, the sum \(u + v\) is in both subspaces, thus in the intersection. Since I made no assumptions about the vectors, this shows the intersection is closed under vector addition.
A very similar argument works for scalar multiplication. If a vector \(u\) is in the intersection (hence in both subspaces) and \(a\) is a scalar, then since each subspace is closed under scalar multiplication, the scalar multiple \(au\) is in both subspaces, hence in the intersection. This shows the intersection is closed under scalar multiplication.
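As a concrete sketch of this argument (my own example, not part of the text), take two subspaces of \(\RR^3\text{:}\) the \(xy\)-plane and the \(xz\)-plane, whose intersection is the \(x\)-axis. The check below verifies closure of the intersection for specific vectors, mirroring the two steps of the proof.

```python
# Concrete check of closure for the intersection of two subspaces of R^3.
# L1 is the xy-plane (z = 0), L2 is the xz-plane (y = 0); both are
# subspaces, and their intersection is the x-axis.

def in_L1(v):
    """Membership test for the xy-plane."""
    return v[2] == 0

def in_L2(v):
    """Membership test for the xz-plane."""
    return v[1] == 0

def in_intersection(v):
    return in_L1(v) and in_L2(v)

u = (1, 0, 0)     # a vector in the intersection (on the x-axis)
v = (-3, 0, 0)    # another vector in the intersection
a = 5             # an arbitrary scalar

u_plus_v = tuple(ui + vi for ui, vi in zip(u, v))
a_times_u = tuple(a * ui for ui in u)

print(in_intersection(u_plus_v))   # closed under addition
print(in_intersection(a_times_u))  # closed under scalar multiplication
```

Of course, checking a few specific vectors is not a proof; the point of the argument above is that it works for every choice of \(u\text{,}\) \(v\text{,}\) and \(a\text{.}\)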

Proof.

I stated that loci were linear or affine subspaces and that loci through the origin were linear subspaces. However, I never proved this. Here, I’m going to prove it for the special case of a hyperplane.
A hyperplane is the locus of one linear equation.
\begin{equation*} a_1x_1 + a_2x_2 + \ldots + a_nx_n = c \end{equation*}
However, if the hyperplane contains the origin, the constant \(c\) must be zero. (I stated this earlier; to argue it directly, note that if all the \(x_i\) are set to zero, the left side of the equation is zero, so the right side \(c\) must also be zero.)
\begin{equation*} a_1x_1 + a_2x_2 + \ldots + a_nx_n = 0 \end{equation*}
All points on the hyperplane satisfy this equation, and I will use this fact to prove the proposition. To make this easier, I will write the equation in terms of dot products. If \(n\) is the vector with components \(a_i\) (\(n\) for normal) and \(x\) is the vector with components \(x_i\text{,}\) then the equation is
\begin{equation*} n \cdot x = 0 \text{.} \end{equation*}
Let \(u\) and \(v\) be vectors on the hyperplane; then \(n \cdot u = 0\) and \(n \cdot v = 0\text{.}\) Then I can consider
\begin{equation*} n \cdot (u + v) \text{.} \end{equation*}
The dot product is linear, so it distributes here.
\begin{equation*} = n \cdot u + n \cdot v \end{equation*}
I just said that both of these dot products are zero, so the equation gives \(0 + 0 = 0\text{.}\) Therefore, \(u + v\) also satisfies the equation of the hyperplane, so it is contained in the hyperplane.
I do something very similar with the dot product to prove that the hyperplane is closed under scalar multiplication. If \(a\) is a scalar, I consider the vector \(au\) and check whether it satisfies the equation of the hyperplane. I start with the dot product with the normal.
\begin{equation*} n \cdot (au) \end{equation*}
By the properties of the dot product, again, I can pull out the scalar.
\begin{equation*} = a (n \cdot u) \end{equation*}
Since \(u\) is on the hyperplane, \(n \cdot u = 0\text{,}\) so this gives \(a(0) = 0\text{.}\) Therefore \(au\) also satisfies the equation, which shows that the scalar multiple is also on the hyperplane.
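Here is a numerical sketch of the hyperplane argument (my own example, not from the text): the plane \(x_1 + 2x_2 - x_3 = 0\) in \(\RR^3\) passes through the origin and has normal vector \(n = (1, 2, -1)\text{.}\) The check below confirms that sums and scalar multiples of vectors on this plane stay on the plane, exactly as the dot product calculation predicts.

```python
# Numerical check of closure for a hyperplane through the origin in R^3:
# the plane x1 + 2*x2 - x3 = 0, with normal vector n = (1, 2, -1).

def dot(x, y):
    """Dot product of two vectors given as tuples."""
    return sum(xi * yi for xi, yi in zip(x, y))

n = (1, 2, -1)               # normal vector of the hyperplane

def on_hyperplane(x):
    return dot(n, x) == 0    # x satisfies n . x = 0

u = (2, 0, 2)                # on the plane: 2 + 0 - 2 = 0
v = (0, 1, 2)                # on the plane: 0 + 2 - 2 = 0
a = -4                       # an arbitrary scalar

u_plus_v = tuple(ui + vi for ui, vi in zip(u, v))
a_times_u = tuple(a * ui for ui in u)

# n . (u + v) = n . u + n . v = 0 + 0 = 0, and n . (au) = a(n . u) = 0,
# so both new vectors should still lie on the hyperplane.
print(on_hyperplane(u_plus_v))   # True
print(on_hyperplane(a_times_u))  # True
```

Again, this only tests particular vectors; the proof shows the same conclusion holds for all vectors on the hyperplane and all scalars.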