
Section 2.1 Dot Product

Subsection 2.1.1 Definition

Earlier, I said that I couldn’t multiply two vectors together. That’s mostly true: there is no general product of two vectors \(uv\) that is itself still a vector, at least none that has any useful or reasonable geometric meaning. However, there are other kinds of ‘multiplication’ which combine two vectors. The operation defined in this lecture multiplies two vectors, but the result is a scalar.

Definition 2.1.1.

The dot product or inner product or scalar product of two vectors \(u\) and \(v\) is given by the following formula.
\begin{equation*} u \cdot v = \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} \cdot \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = u_1 v_1 + u_2 v_2 + \ldots + u_n v_n \end{equation*}
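For example, here is the dot product of two specific vectors in \(\RR^3\) (a pair chosen just for illustration).
\begin{equation*} \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \cdot \begin{pmatrix} 4 \\ -1 \\ 2 \end{pmatrix} = (1)(4) + (2)(-1) + (3)(2) = 4 - 2 + 6 = 8 \end{equation*}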
I can think of the dot product as a scalar measure of the similarity of direction between the two vectors. If the two vectors point in a similar direction, their dot product is large, but if they point in very different directions, their dot product is small. This is a bit strange, since there already is a measure, at least in \(\RR^2\text{,}\) of this difference: the angle between two vectors. Thankfully, the two measures of difference agree, and the dot product can be expressed in terms of angles.
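To illustrate with another pair of vectors of my own choosing: in \(\RR^2\text{,}\) \(\begin{pmatrix} 1 \\ 0 \end{pmatrix} \cdot \begin{pmatrix} 2 \\ 1 \end{pmatrix} = 2\) while \(\begin{pmatrix} 1 \\ 0 \end{pmatrix} \cdot \begin{pmatrix} -2 \\ 1 \end{pmatrix} = -2\text{.}\) The first pair points in broadly similar directions and gives a positive dot product; the second pair points in nearly opposite directions and gives a negative one.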

Definition 2.1.2.

The angle \(\theta\) between two non-zero vectors \(u\) and \(v\) in \(\RR^n\) is given by the following equation.
\begin{equation*} \cos \theta = \frac{u \cdot v}{|u||v|} \end{equation*}
This definition agrees with the angles in \(\RR^2\) and \(\RR^3\text{,}\) which I can visualize. However, it serves as a new definition for angles between vectors in \(\RR^n\) for all \(n \geq 4\text{.}\) Since I can’t visualize those spaces, I don’t have a way of drawing angles and calculating them with conventional trigonometry. This definition allows me to extend angles in a completely algebraic way. Note that this angle satisfies \(\theta \in [0, \pi]\text{,}\) since I can always choose the smallest possible angle between two vectors. This means that the inverse cosine is unique for this calculation.
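Here is a small example of this algebraic extension, with two vectors in \(\RR^4\) chosen just for illustration. Take \(u = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix}\) and \(v = \begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix}\text{.}\) Then \(u \cdot v = 1\) and \(|u| = |v| = \sqrt{2}\text{,}\) so
\begin{equation*} \cos \theta = \frac{u \cdot v}{|u||v|} = \frac{1}{\sqrt{2}\sqrt{2}} = \frac{1}{2} \end{equation*}
and \(\theta = \frac{\pi}{3}\text{.}\) Even though I can’t draw these vectors, the definition still produces a well-defined angle.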

Definition 2.1.3.

Two vectors \(u\) and \(v\) in \(\RR^n\) are called orthogonal or perpendicular or normal if \(u \cdot v = 0\text{.}\)
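For example, the vectors \(\begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}\) and \(\begin{pmatrix} 3 \\ 1 \\ 5 \end{pmatrix}\) in \(\RR^3\) (chosen just for illustration) are orthogonal, since their dot product is \(3 + 2 - 5 = 0\text{.}\) Equivalently, the angle between them is \(\frac{\pi}{2}\text{,}\) since \(\cos \theta = 0\) on \([0,\pi]\) exactly when \(\theta = \frac{\pi}{2}\text{.}\)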

Subsection 2.1.2 Properties of the Dot Product

There are many pairs of orthogonal vectors. Thinking of the dot product as a multiplication, I have uncovered a serious difference between the dot product and the conventional multiplication of numbers. If \(a,b\in \RR\text{,}\) then \(ab=0\) implies that one of \(a\) or \(b\) must be zero. For vectors, it can be true that \(\begin{pmatrix} 1 \\ 0 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 1 \end{pmatrix} = 0\) even though neither factor in the product is the zero vector. Here is a definition to keep track of this new property.

Definition 2.1.4.

Assume \(A\) is a set with addition and some kind of multiplication. Also assume that \(0 \in A\text{.}\) If \(u \neq 0\) and \(v \neq 0\) but \(uv = 0\text{,}\) then \(u\) and \(v\) are called zero divisors.
An important property of ordinary numbers, as I just noted, is that there are no zero divisors. Other algebraic structures, such as vectors with the dot product, may have many zero divisors.
Now that I have defined a new operation, it is useful to see how it interacts with previously defined structures; Proposition 2.1.5 collects the basic properties of the dot product. I’ll take up the proofs of these properties in Section 2.4 and in the activity for this week.
In \(\RR^2\text{,}\) norms and dot products allow me to recreate some well-known geometric constructions. For example, now that I have defined lengths and angles, I can state the cosine law in terms of vectors: if \(\theta\) is the angle between \(u\) and \(v\text{,}\) then \(|u-v|^2 = |u|^2 + |v|^2 - 2|u||v| \cos \theta\text{.}\) The vector relationships of the cosine law are visualized in Figure 2.1.6.
Figure 2.1.6. The Cosine Law

Proof.

To prove the cosine law using vectors, I’m going to interpret the left side, \(|u-v|^2\text{,}\) as \((u-v) \cdot (u-v)\text{,}\) using the second identity in Proposition 2.1.5, which says that the square of the length of a vector is its dot product with itself. Then I can go through some algebraic manipulation with the dot product. First, I’ll distribute the dot product.
\begin{equation*} |u-v|^2 = (u-v) \cdot (u-v) = u \cdot u - v \cdot u - u \cdot v + v \cdot v \end{equation*}
Then I can swap the order of the factors in the second term, since the dot product is commutative.
\begin{equation*} |u-v|^2 = u \cdot u - u \cdot v - u \cdot v + v \cdot v \end{equation*}
Then I can group the middle terms.
\begin{equation*} |u-v|^2 = u \cdot u - 2u \cdot v + v \cdot v \end{equation*}
Then I can interpret the first and last terms as length squared.
\begin{equation*} |u-v|^2 = |u|^2 - 2u \cdot v + |v|^2 \end{equation*}
Then I can reorder the terms.
\begin{equation*} |u-v|^2 = |u|^2 + |v|^2 - 2u \cdot v \end{equation*}
Finally, using the definition of angle, I can replace the dot product with \(|u||v| \cos \theta\text{,}\) where \(\theta\) is the angle between the two vectors.
\begin{equation*} |u-v|^2 = |u|^2 + |v|^2 - 2|u||v| \cos \theta \end{equation*}
This is the cosine law.
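As a quick check of the formula, consider the special case where \(u\) and \(v\) are orthogonal. Then \(\cos \theta = 0\) and the last term vanishes, so the cosine law reduces to a vector form of the Pythagorean theorem.
\begin{equation*} |u-v|^2 = |u|^2 + |v|^2 \end{equation*}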