
Section 8.5 Week 8 Activity

Subsection 8.5.1 Calculating Determinants

Activity 8.5.1.

Calculate the determinant of this matrix. Show the steps in the cofactor expansion.
\begin{equation*} \left( \begin{matrix} 4 \amp 0 \amp -8 \\ -1 \amp -2 \amp 3 \\ 5 \amp 0 \amp 0 \end{matrix} \right) \end{equation*}
Solution.
I do cofactor expansion along the second column to take advantage of the zeros in that column.
\begin{align*} \amp \left| \begin{matrix} 4 \amp 0 \amp -8 \\ -1 \amp -2 \amp 3 \\ 5 \amp 0 \amp 0 \end{matrix} \right| \\ \amp = (-1)(0) \left| \begin{matrix} -1 \amp 3 \\ 5 \amp 0 \end{matrix} \right| + (1)(-2) \left| \begin{matrix} 4 \amp -8 \\ 5 \amp 0 \end{matrix} \right| + (-1)(0) \left| \begin{matrix} 4 \amp -8 \\ -1 \amp 3 \end{matrix} \right| \end{align*}
The terms multiplied by zero are obviously zero. For the middle term, I use the formula for a \(2 \times 2\) determinant.
\begin{align*} \amp = 0 - 2 \left[ (4)(0) - (5)(-8) \right] + 0 \\ \amp = -2 (40) = -80 \end{align*}
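The same cofactor expansion can be scripted as a quick check; this is a sketch in plain Python, not part of the original hand computation, and the helper names are my own.

```python
# Cofactor expansion of a 3x3 determinant down a chosen column,
# mirroring the hand computation above.

def det2(m):
    # Determinant of a 2x2 matrix [[a, b], [c, d]] is ad - bc.
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(m, i, j):
    # Delete row i and column j.
    return [[m[r][c] for c in range(3) if c != j]
            for r in range(3) if r != i]

def det3_along_column(m, j):
    # Sum of sign * entry * (2x2 minor determinant) down column j.
    return sum((-1) ** (i + j) * m[i][j] * det2(minor(m, i, j))
               for i in range(3))

A = [[4, 0, -8],
     [-1, -2, 3],
     [5, 0, 0]]

print(det3_along_column(A, 1))  # expansion along the second column: -80
```

Expanding along any other column gives the same value, which is a handy way to catch sign errors in the cofactor pattern.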

Activity 8.5.2.

Calculate the determinant of this matrix. Show the steps in the cofactor expansion.
\begin{equation*} \left( \begin{matrix} 0 \amp -2 \amp -2 \\ 1 \amp -6 \amp 1 \\ -3 \amp 0 \amp 1 \end{matrix} \right) \end{equation*}
Solution.
I do cofactor expansion along the first row.
\begin{align*} \amp \left| \begin{matrix} 0 \amp -2 \amp -2 \\ 1 \amp -6 \amp 1 \\ -3 \amp 0 \amp 1 \end{matrix} \right| \\ \amp = (1)(0) \left| \begin{matrix} -6 \amp 1 \\ 0 \amp 1 \end{matrix} \right| + (-1)(-2) \left| \begin{matrix} 1 \amp 1 \\ -3 \amp 1 \end{matrix} \right| + (1)(-2) \left| \begin{matrix} 1 \amp -6 \\ -3 \amp 0 \end{matrix} \right| \end{align*}
The term multiplied by zero goes away. For the other two terms, I use the formula for \(2 \times 2\) determinants.
\begin{align*} \amp = 0 + 2 \left[ (1)(1) - (1)(-3) \right] + (-2) \left[ (1)(0) - (-3)(-6) \right] \\ \amp = 0 + 8 + 36 = 44 \end{align*}

Activity 8.5.3.

Calculate the determinant of this matrix. Show the steps in the cofactor expansion.
\begin{equation*} \left( \begin{matrix} 6 \amp -1 \amp 3 \\ -4 \amp -2 \amp 0 \\ -2 \amp -5 \amp 3 \end{matrix} \right) \end{equation*}
Solution.
I do cofactor expansion along the third column, to take advantage of the zero term.
\begin{align*} \amp \left| \begin{matrix} 6 \amp -1 \amp 3 \\ -4 \amp -2 \amp 0 \\ -2 \amp -5 \amp 3 \end{matrix} \right| \\ \amp = (1)(3) \left| \begin{matrix} -4 \amp -2 \\ -2 \amp -5 \end{matrix} \right| + (-1)(0) \left| \begin{matrix} 6 \amp -1 \\ -2 \amp -5 \end{matrix} \right| + (1)(3) \left| \begin{matrix} 6 \amp -1 \\ -4 \amp -2 \end{matrix} \right| \end{align*}
The term multiplied by zero goes away. For the other two terms, I use the formula for the determinant of a \(2 \times 2\) matrix.
\begin{align*} \amp = (3) \left[ (-4)(-5) - (-2)(-2) \right] + 0 + (3) \left[ (6)(-2) - (-1)(-4) \right] \\ \amp = 48 - 48 = 0 \end{align*}
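A zero determinant means the columns are linearly dependent. As a side observation beyond the original solution, the dependence can be exhibited explicitly; the coefficients below were found by hand by solving a small linear system, and exact rational arithmetic verifies them.

```python
from fractions import Fraction as F

# Columns of the matrix from this activity.
c1 = [6, -4, -2]
c2 = [-1, -2, -5]
c3 = [3, 0, 3]

# Claim: c3 = (3/8) c1 - (3/4) c2, so the columns are dependent
# and the determinant must be 0.
combo = [F(3, 8) * a - F(3, 4) * b for a, b in zip(c1, c2)]
print(combo == c3)  # True
```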

Activity 8.5.4.

Calculate the determinant of this matrix. Show the steps in the cofactor expansion.
\begin{equation*} \left( \begin{matrix} 6 \amp 2 \amp -1 \amp 3 \\ 0 \amp 4 \amp 8 \amp 2 \\ 0 \amp 0 \amp -3 \amp 9 \\ 0 \amp 0 \amp 0 \amp -7 \end{matrix} \right) \end{equation*}
Solution.
This is an upper triangular matrix. For any diagonal or triangular matrix, the determinant is just the product of the diagonal entries, so I don’t need to do cofactor expansion.
\begin{equation*} \left| \begin{matrix} 6 \amp 2 \amp -1 \amp 3 \\ 0 \amp 4 \amp 8 \amp 2 \\ 0 \amp 0 \amp -3 \amp 9 \\ 0 \amp 0 \amp 0 \amp -7 \end{matrix} \right| = (6)(4)(-3)(-7) = 504 \end{equation*}
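The shortcut can be double-checked against a full cofactor expansion. Here is a minimal recursive sketch in Python (my own helper, not part of the solution):

```python
def det(m):
    # Recursive Laplace (cofactor) expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

U = [[6, 2, -1, 3],
     [0, 4, 8, 2],
     [0, 0, -3, 9],
     [0, 0, 0, -7]]

diagonal_product = 6 * 4 * (-3) * (-7)
print(det(U) == diagonal_product)  # True: both are 504
```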

Activity 8.5.5.

Calculate the determinant of this matrix. Show the steps in the cofactor expansion.
\begin{equation*} \left( \begin{matrix} 4 \amp -1 \amp -1 \amp 6 \\ 0 \amp -3 \amp 3 \amp -2 \\ 0 \amp 1 \amp 1 \amp -2 \\ -2 \amp 0 \amp 0 \amp 3 \end{matrix} \right) \end{equation*}
Solution.
I do cofactor expansion along the first column to take advantage of the zero terms.
\begin{align*} \amp \left| \begin{matrix} 4 \amp -1 \amp -1 \amp 6 \\ 0 \amp -3 \amp 3 \amp -2 \\ 0 \amp 1 \amp 1 \amp -2 \\ -2 \amp 0 \amp 0 \amp 3 \end{matrix} \right| \\ \amp = (1)(4) \left| \begin{matrix} -3 \amp 3 \amp -2 \\ 1 \amp 1 \amp -2 \\ 0 \amp 0 \amp 3 \end{matrix} \right| + (-1)(0) \left| \begin{matrix} -1 \amp -1 \amp 6 \\ 1 \amp 1 \amp -2 \\ 0 \amp 0 \amp 3 \end{matrix} \right| \\ \amp + (1)(0) \left| \begin{matrix} -1 \amp -1 \amp 6 \\ -3 \amp 3 \amp -2 \\ 0 \amp 0 \amp 3 \\ \end{matrix} \right| + (-1)(-2) \left| \begin{matrix} -1 \amp -1 \amp 6 \\ -3 \amp 3 \amp -2 \\ 1 \amp 1 \amp -2 \end{matrix} \right| \end{align*}
The two terms multiplied by zero go away, leaving two \(3 \times 3\) determinants. I’ll evaluate these separately and then substitute their values back into the original expression. Here is the first one; I do cofactor expansion along the third row.
\begin{align*} \amp \left| \begin{matrix} -3 \amp 3 \amp -2 \\ 1 \amp 1 \amp -2 \\ 0 \amp 0 \amp 3 \end{matrix} \right| \\ \amp = (1)(0) \left| \begin{matrix} 3 \amp -2 \\ 1 \amp -2 \end{matrix} \right| + (-1)(0) \left| \begin{matrix} -3 \amp -2 \\ 1 \amp -2 \end{matrix} \right| + (1)(3) \left| \begin{matrix} -3 \amp 3 \\ 1 \amp 1 \end{matrix} \right| \\ \amp = 0 + 0 + 3 \left[ (-3)(1) - (3)(1) \right] \\ \amp = -18 \end{align*}
Here is the second one; I do cofactor expansion along the first row.
\begin{align*} \amp \left| \begin{matrix} -1 \amp -1 \amp 6 \\ -3 \amp 3 \amp -2 \\ 1 \amp 1 \amp -2 \end{matrix} \right| \\ \amp = (1)(-1) \left| \begin{matrix} 3 \amp -2 \\ 1 \amp -2 \end{matrix} \right| + (-1)(-1) \left| \begin{matrix} -3 \amp -2 \\ 1 \amp -2 \end{matrix} \right| + (1)(6) \left| \begin{matrix} -3 \amp 3 \\ 1 \amp 1 \end{matrix} \right| \\ \amp = (-1) \left[ (3)(-2) - (1)(-2) \right] + (1) \left[ (-3)(-2) - (1)(-2) \right] \\ \amp + (6) \left[ (-3)(1) - (1)(3) \right] \\ \amp = 4 + 8 - 36 = -24 \end{align*}
Now I put these back into the full expression.
\begin{align*} \amp \left| \begin{matrix} 4 \amp -1 \amp -1 \amp 6 \\ 0 \amp -3 \amp 3 \amp -2 \\ 0 \amp 1 \amp 1 \amp -2 \\ -2 \amp 0 \amp 0 \amp 3 \end{matrix} \right| \\ \amp = (1)(4) \left| \begin{matrix} -3 \amp 3 \amp -2 \\ 1 \amp 1 \amp -2 \\ 0 \amp 0 \amp 3 \end{matrix} \right| + (-1)(0) \left| \begin{matrix} -1 \amp -1 \amp 6 \\ 1 \amp 1 \amp -2 \\ 0 \amp 0 \amp 3 \end{matrix} \right| \\ \amp + (1)(0) \left| \begin{matrix} -1 \amp -1 \amp 6 \\ -3 \amp 3 \amp -2 \\ 0 \amp 0 \amp 3 \\ \end{matrix} \right| + (-1)(-2) \left| \begin{matrix} -1 \amp -1 \amp 6 \\ -3 \amp 3 \amp -2 \\ 1 \amp 1 \amp -2 \end{matrix} \right| \\ \amp = 4 (-18) + 0 + 0 + 2(-24) = -120 \end{align*}
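Cofactor expansion is not the only route: row reduction gives the same answer, since the determinant is the product of the pivots with a sign flip for every row swap. This alternative check is my own sketch, not part of the solution; it uses exact rational arithmetic to avoid rounding.

```python
from fractions import Fraction

def det_by_elimination(m):
    # Gaussian elimination: the determinant is the product of the
    # pivots, with a sign flip for every row swap.
    a = [[Fraction(x) for x in row] for row in m]
    n = len(a)
    sign = 1
    for k in range(n):
        # Find a row with a nonzero pivot in column k.
        p = next((r for r in range(k, n) if a[r][k] != 0), None)
        if p is None:
            return Fraction(0)  # no pivot: determinant is zero
        if p != k:
            a[k], a[p] = a[p], a[k]
            sign = -sign
        for r in range(k + 1, n):
            factor = a[r][k] / a[k][k]
            a[r] = [x - factor * y for x, y in zip(a[r], a[k])]
    result = Fraction(sign)
    for k in range(n):
        result *= a[k][k]
    return result

M = [[4, -1, -1, 6],
     [0, -3, 3, -2],
     [0, 1, 1, -2],
     [-2, 0, 0, 3]]

print(det_by_elimination(M))  # -120
```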

Subsection 8.5.2 Orthogonal Matrices

Activity 8.5.6.

Determine which of these matrices are orthogonal. For the matrices which are not orthogonal, clearly state how they fail the orthogonality criteria.
\begin{align*} a) \amp \begin{pmatrix} 1 \amp 1 \amp -1 \\ -1 \amp 1 \amp -1 \\ 0 \amp 1 \amp 2 \end{pmatrix} \\ b) \amp \begin{pmatrix} \dfrac{1}{\sqrt{38}} \amp \dfrac{3}{\sqrt{19}} \amp \dfrac{1}{\sqrt{2}} \\[0.5em] \dfrac{1}{\sqrt{38}} \amp \dfrac{3}{\sqrt{19}} \amp \dfrac{-1}{\sqrt{2}} \\[0.5em] \dfrac{-6}{\sqrt{38}} \amp \dfrac{1}{\sqrt{19}} \amp 0 \end{pmatrix} \\ c) \amp \begin{pmatrix} \dfrac{2}{\sqrt{6}} \amp \dfrac{3}{\sqrt{29}} \amp \dfrac{-2}{\sqrt{174}} \\[0.5em] \dfrac{-1}{\sqrt{6}} \amp \dfrac{4}{\sqrt{29}} \amp \dfrac{7}{\sqrt{174}} \\[0.5em] \dfrac{1}{\sqrt{6}} \amp \dfrac{-2}{\sqrt{29}} \amp \dfrac{11}{\sqrt{174}} \end{pmatrix} \\ d) \amp \begin{pmatrix} \dfrac{3}{\sqrt{17}} \amp \dfrac{-2}{\sqrt{13}} \amp \dfrac{-6}{\sqrt{173}} \\[0.5em] \dfrac{-2}{\sqrt{17}} \amp 0 \amp \dfrac{-11}{\sqrt{173}} \\[0.5em] \dfrac{-2}{\sqrt{17}} \amp \dfrac{3}{\sqrt{13}} \amp \dfrac{4}{\sqrt{173}} \end{pmatrix} \end{align*}
Solution.
In a), the columns are pairwise orthogonal; I can check that all the dot products are zero.
\begin{gather*} \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} \cdot \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = 0 \\ \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} \cdot \begin{pmatrix} -1 \\ -1 \\ 2 \end{pmatrix} = 0 \\ \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \cdot \begin{pmatrix} -1 \\ -1 \\ 2 \end{pmatrix} = 0 \end{gather*}
However, the columns are not unit vectors. The length of the first column is \(\sqrt{2}\text{,}\) the second is \(\sqrt{3}\text{,}\) and the third is \(\sqrt{6}\text{.}\) An orthogonal matrix must have columns of length one, so this is not an orthogonal matrix.
In b), the columns are pairwise orthogonal; I can check that all the dot products are zero. (Note that for these checks, I can scale the vectors as I wish, since whether or not the dot product is zero doesn’t depend on the scaling. I’ve scaled the columns to remove the square roots to make these calculations easier.)
\begin{gather*} \begin{pmatrix} 1 \\ 1 \\ -6 \end{pmatrix} \cdot \begin{pmatrix} 3 \\ 3 \\ 1 \end{pmatrix} = 0 \\ \begin{pmatrix} 1 \\ 1 \\ -6 \end{pmatrix} \cdot \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} = 0 \\ \begin{pmatrix} 3 \\ 3 \\ 1 \end{pmatrix} \cdot \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} = 0 \end{gather*}
In addition, the column vectors all have length one.
\begin{align*} \begin{vmatrix} \dfrac{1}{\sqrt{38}} \\[0.5em] \dfrac{1}{\sqrt{38}} \\[0.5em] \dfrac{-6}{\sqrt{38}} \end{vmatrix} \amp = \dfrac{1}{\sqrt{38}} \begin{vmatrix} 1 \\ 1 \\ -6 \end{vmatrix} = \dfrac{1}{\sqrt{38}} \sqrt{1^2 + 1^2 + (-6)^2} = \dfrac{\sqrt{38}}{\sqrt{38}} = 1 \\ \begin{vmatrix} \dfrac{3}{\sqrt{19}} \\[0.5em] \dfrac{3}{\sqrt{19}} \\[0.5em] \dfrac{1}{\sqrt{19}} \end{vmatrix} \amp = \dfrac{1}{\sqrt{19}} \begin{vmatrix} 3 \\ 3 \\ 1 \end{vmatrix} = \dfrac{1}{\sqrt{19}} \sqrt{3^2 + 3^2 + 1^2} = \dfrac{\sqrt{19}}{\sqrt{19}} = 1\\ \begin{vmatrix} \dfrac{1}{\sqrt{2}} \\[0.5em] \dfrac{-1}{\sqrt{2}} \\[0.5em] 0 \end{vmatrix} \amp = \dfrac{1}{\sqrt{2}} \begin{vmatrix} 1 \\ -1 \\ 0 \end{vmatrix} = \dfrac{1}{\sqrt{2}} \sqrt{1^2 + (-1)^2 + 0^2} = \dfrac{\sqrt{2}}{\sqrt{2}} = 1 \end{align*}
In c), the columns are pairwise orthogonal; I can check that all the dot products are zero. (Note that for these checks, I can scale the vectors as I wish, since whether or not the dot product is zero doesn’t depend on the scaling. I’ve scaled the columns to remove the square roots to make these calculations easier.)
\begin{gather*} \begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix} \cdot \begin{pmatrix} 3 \\ 4 \\ -2 \end{pmatrix} = 0 \\ \begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix} \cdot \begin{pmatrix} -2 \\ 7 \\ 11 \end{pmatrix} = 0 \\ \begin{pmatrix} 3 \\ 4 \\ -2 \end{pmatrix} \cdot \begin{pmatrix} -2 \\ 7 \\ 11 \end{pmatrix} = 0 \end{gather*}
In addition, the column vectors all have length one.
\begin{align*} \begin{vmatrix} \dfrac{2}{\sqrt{6}} \\[0.5em] \dfrac{-1}{\sqrt{6}} \\[0.5em] \dfrac{1}{\sqrt{6}} \end{vmatrix} \amp = \dfrac{1}{\sqrt{6}} \begin{vmatrix} 2 \\ -1 \\ 1 \end{vmatrix} = \dfrac{1}{\sqrt{6}} \sqrt{2^2 + (-1)^2 + 1^2} = \dfrac{\sqrt{6}}{\sqrt{6}} = 1\\ \begin{vmatrix} \dfrac{3}{\sqrt{29}} \\[0.5em] \dfrac{4}{\sqrt{29}} \\[0.5em] \dfrac{-2}{\sqrt{29}} \end{vmatrix} \amp = \dfrac{1}{\sqrt{29}} \begin{vmatrix} 3 \\ 4 \\ -2 \end{vmatrix} = \dfrac{1}{\sqrt{29}} \sqrt{3^2 + 4^2 + (-2)^2} = \dfrac{\sqrt{29}}{\sqrt{29}} = 1\\ \begin{vmatrix} \dfrac{-2}{\sqrt{174}} \\[0.5em] \dfrac{7}{\sqrt{174}} \\[0.5em] \dfrac{11}{\sqrt{174}} \end{vmatrix} \amp = \dfrac{1}{\sqrt{174}} \begin{vmatrix} -2 \\ 7 \\ 11 \end{vmatrix} = \dfrac{1}{\sqrt{174}} \sqrt{(-2)^2 + 7^2 + 11^2} = \dfrac{\sqrt{174}}{\sqrt{174}} = 1 \end{align*}
In d), the columns are not pairwise orthogonal; I can check to see that this fails with the first and third columns.
\begin{align*} \amp \begin{pmatrix} 3 \\ -2 \\ -2 \end{pmatrix} \cdot \begin{pmatrix} -6 \\ -11 \\ 4 \end{pmatrix} \\ \amp = (3)(-6) + (-2)(-11) + (-2)(4) = -18 + 22 - 8 = -4 \neq 0 \end{align*}
Since the columns are not orthogonal, this cannot be an orthogonal matrix. (I could check that the columns are unit vectors, which indeed they are, but I don’t need to, since I’ve already shown that the matrix fails the criteria for orthogonality.)
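All of these checks can be bundled into one criterion: a matrix \(A\) is orthogonal exactly when \(A^TA = \Id\text{.}\) The numerical sketch below is my own addition; since it works in floating point, it compares against the identity with a small tolerance. It confirms that b) passes and d) fails.

```python
from math import sqrt, isclose

def is_orthogonal(a, tol=1e-12):
    # A is orthogonal exactly when A^T A is the identity matrix,
    # i.e. every pair of columns has dot product 0 (or 1 with itself).
    n = len(a)
    for i in range(n):
        for j in range(n):
            dot = sum(a[k][i] * a[k][j] for k in range(n))
            if not isclose(dot, 1.0 if i == j else 0.0, abs_tol=tol):
                return False
    return True

# Matrix b) from the activity.
B = [[1 / sqrt(38), 3 / sqrt(19), 1 / sqrt(2)],
     [1 / sqrt(38), 3 / sqrt(19), -1 / sqrt(2)],
     [-6 / sqrt(38), 1 / sqrt(19), 0.0]]

# Matrix d) from the activity.
D = [[3 / sqrt(17), -2 / sqrt(13), -6 / sqrt(173)],
     [-2 / sqrt(17), 0.0, -11 / sqrt(173)],
     [-2 / sqrt(17), 3 / sqrt(13), 4 / sqrt(173)]]

print(is_orthogonal(B), is_orthogonal(D))  # True False
```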

Activity 8.5.7.

For this incomplete matrix, add a final column which produces an orthogonal matrix.
\begin{equation*} \begin{pmatrix} \dfrac{5}{\sqrt{41}} \amp \cdot \\[0.5em] \dfrac{4}{\sqrt{41}} \amp \cdot \\ \end{pmatrix} \end{equation*}
Solution.
I simply need to add a vector of length one which is perpendicular to the given column. If I switch the terms in the vector and make one of them negative, I can accomplish this. (This isn’t an algorithmic solution; this is essentially by inspection. I’m looking at the matrix and making a guess based on my familiarity with vectors in \(\RR^2\text{.}\))
\begin{equation*} \begin{pmatrix} \dfrac{5}{\sqrt{41}} \amp \dfrac{-4}{\sqrt{41}} \\[0.5em] \dfrac{4}{\sqrt{41}} \amp \dfrac{5}{\sqrt{41}} \\ \end{pmatrix} \end{equation*}
Note there are two possible answers; if I choose to make the other component negative, the following is another option.
\begin{equation*} \begin{pmatrix} \dfrac{5}{\sqrt{41}} \amp \dfrac{4}{\sqrt{41}} \\[0.5em] \dfrac{4}{\sqrt{41}} \amp \dfrac{-5}{\sqrt{41}} \\ \end{pmatrix} \end{equation*}
These are, however, the only two possible solutions.

Activity 8.5.8.

For this incomplete matrix, add a final column which produces an orthogonal matrix.
\begin{equation*} \begin{pmatrix} \dfrac{1}{\sqrt{2}} \amp \dfrac{-3}{\sqrt{22}} \amp \cdot \\[0.5em] \dfrac{1}{\sqrt{2}} \amp \dfrac{3}{\sqrt{22}} \amp \cdot \\[0.5em] 0 \amp \dfrac{-2}{\sqrt{22}} \amp \cdot \end{pmatrix} \end{equation*}
Solution.
Unlike the previous matrix, this is much harder to do by inspection. Fortunately, I have an algorithmic approach. I want a new vector in \(\RR^3\) which is orthogonal to two given vectors (the two columns). In \(\RR^3\text{,}\) the cross product produces exactly such a vector, so I’ll take the cross product of the two columns. For ease of calculation, I’ll scale the columns to remove the denominators, since all I care about at this point is orthogonality.
\begin{equation*} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \times \begin{pmatrix} -3 \\ 3 \\ -2 \end{pmatrix} = \begin{pmatrix} -2 \\ 2 \\ 6 \end{pmatrix} \end{equation*}
This produces a vector orthogonal to both columns. Now I want to scale it to a unit vector. The length of this vector is \(\sqrt{(-2)^2 + 2^2 + 6^2} = \sqrt{44}\text{,}\) so I divide by this length to make a unit vector. That produces this orthogonal matrix.
\begin{equation*} \begin{pmatrix} \dfrac{1}{\sqrt{2}} \amp \dfrac{-3}{\sqrt{22}} \amp \dfrac{-2}{\sqrt{44}} \\[0.5em] \dfrac{1}{\sqrt{2}} \amp \dfrac{3}{\sqrt{22}} \amp \dfrac{2}{\sqrt{44}} \\[0.5em] 0 \amp \dfrac{-2}{\sqrt{22}} \amp \dfrac{6}{\sqrt{44}} \end{pmatrix} \end{equation*}
Note, again, that there are two possible answers. If I did the cross-product in the other order, I would get the same vector multiplied by \((-1)\text{.}\) That produces this other answer.
\begin{equation*} \begin{pmatrix} \dfrac{1}{\sqrt{2}} \amp \dfrac{-3}{\sqrt{22}} \amp \dfrac{2}{\sqrt{44}} \\[0.5em] \dfrac{1}{\sqrt{2}} \amp \dfrac{3}{\sqrt{22}} \amp \dfrac{-2}{\sqrt{44}} \\[0.5em] 0 \amp \dfrac{-2}{\sqrt{22}} \amp \dfrac{-6}{\sqrt{44}} \end{pmatrix} \end{equation*}
These are the only two solutions.
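The cross-product construction above is easy to script. This is a small check of the computation (my own sketch, not part of the solution):

```python
from math import sqrt, isclose

def cross(u, v):
    # Cross product in R^3: orthogonal to both u and v.
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

u = [1, 1, 0]    # scaled first column
v = [-3, 3, -2]  # scaled second column
w = cross(u, v)
print(w)         # [-2, 2, 6]

# Normalizing w by its length sqrt(44) gives the unit third column.
unit = [x / sqrt(dot(w, w)) for x in w]
print(isclose(dot(unit, unit), 1.0))  # True
```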

Activity 8.5.9.

For this incomplete matrix, add a final column which produces an orthogonal matrix.
\begin{equation*} \begin{pmatrix} \dfrac{1}{\sqrt{2}} \amp 0 \amp \dfrac{1}{\sqrt{2}} \amp \cdot \\[0.5em] 0 \amp \dfrac{1}{\sqrt{2}} \amp 0 \amp \cdot \\ \dfrac{-1}{\sqrt{2}} \amp 0 \amp \dfrac{1}{\sqrt{2}} \amp \cdot \\[0.5em] 0 \amp \dfrac{-1}{\sqrt{2}} \amp 0 \amp \cdot \\ \end{pmatrix} \end{equation*}
Solution.
Now I again lack an algorithmic approach, since there is no cross product in \(\RR^4\text{.}\) So I have to work by inspection again, which is tricky. The first and third columns are non-zero only in the first and third coordinates; the second column is non-zero only in the second and fourth coordinates. I might mimic the second column and take zeros in the first and third components; that guarantees orthogonality with the first and third columns. Then I need to choose the remaining components to be orthogonal to the second column; that can be done by copying the second column and negating one of its components. That gives the following answer.
\begin{equation*} \begin{pmatrix} \dfrac{1}{\sqrt{2}} \amp 0 \amp \dfrac{1}{\sqrt{2}} \amp 0 \\[0.5em] 0 \amp \dfrac{1}{\sqrt{2}} \amp 0 \amp \dfrac{1}{\sqrt{2}} \\[0.5em] \dfrac{-1}{\sqrt{2}} \amp 0 \amp \dfrac{1}{\sqrt{2}} \amp 0 \\[0.5em] 0 \amp \dfrac{-1}{\sqrt{2}} \amp 0 \amp \dfrac{1}{\sqrt{2}} \\ \end{pmatrix} \end{equation*}

Subsection 8.5.3 Proof Questions

Activity 8.5.10.

Prove that all rotations in \(\RR^2\) have determinant 1.
Solution.
The general rotation by an angle \(\theta\) in \(\RR^2\) has matrix
\begin{equation*} \begin{pmatrix} \cos \theta \amp -\sin \theta \\ \sin \theta \amp \cos \theta \end{pmatrix}\text{.} \end{equation*}
I can simply calculate the determinant of this matrix.
\begin{equation*} \begin{vmatrix} \cos \theta \amp -\sin \theta \\ \sin \theta \amp \cos \theta \end{vmatrix} = \cos^2 \theta + \sin^2 \theta = 1 \end{equation*}
That completes the proof.
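The identity \(\cos^2 \theta + \sin^2 \theta = 1\) does all the work here. As a numerical spot check (my own sketch, not a substitute for the proof):

```python
from math import cos, sin, isclose, pi

def rotation_det(theta):
    # Determinant of [[cos t, -sin t], [sin t, cos t]].
    return cos(theta) * cos(theta) - (-sin(theta)) * sin(theta)

# The determinant is 1 regardless of the angle.
checks = [rotation_det(t) for t in (0.0, pi / 6, 1.0, 2.5, pi)]
print(all(isclose(d, 1.0) for d in checks))  # True
```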

Activity 8.5.11.

Prove that all reflections in \(\RR^2\) have determinant \(-1\text{.}\)
Solution.
The general reflection in the (unit vector) direction \(\begin{pmatrix} a \\ b \end{pmatrix}\) in \(\RR^2\) has matrix
\begin{equation*} \begin{pmatrix} a^2 - b^2 \amp 2ab \\ 2ab \amp b^2 - a^2 \end{pmatrix}\text{.} \end{equation*}
I can simply calculate the determinant of this matrix.
\begin{align*} \amp \begin{vmatrix} a^2 - b^2 \amp 2ab \\ 2ab \amp b^2 - a^2 \end{vmatrix} = (a^2 - b^2)(b^2 - a^2) - (2ab)(2ab) \\ \amp = -a^4 + 2a^2 b^2 - b^4 - 4a^2 b^2 = -(a^4 + 2a^2b^2 + b^4) = -(a^2 + b^2)^2 \end{align*}
Since the description uses a unit vector for the direction, \(a^2 + b^2 = 1\text{.}\) With that fact, I can see that the determinant is \(-1\text{,}\) which completes the proof.
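A numerical spot check of the reflection formula (again my own sketch, not a substitute for the algebra; choosing \((a,b) = (\cos \varphi, \sin \varphi)\) guarantees a unit direction):

```python
from math import cos, sin, isclose

def reflection_det(a, b):
    # Determinant of [[a^2 - b^2, 2ab], [2ab, b^2 - a^2]].
    return (a * a - b * b) * (b * b - a * a) - (2 * a * b) ** 2

# Unit directions (a, b) = (cos phi, sin phi) for several angles.
checks = [reflection_det(cos(phi), sin(phi)) for phi in (0.0, 0.4, 1.1, 2.0)]
print(all(isclose(d, -1.0) for d in checks))  # True
```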

Activity 8.5.12.

Let \(M\) be a square matrix. Prove that \(\det M = \det M^T\text{.}\)
Solution.
I can argue this fact using cofactor expansion. Since cofactor expansion works along rows or columns, cofactor expansion along a row of \(M\) is the same as cofactor expansion along the matching column of \(M^T\) (since the transpose turns rows into columns). Since both the row and the column are removed in forming each minor, the expansions produce matching terms for \(M\) and \(M^T\text{.}\)
Using this cofactor expansion argument repeatedly, I can reduce the question to \(2 \times 2\) matrices. There, I can check directly by calculating the determinant of a general \(2 \times 2\) matrix and its transpose.
\begin{align*} \begin{vmatrix} a \amp b \\ c \amp d \end{vmatrix} \amp = ad - bc \\ \begin{vmatrix} a \amp c \\ b \amp d \end{vmatrix} \amp = ad - cb = ad - bc \end{align*}
This completes the proof.

Activity 8.5.13.

Prove that the inverse of an orthogonal transformation is also orthogonal.
Solution.
This is relatively easy to argue from the concept and the main definition. Orthogonal matrices preserve lengths. If lengths do not change when I apply the transformation forward, they also can’t change when I undo the transformation and go backwards. Undoing no change is still no change.

Activity 8.5.14.

Prove that the composition of orthogonal transformations is still orthogonal.
Solution.
Like the previous proof, I can argue this conceptually using the basic definition. If two transformations preserve lengths, then doing them one after another will also preserve lengths. Lengths can’t change in the first step and can’t change in the second step, so they can’t change at all. Therefore, the composition (doing one transformation after the other) must also be orthogonal.

Activity 8.5.15.

Prove that all orthogonal matrices have determinant \(\pm 1\text{.}\)
Solution.
For this proof, I’m going to use one of the algebraic properties of orthogonal matrices.
\begin{equation*} A^TA = \Id \end{equation*}
Then I’ll take the determinant of both sides of this matrix equation.
\begin{equation*} \det(A^TA) = \det(\Id) \end{equation*}
The determinant of the identity is 1. On the left side, the determinant is multiplicative.
\begin{equation*} \det(A^T)\det(A) = 1 \end{equation*}
One of the properties of the determinant is that the determinant of a transpose is the same as the determinant of the original matrix.
\begin{equation*} \det(A)\det(A) = 1 \implies \det(A)^2 = 1 \end{equation*}
So the number \(\det (A)\) is a number which squares to \(1\text{.}\) The only numbers with this property are \(1\) and \(-1\text{.}\) That completes the proof.

Activity 8.5.16.

Prove that the rotations and the reflections are the only orthogonal transformations in \(\RR^2\text{.}\) (This one is a lot trickier than the previous proofs. Give it a shot, but don’t distress too much if the approach is difficult to find.)
Solution.
This proof is substantially more difficult to approach than the previous ones. There are many transformations in \(\RR^2\text{.}\) I can’t just prove the statement for the five basic types, since there are complicated transformations built as combinations of those types, and a combination might be orthogonal even when its pieces aren’t. I need a trick or idea that lets me approach this.
The idea I thought to use was acting on the standard basis: \(\{e_1, e_2\}\text{.}\) Since these two vectors span all of \(\RR^2\text{,}\) if I know what happens to these two vectors, I know the entire transformation. First, the transformation does something to \(e_1\text{,}\) sending it to \(Me_1\text{.}\) Since length is preserved, this is some other unit vector.
Now where can \(e_2\) go? There are, in fact, only two choices. \(Me_2\) must also be a unit vector, since lengths are preserved. Also, angles are preserved, so the angle between \(Me_1\) and \(Me_2\) must still be \(\frac{\pi}{2}\) (they are still perpendicular). That gives only two choices: if I face in the direction of \(Me_1\text{,}\) the unit vector \(Me_2\) can point in the perpendicular direction to the right or to the left.
If \(Me_2\) is the perpendicular direction to the left, then \(M\) is a rotation. The relative positions of the two vectors are preserved and they are still unit vectors, so they are both moved around the circle by whatever the angle is between \(e_1\) and \(Me_1\text{.}\)
If \(Me_2\) is the perpendicular direction to the right, then \(M\) is a reflection. This is a bit trickier to see, but the line of reflection will be the line halfway between \(e_1\) and \(Me_1\text{.}\) That line flips \(e_1\) into \(Me_1\) by construction. But a reflection reverses orientation: it takes the perpendicular direction on the left to the perpendicular direction on the right. So \(e_2\text{,}\) which starts as the perpendicular vector on the left of \(e_1\text{,}\) ends up as the perpendicular direction on the right of \(Me_1\text{.}\)
\(M\) must have one of these two behaviours, so \(M\) must be a rotation or a reflection.

Subsection 8.5.4 Conceptual Review Questions

  • What is a determinant and what does it mean?
  • Why is the determinant multiplicative but not linear?
  • What is an orthogonal matrix?
  • Why are there so many equivalent descriptions of an orthogonal matrix?