
Section 10.3 Week 10 Activity

Subsection 10.3.1 Eigenvalues and Eigenvectors

Activity 10.3.1.

Calculate the eigenvalues and eigenvectors of this matrix. (Give the solutions in exact values.)
\begin{equation*} \left( \begin{matrix} -2 \amp 0 \amp 1 \\ 0 \amp -2 \amp 1 \\ 1 \amp 1 \amp 0 \end{matrix} \right) \end{equation*}
Solution.
I follow the algorithm for eigenvalues and eigenvectors. The first step is writing the matrix \(A - \lambda \Id\text{.}\)
\begin{equation*} \left( \begin{matrix} -2 - \lambda \amp 0 \amp 1 \\ 0 \amp -2 - \lambda \amp 1 \\ 1 \amp 1 \amp - \lambda \end{matrix} \right) \end{equation*}
Then I calculate the determinant of this matrix. I use cofactor expansion on the first row.
\begin{align*} \amp \left| \begin{matrix} -2 - \lambda \amp 0 \amp 1 \\ 0 \amp -2 - \lambda \amp 1 \\ 1 \amp 1 \amp - \lambda \end{matrix} \right| \\ \amp = (1)(-2 - \lambda) \left| \begin{matrix} -2 - \lambda \amp 1 \\ 1 \amp -\lambda \end{matrix} \right| + (-1)(0) \left| \begin{matrix} 0 \amp 1 \\ 1 \amp -\lambda \end{matrix} \right| + (1)(1) \left| \begin{matrix} 0 \amp -2 - \lambda \\ 1 \amp 1 \end{matrix} \right| \\ \amp = (-2 - \lambda) \left[ (-2 - \lambda)(-\lambda) - 1 \right] + 0 + \left[ (0)(1) - (-2 - \lambda)(1) \right] \\ \amp = (-2 - \lambda)(\lambda^2 + 2 \lambda - 1) + 2 + \lambda \\ \amp = -\lambda^3 - 2\lambda^2 - 2\lambda^2 - 4\lambda + \lambda + 2 + 2 + \lambda \\ \amp = -\lambda^3 - 4\lambda^2 - 2\lambda + 4 \end{align*}
By inspection or by computer, I can see that \(\lambda = -2\) is a root, so this polynomial factors as follows.
\begin{equation*} -\lambda^3 - 4\lambda^2 - 2\lambda + 4 = -(\lambda + 2)(\lambda^2 + 2 \lambda - 2) \end{equation*}
The quadratic piece has no integer roots. The quadratic formula gives the other roots as \(\lambda = -1 \pm \sqrt{3}\text{.}\) Now I calculate the kernel of each \(A - \lambda \Id\) to find the matching eigenvectors. I’ll start with \(\lambda = -2\text{.}\)
\begin{align*} \amp \left( \begin{array}{ccc|c} 0 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \amp 0 \\ 1 \amp 1 \amp 2 \amp 0 \end{array} \right) \end{align*}
I row reduce to find the kernel.
\begin{align*} \amp \left( \begin{array}{ccc|c} 1 \amp 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{array} \right) \end{align*}
To show the kernel, I need to express the solution indicated by this matrix in terms of parameters. \(y\) is free, and I can interpret the rows of the matrix as expressions for \(x\) and \(z\) in terms of \(y\text{.}\) (Though, in this matrix, the second row simply indicates that \(z = 0\text{.}\)) By choosing a specific value of \(y\text{,}\) I get a specific value of \(x\text{,}\) and thus a specific eigenvector. I choose \(y = -1\) for this particular system.
\begin{align*} \amp \Ker \left( \begin{matrix} 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 1 \\ 1 \amp 1 \amp 2 \end{matrix} \right) = \Span \left\{ \left( \begin{matrix} 1 \\ -1 \\ 0 \end{matrix} \right) \right\} \end{align*}
That gives me an eigenvector for \(\lambda = -2\text{.}\) Next, I’ll look at \(\lambda = -1 + \sqrt{3}\text{.}\)
\begin{align*} \amp \left( \begin{array}{ccc|c} -1 - \sqrt{3} \amp 0 \amp 1 \amp 0 \\ 0 \amp -1 - \sqrt{3} \amp 1 \amp 0 \\ 1 \amp 1 \amp 1 - \sqrt{3} \amp 0 \end{array} \right) \end{align*}
I row reduce to find the kernel.
\begin{align*} \amp \left( \begin{array}{ccc|c} 1 \amp 0 \amp \frac{-1}{1 + \sqrt{3}} \amp 0 \\ 0 \amp 1 \amp \frac{-1}{1 + \sqrt{3}} \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{array} \right) \end{align*}
To show the kernel, I need to express the solution indicated by this matrix in terms of parameters. \(z\) is free and I can interpret the row of the matrix as expressions for \(x\) and \(y\) in terms of \(z\text{.}\) By choosing a specific value of \(z\text{,}\) I can get specific values of \(x\) and \(y\text{,}\) and thus a specific eigenvector. I choose \(z = 1 + \sqrt{3}\) for this particular system, to clear the denominator.
\begin{align*} \amp \Ker \left( \begin{matrix} -1 - \sqrt{3} \amp 0 \amp 1 \\ 0 \amp -1 - \sqrt{3} \amp 1 \\ 1 \amp 1 \amp 1 - \sqrt{3} \end{matrix} \right) = \Span \left\{ \left( \begin{matrix} 1 \\ 1 \\ 1 + \sqrt{3} \end{matrix} \right) \right\} \end{align*}
That gives me an eigenvector for \(\lambda = -1 + \sqrt{3}\text{.}\) Next, I’ll look at \(\lambda = -1 - \sqrt{3}\text{.}\)
\begin{align*} \amp \left( \begin{array}{ccc|c} -1 + \sqrt{3} \amp 0 \amp 1 \amp 0 \\ 0 \amp -1 + \sqrt{3} \amp 1 \amp 0 \\ 1 \amp 1 \amp 1 + \sqrt{3} \amp 0 \end{array} \right) \end{align*}
I row reduce to find the kernel.
\begin{align*} \amp \left( \begin{array}{ccc|c} 1 \amp 0 \amp \frac{-1}{1 - \sqrt{3}} \amp 0 \\ 0 \amp 1 \amp \frac{-1}{1 - \sqrt{3}} \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{array} \right) \end{align*}
To show the kernel, I need to express the solution indicated by this matrix in terms of parameters. \(z\) is free and I can interpret the rows of the matrix as expressions for \(x\) and \(y\) in terms of \(z\text{.}\) By choosing a specific value of \(z\text{,}\) I can get specific values of \(x\) and \(y\text{,}\) and thus a specific eigenvector. I choose \(z = 1 - \sqrt{3}\) for this particular system, to clear the denominator.
\begin{align*} \amp \Ker \left( \begin{matrix} -1 + \sqrt{3} \amp 0 \amp 1 \\ 0 \amp -1 + \sqrt{3} \amp 1 \\ 1 \amp 1 \amp 1 + \sqrt{3} \end{matrix} \right) = \Span \left\{ \left( \begin{matrix} 1 \\ 1 \\ 1 - \sqrt{3} \end{matrix} \right) \right\} \end{align*}
The spanning vector is an eigenvector for \(\lambda = -1 - \sqrt{3}\text{.}\)
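The three eigenpairs above can be checked numerically. Here is a minimal sketch, assuming NumPy is available, that verifies the defining property \(Av = \lambda v\) for each pair:

```python
import numpy as np

A = np.array([[-2.0, 0.0, 1.0],
              [0.0, -2.0, 1.0],
              [1.0, 1.0, 0.0]])

r3 = np.sqrt(3.0)
# The eigenpairs (lambda, v) found in the solution above.
pairs = [(-2.0,      np.array([1.0, -1.0, 0.0])),
         (-1.0 + r3, np.array([1.0, 1.0, 1.0 + r3])),
         (-1.0 - r3, np.array([1.0, 1.0, 1.0 - r3]))]

# An eigenpair must satisfy A v = lambda v.
for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)
```

Any nonzero scalar multiple of each vector passes the same check, since eigenvectors are only determined up to scale.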

Activity 10.3.2.

Calculate the eigenvalues and eigenvectors of this matrix. (Give the solutions in exact values.)
\begin{equation*} \left( \begin{matrix} 0 \amp -6 \amp 0 \amp 6 \\ 0 \amp -2 \amp 0 \amp 2 \\ 0 \amp 0 \amp 2 \amp 2 \\ 0 \amp 0 \amp 0 \amp 2 \end{matrix} \right) \end{equation*}
Solution.
I follow the algorithm for eigenvalues and eigenvectors. The first step is writing the matrix \(A - \lambda \Id\text{.}\)
\begin{equation*} \left( \begin{matrix} -\lambda \amp -6 \amp 0 \amp 6 \\ 0 \amp -2 - \lambda \amp 0 \amp 2 \\ 0 \amp 0 \amp 2 - \lambda \amp 2 \\ 0 \amp 0 \amp 0 \amp 2 - \lambda \end{matrix} \right) \end{equation*}
Then I calculate the determinant of this matrix. This is an upper triangular matrix, so the determinant is the product of the diagonal entries.
\begin{equation*} \left| \begin{matrix} -\lambda \amp -6 \amp 0 \amp 6 \\ 0 \amp -2 - \lambda \amp 0 \amp 2 \\ 0 \amp 0 \amp 2 - \lambda \amp 2 \\ 0 \amp 0 \amp 0 \amp 2 - \lambda \end{matrix} \right| = -\lambda(-2 - \lambda)(2 - \lambda)^2 \end{equation*}
This is already in factored form, so I see that the eigenvalues are \(\lambda = 0\text{,}\) \(\lambda = 2\) and \(\lambda = -2\text{.}\) For each eigenvalue, I calculate the kernel of \(A - \lambda \Id\) to find the eigenvectors. I’ll start with \(\lambda = 0\text{.}\)
\begin{align*} \amp \left( \begin{array}{cccc|c} 0 \amp -6 \amp 0 \amp 6 \amp 0 \\ 0 \amp -2 \amp 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp 2 \amp 2 \amp 0 \\ 0 \amp 0 \amp 0 \amp 2 \amp 0 \end{array} \right) \end{align*}
I row reduce to find the kernel.
\begin{align*} \amp \left( \begin{array}{cccc|c} 0 \amp 1 \amp 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \end{array} \right) \end{align*}
To show the kernel, I need to express the solution indicated by this matrix in terms of parameters. In this matrix, \(x\) is free, but all the other coordinates are zero. I choose \(x = 1\) for this particular system to get a specific eigenvector.
\begin{align*} \amp \Ker \left( \begin{matrix} 0 \amp -6 \amp 0 \amp 6 \\ 0 \amp -2 \amp 0 \amp 2 \\ 0 \amp 0 \amp 2 \amp 2 \\ 0 \amp 0 \amp 0 \amp 2 \end{matrix} \right) = \Span \left\{ \left( \begin{matrix} 1 \\ 0 \\ 0 \\ 0 \end{matrix} \right) \right\} \end{align*}
That gives me an eigenvector for \(\lambda = 0\text{.}\) Next, I’ll look at \(\lambda = 2\text{.}\)
\begin{align*} \amp \left( \begin{array}{cccc|c} -2 \amp -6 \amp 0 \amp 6 \amp 0 \\ 0 \amp -4 \amp 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \end{array} \right) \end{align*}
I row reduce to find the kernel.
\begin{align*} \amp \left( \begin{array}{cccc|c} 1 \amp 0 \amp 0 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \end{array} \right) \end{align*}
To show the kernel, I need to express the solution indicated by this matrix in terms of parameters. In this matrix, \(z\) is free and all the other variables are zero. I choose \(z = 1\) for this particular system to get a specific eigenvector.
\begin{align*} \amp \Ker \left( \begin{matrix} -2 \amp -6 \amp 0 \amp 6 \\ 0 \amp -4 \amp 0 \amp 2 \\ 0 \amp 0 \amp 0 \amp 2 \\ 0 \amp 0 \amp 0 \amp 0 \end{matrix} \right) = \Span \left\{ \left( \begin{matrix} 0 \\ 0 \\ 1 \\ 0 \end{matrix} \right) \right\} \end{align*}
That gives me an eigenvector for \(\lambda = 2\text{.}\) Next, I’ll look at \(\lambda = -2\text{.}\)
\begin{align*} \amp \left( \begin{array}{cccc|c} 2 \amp -6 \amp 0 \amp 6 \amp 0 \\ 0 \amp 0 \amp 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp 4 \amp 2 \amp 0 \\ 0 \amp 0 \amp 0 \amp 4 \amp 0 \end{array} \right) \end{align*}
I row reduce to find the kernel.
\begin{align*} \amp \left( \begin{array}{cccc|c} 1 \amp -3 \amp 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \end{array} \right) \end{align*}
To show the kernel, I need to express the solution indicated by this matrix in terms of parameters. \(y\) is free, and the first row expresses \(x\) in terms of \(y\text{:}\) \(x = 3y\text{.}\) The other rows simply indicate that \(z\) and \(w\) are zero. I choose \(y = 1\) for this particular system.
\begin{align*} \amp \Ker \left( \begin{matrix} 2 \amp -6 \amp 0 \amp 6 \\ 0 \amp 0 \amp 0 \amp 2 \\ 0 \amp 0 \amp 4 \amp 2 \\ 0 \amp 0 \amp 0 \amp 4 \end{matrix} \right) = \Span \left\{ \left( \begin{matrix} 3 \\ 1 \\ 0 \\ 0 \end{matrix} \right) \right\} \end{align*}
The spanning vector is an eigenvector for \(\lambda = -2\text{.}\)
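As a quick numerical cross-check (a sketch, assuming NumPy is available), I can confirm both the eigenvalues and the eigenvectors found above:

```python
import numpy as np

A = np.array([[0.0, -6.0, 0.0, 6.0],
              [0.0, -2.0, 0.0, 2.0],
              [0.0, 0.0, 2.0, 2.0],
              [0.0, 0.0, 0.0, 2.0]])

# The matrix is upper triangular, so its eigenvalues are the diagonal
# entries: 0, -2, and 2 (with multiplicity two).
ev = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(ev, [-2.0, 0.0, 2.0, 2.0])

# Check each eigenvector found above: A v = lambda v.
for lam, v in [(0.0, [1.0, 0.0, 0.0, 0.0]),
               (2.0, [0.0, 0.0, 1.0, 0.0]),
               (-2.0, [3.0, 1.0, 0.0, 0.0])]:
    v = np.array(v)
    assert np.allclose(A @ v, lam * v)
```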

Activity 10.3.3.

Calculate the eigenvalues and eigenvectors of this matrix. Use approximate values (to three significant figures), but show all the steps in the algorithm.
\begin{equation*} \left( \begin{matrix} 0.294 \amp 0.343 \amp 0.945 \amp 0 \\ 0.812 \amp 0 \amp 0.310 \amp 0.050 \\ 0.110 \amp 0.745 \amp 0.901 \amp 0.874 \\ 0.314 \amp 0 \amp 0 \amp 0.041 \end{matrix} \right) \end{equation*}
Solution.
I ask a computer for the determinant of \(A - \lambda \Id\text{.}\)
\begin{align*} \amp \left| \begin{matrix} 0.294 - \lambda \amp 0.343 \amp 0.945 \amp 0 \\ 0.812 \amp - \lambda \amp 0.310 \amp 0.050 \\ 0.110 \amp 0.745 \amp 0.901 - \lambda \amp 0.874 \\ 0.314 \amp 0 \amp 0 \amp 0.041 - \lambda \end{matrix} \right| \\ \amp = \lambda^4 - 1.235 \lambda^3 - 0.300 \lambda^2 - 0.515 \lambda - 0.025 \end{align*}
I ask a computer for the roots of this polynomial. There are two real roots, \(\lambda = 1.622\) and \(\lambda = -0.049\text{;}\) the other two roots are a complex-conjugate pair. I calculate the kernel of \(A - \lambda \Id\) for both real eigenvalues. I start with \(\lambda = 1.622\text{.}\)
\begin{align*} \amp \Ker \left( \begin{matrix} -1.328 \amp 0.343 \amp 0.945 \amp 0 \\ 0.812 \amp -1.622 \amp 0.310 \amp 0.050 \\ 0.110 \amp 0.745 \amp -0.721 \amp 0.874 \\ 0.314 \amp 0 \amp 0 \amp -1.581 \end{matrix} \right) \\ \amp = \Span \left\{ \left( \begin{matrix} 5.035 \\ 3.651 \\ 5.752 \\ 1 \end{matrix} \right) \right\} \end{align*}
The spanning vector is an eigenvector for \(\lambda = 1.622\text{.}\) Then I do the same for \(\lambda = -0.049\text{.}\)
\begin{align*} \amp \Ker \left( \begin{matrix} 0.343 \amp 0.343 \amp 0.945 \amp 0 \\ 0.812 \amp 0.049\amp 0.310 \amp 0.050 \\ 0.110 \amp 0.745 \amp 0.950 \amp 0.874 \\ 0.314 \amp 0 \amp 0 \amp 0.090 \end{matrix} \right) \\ \amp = \Span \left\{ \left( \begin{matrix} -0.286 \\ -2.351 \\ 0.957 \\ 1 \end{matrix} \right) \right\} \end{align*}
The spanning vector is an eigenvector for \(\lambda = -0.049\text{.}\)
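The "ask a computer" step can be reproduced directly. A minimal sketch, assuming NumPy is available, that finds the real eigenvalues of this matrix and compares them with the values above (to the stated precision):

```python
import numpy as np

A = np.array([[0.294, 0.343, 0.945, 0.0],
              [0.812, 0.0, 0.310, 0.050],
              [0.110, 0.745, 0.901, 0.874],
              [0.314, 0.0, 0.0, 0.041]])

ev = np.linalg.eigvals(A)
# Keep only the real eigenvalues; the other two form a complex pair.
real_evs = np.sort(ev[np.abs(ev.imag) < 1e-9].real)
assert np.allclose(real_evs, [-0.049, 1.622], atol=0.01)
```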

Activity 10.3.4.

Calculate the eigenvalues and eigenvectors of this matrix. Use approximate values (to three significant figures), but show all the steps in the algorithm.
\begin{equation*} \left( \begin{matrix} 2.30 \amp -0.29 \amp -0.92 \amp -3.17 \\ -1.01 \amp 1.31 \amp 3.51 \amp 4.52 \\ -3.68 \amp -5.42 \amp 8.54 \amp -4.62 \\ -9.54 \amp -3.33 \amp -7.54 \amp 2.50 \end{matrix} \right) \end{equation*}
Solution.
I ask a computer for the determinant of \(A - \lambda \Id\text{.}\)
\begin{align*} \amp \left| \begin{matrix} 2.30 - \lambda \amp -0.29 \amp -0.92 \amp -3.17 \\ -1.01 \amp 1.31 - \lambda \amp 3.51 \amp 4.52 \\ -3.68 \amp -5.42 \amp 8.54 - \lambda \amp -4.62 \\ -9.54 \amp -3.33 \amp -7.54 \amp 2.50 - \lambda \end{matrix} \right| \\ \amp = \lambda^4 - 14.65 \lambda^3 + 29.54 \lambda^2 - 35.81 \lambda + 73.34 \end{align*}
I ask a computer for the roots of this polynomial. There are two real roots, \(\lambda = 12.47\) and \(\lambda = 2.26\text{;}\) the other two roots are a complex-conjugate pair. I calculate the kernel of \(A - \lambda \Id\) for both real eigenvalues. I start with \(\lambda = 12.47\text{.}\)
\begin{align*} \amp \Ker \left( \begin{matrix} -10.17 \amp -0.29 \amp -0.92 \amp -3.17 \\ -1.01 \amp -11.16 \amp 3.51 \amp 4.52 \\ -3.68 \amp -5.42 \amp -3.93 \amp -4.62 \\ -9.54 \amp -3.33 \amp -7.54 \amp -9.97 \end{matrix} \right) \\ \amp = \Span \left\{ \left( \begin{matrix} -0.22 \\ 0.08 \\ -1.09 \\ 1 \end{matrix} \right) \right\} \end{align*}
The spanning vector is an eigenvector for \(\lambda = 12.47\text{.}\) Then I do the same for \(\lambda = 2.26\text{.}\)
\begin{align*} \amp \Ker \left( \begin{matrix} 0.04 \amp -0.29 \amp -0.92 \amp -3.17 \\ -1.01 \amp -0.95 \amp 3.51 \amp 4.52 \\ -3.68 \amp -5.42 \amp 6.28 \amp -4.62 \\ -9.54 \amp -3.33 \amp -7.54 \amp 0.24 \end{matrix} \right) \\ \amp = \Span \left\{ \left( \begin{matrix} 3.15 \\ -5.01 \\ -1.74 \\ 1 \end{matrix} \right) \right\} \end{align*}
The spanning vector is an eigenvector for \(\lambda = 2.26\text{.}\)
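The two computer-assisted steps (characteristic polynomial, then roots) can be mirrored numerically. A sketch, assuming NumPy is available: `np.poly` recovers the characteristic polynomial's coefficients from the matrix, and `np.roots` finds the roots of that polynomial.

```python
import numpy as np

A = np.array([[2.30, -0.29, -0.92, -3.17],
              [-1.01, 1.31, 3.51, 4.52],
              [-3.68, -5.42, 8.54, -4.62],
              [-9.54, -3.33, -7.54, 2.50]])

coeffs = np.poly(A)        # characteristic polynomial coefficients, leading 1
roots = np.roots(coeffs)   # all four roots, real and complex

# Keep only the real roots; they should match the eigenvalues above.
real_roots = np.sort(roots[np.abs(roots.imag) < 1e-6].real)
assert np.allclose(real_roots, [2.26, 12.47], atol=0.02)
```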

Subsection 10.3.2 Eigenfunctions

Activity 10.3.5.

Let \(n \in \NN\text{.}\) Prove that multiplication by \(x^n\) is a linear transformation on the space \(C^{\infty}(\RR)\) of infinitely differentiable functions.
Solution.
I need to check the two properties of linearity. First I look at addition of functions.
\begin{equation*} x^n (f(x) + g(x)) \end{equation*}
I can just use the ordinary distributive law. This applies to functions, since addition of functions is a pointwise operation.
\begin{equation*} x^n (f(x) + g(x)) = x^n f(x) + x^n g(x) \end{equation*}
That’s already one of the two properties. Now I look at scalar multiplication.
\begin{equation*} x^n (af(x)) \end{equation*}
This is just a product of functions and constants. Multiplication of functions is commutative and associative, so I can rearrange these factors in whichever order I wish. Let me change the order and the brackets.
\begin{equation*} x^n (af(x)) = a (x^n f(x)) \end{equation*}
That finishes the second property. Multiplication by \(x^n\) is a linear transformation.

Activity 10.3.6.

Calculate an eigenfunction of the linear operator \(\frac{x^2}{30} \frac{d^2}{dx^2}\) with eigenvalue \(\lambda = 1\text{.}\) (Hint: the solution is some kind of polynomial. Do this by inspection or by trial and error: there isn’t a general algorithm here to rely on.)
Solution.
I know I’m looking for a polynomial here. First, let me think about monomials. The second derivative of a monomial \(x^n\) is \(n(n-1)x^{n-2}\text{.}\) Then multiplying by \(\frac{x^2}{30}\) produces \(\frac{n(n-1)x^n}{30}\text{.}\) This is only going to equal the original (eigenvalue of 1) if \(\frac{n(n-1)}{30} = 1\text{.}\) The only natural number that satisfies this is \(n = 6\text{.}\) Therefore, \(p(x) = ax^6\) is an eigenfunction for this operator, for any constant \(a\text{.}\)
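This inspection can be confirmed symbolically. A minimal check, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
p = x**6   # the candidate eigenfunction found above

# Apply the operator (x^2/30) d^2/dx^2 and compare with 1 * p.
result = x**2 / 30 * sp.diff(p, x, 2)
assert sp.simplify(result - p) == 0
```

The same check with \(p = x^5\) or \(p = x^7\) fails, matching the fact that \(n = 6\) is the only natural number with \(n(n-1) = 30\text{.}\)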

Activity 10.3.7.

Consider the linear transformation \(\frac{d^4}{dx^4}\) on \(C^\infty(\RR)\text{.}\) Verify that the function \(f(x) = a \sin (\sqrt{3}x) + b \cos (\sqrt{3}x)\) is an eigenfunction for this transformation with eigenvalue \(\lambda = 9\text{.}\) Explain why the square root terms are necessary inside the trig functions. Are there eigenfunctions for this transformation that don’t involve trig functions?
Solution.
To verify, I just need to take four derivatives of the function. Here are those four derivatives.
\begin{align*} \frac{d}{dx} a \sin (\sqrt{3} x) + b \cos (\sqrt{3} x) \amp = \sqrt{3} a \cos (\sqrt{3} x) - \sqrt{3} b \sin (\sqrt{3} x) \\ \frac{d^2}{dx^2} a \sin (\sqrt{3} x) + b \cos (\sqrt{3} x) \amp = -3a \sin (\sqrt{3}x) - 3b \cos (\sqrt{3} x)\\ \frac{d^3}{dx^3} a \sin (\sqrt{3} x) + b \cos (\sqrt{3} x) \amp = -3\sqrt{3} a \cos (\sqrt{3}x) + 3\sqrt{3} b \sin (\sqrt{3}x) \\ \frac{d^4}{dx^4} a \sin (\sqrt{3} x) + b \cos (\sqrt{3} x) \amp = 9 a \sin (\sqrt{3} x) + 9b \cos (\sqrt{3} x) \end{align*}
This is, indeed, exactly 9 times the original function, so the eigenfunction is verified.
As the previous calculation shows, the \(\sqrt{3}\) is necessary. By the chain rule, each derivative multiplies the coefficient by the derivative of the inside, \(\sqrt{3}\text{.}\) So, after four derivatives, I have multiplied by \(\sqrt{3}\) four times, which is the same as multiplying by \((\sqrt{3})^4 = 9\text{,}\) the desired eigenvalue. The square root inside is the only way to produce this eigenvalue out of successive derivatives.
There are other eigenfunctions for this transformation and eigenvalue. Among them are \(\cosh (\sqrt{3}x)\text{,}\) \(\sinh (\sqrt{3}x)\) and \(e^{\sqrt{3}x}\text{.}\)
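Both the trig verification and the exponential alternative can be checked symbolically. A sketch, assuming SymPy is available:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
f = a*sp.sin(sp.sqrt(3)*x) + b*sp.cos(sp.sqrt(3)*x)

# Four derivatives multiply the function by (sqrt(3))^4 = 9.
assert sp.simplify(sp.diff(f, x, 4) - 9*f) == 0

# A non-trig eigenfunction with the same eigenvalue.
g = sp.exp(sp.sqrt(3)*x)
assert sp.simplify(sp.diff(g, x, 4) - 9*g) == 0
```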

Activity 10.3.8.

Prove that any polynomial is an eigenfunction for some derivative transformation with eigenvalue \(\lambda = 0\text{.}\)
Solution.
The key idea here is that taking higher and higher derivatives eventually produces zero. So, let \(p\) be some polynomial. The polynomial has a degree \(n\text{,}\) which is the highest power appearing in it. Each derivative drops the degree of a polynomial by one. That means after \(n\) derivatives, the degree of the polynomial is zero: it is a constant. One more derivative gives zero, since the derivative of a constant is zero. This means that the operator \(\frac{d^{n+1}}{dx^{n+1}}\) will send the polynomial to zero. That’s the definition of an eigenfunction with eigenvalue 0: it gets sent to zero. Therefore, \(p\) is an eigenfunction of \(\frac{d^{n+1}}{dx^{n+1}}\) with eigenvalue 0.
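The degree-counting argument can be illustrated symbolically. A sketch, assuming SymPy is available, using an arbitrary degree-3 polynomial as the example:

```python
import sympy as sp

x = sp.symbols('x')
p = 4*x**3 - 2*x + 7   # a sample polynomial of degree n = 3

# Three derivatives reduce it to a constant; the fourth (n+1) kills it.
assert sp.diff(p, x, 3) == 24
assert sp.diff(p, x, 4) == 0
```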

Subsection 10.3.3 Proof Questions

Activity 10.3.9.

Prove that a square matrix \(A\) is invertible if and only if \(\lambda = 0\) is not an eigenvalue of \(A\text{.}\)
Solution.
There are a variety of ways to argue this. First, I could say that invertible matrices have to preserve the dimension of the space. They can’t have any directions which are collapsed to zero. But an eigenvalue of \(\lambda = 0\) is precisely a direction which is collapsed to zero. That’s what sending \(v\) to \(\lambda v = 0 v = 0 \) does to the eigenvector. Therefore, invertible matrices can’t have \(0\) as an eigenvalue. To prove the other part of the implication, a non-invertible matrix must collapse at least one vector to zero. By linearity, then, the whole line is collapsed to zero, and any vector on that line is an eigenvector with eigenvalue \(0\text{,}\) showing that a non-invertible matrix must have \(0\) as an eigenvalue.
Alternatively, I could use a property like the fact that the determinant of a matrix is the product of the eigenvalues. With this fact, the determinant is zero if and only if one of the eigenvalues is zero. But a matrix is invertible if and only if its determinant is not zero. Putting these together, a matrix is invertible if and only if \(0\) is not an eigenvalue.
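The determinant argument can be illustrated numerically. A sketch, assuming NumPy is available, using a hypothetical rank-1 matrix as the example:

```python
import numpy as np

# A rank-1 (hence non-invertible) matrix: the second row is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

ev = np.sort(np.linalg.eigvals(A).real)
assert np.isclose(ev[0], 0.0)              # 0 is an eigenvalue...
assert np.isclose(np.linalg.det(A), 0.0)   # ...exactly when the determinant vanishes
```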

Activity 10.3.10.

Let \(A\) be an invertible matrix. Prove that \(\lambda\) is an eigenvalue of \(A\) if and only if \(\frac{1}{\lambda}\) is an eigenvalue of \(A^{-1}\text{.}\)
Solution.
I have to prove both implications. Let me start with the forward implication. I assume that \(\lambda\) is an eigenvalue, so there exists a vector \(v\) with
\begin{equation*} Av = \lambda v \end{equation*}
Now I apply \(A^{-1}\) to the equation.
\begin{equation*} A^{-1}Av = A^{-1} \lambda v \end{equation*}
On the left, the matrices cancel, and I just have \(v\text{.}\) On the right, I can use linearity to pull the scalar out front.
\begin{equation*} v = \lambda A^{-1} v \end{equation*}
Then I can divide by \(\lambda\text{,}\) which is nonzero since \(A\) is invertible.
\begin{equation*} \frac{1}{\lambda} v = A^{-1} v \end{equation*}
This is the definition of an eigenvalue, so \(\frac{1}{\lambda}\) is an eigenvalue of \(A^{-1}\text{.}\)
For the reverse implication, I can just reverse the steps, since every step of the argument is reversible.
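The reciprocal relationship can be checked on a small example. A sketch, assuming NumPy is available, using a hypothetical triangular matrix with eigenvalues 2 and 3:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # triangular, so the eigenvalues are 2 and 3

ev_A = np.sort(np.linalg.eigvals(A).real)
ev_inv = np.sort(np.linalg.eigvals(np.linalg.inv(A)).real)

# Eigenvalues of the inverse are the reciprocals of the eigenvalues of A.
assert np.allclose(np.sort(1.0 / ev_A), ev_inv)
```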

Activity 10.3.11.

Let \(A\) be a \(3 \times 3\) matrix. Prove that \(A\) has at least one real eigenvalue.
Solution.
The eigenvalues are the zeros of the characteristic polynomial \(\det (A - \lambda \Id)\text{.}\) Since this is a \(3 \times 3\) matrix, the polynomial is a cubic. Any cubic has a real root (since cubics are continuous and the limits going to \(\pm \infty\) are positive in one direction and negative in the other -- meaning the graph must cross the axis). Therefore, there is at least one real eigenvalue.
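This can be probed empirically. A sketch, assuming NumPy is available, that checks random \(3 \times 3\) matrices for a real eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((3, 3))
    ev = np.linalg.eigvals(A)
    # Complex eigenvalues of a real matrix come in conjugate pairs,
    # so an odd-sized real matrix always has at least one real eigenvalue.
    assert np.sum(np.abs(ev.imag) < 1e-9) >= 1
```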

Activity 10.3.12.

Prove that rotations in \(\RR^2\) have no real eigenvalues. (Exclude the trivial rotation, which is just the identity matrix.)
Solution.
I could do this algebraically by applying the algorithm to the generic rotation matrix. However, this is easier to argue conceptually. An eigenvector would have to span a line that the rotation maps back onto itself. A rotation takes any direction and moves it by some angle \(\theta\) in a counter-clockwise direction, so no line through the origin is preserved. (The rotation by \(\pi\) is the one exception: it sends every vector \(v\) to \(-v\text{,}\) so it has eigenvalue \(-1\text{.}\) For any other non-trivial angle, no direction is preserved or reversed.) Since an eigenvector is impossible, an eigenvalue is likewise impossible.
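The conceptual argument can be confirmed numerically for a sample angle. A sketch, assuming NumPy is available:

```python
import numpy as np

theta = np.pi / 3   # a sample angle that is not a multiple of pi
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

ev = np.linalg.eigvals(R)
# The eigenvalues are e^{+i*theta} and e^{-i*theta}: not real for this theta.
assert np.all(np.abs(ev.imag) > 1e-9)
```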

Activity 10.3.13.

Prove that reflections in \(\RR^2\) have 1 and \((-1)\) as their two eigenvalues and that their eigenvectors are perpendicular. Also, argue that these two conditions completely describe reflections (i.e., any matrix with these two properties must be a reflection).
Solution.
As with the rotations, this is easier to argue conceptually. A reflection in \(\RR^2\) is a reflection over a line. Anything on the line of reflection doesn’t move. Therefore, these vectors are eigenvectors with eigenvalue \(\lambda = 1\text{.}\) The perpendicular direction to the line is directly flipped. Flipping a vector is the same as multiplication by \(-1\text{,}\) so these vectors are eigenvectors with eigenvalue \(\lambda = -1\text{.}\) Since there can be at most two eigenvalues, this finishes the argument. By construction, the eigenvectors are perpendicular.
If I started with two perpendicular eigenvectors, one with eigenvalue \(\lambda = 1\) and one with eigenvalue \(\lambda = -1\text{,}\) then I can essentially repeat the argument to produce a reflection. One line is fixed, and the perpendicular line is flipped. Any other vector can be written as a combination of the two eigenvectors: the piece along the line is preserved, and the piece perpendicular to the line is flipped. This is exactly a reflection over the fixed line.
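Both properties can be checked on a concrete reflection. A sketch, assuming NumPy is available, using the reflection over the \(x\)-axis:

```python
import numpy as np

F = np.array([[1.0, 0.0],
              [0.0, -1.0]])   # reflection over the x-axis

ev, V = np.linalg.eig(F)
# The eigenvalues are 1 and -1.
assert np.allclose(np.sort(ev.real), [-1.0, 1.0])
# The corresponding eigenvectors are perpendicular.
assert np.isclose(V[:, 0] @ V[:, 1], 0.0)
```

A reflection is also its own inverse, which is consistent with both eigenvalues squaring to 1.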

Subsection 10.3.4 Conceptual Review Questions

  • What are eigenvalues and eigenvectors?
  • How do determinants help to calculate eigenvalues?
  • What is an eigenspace? What is a spectrum?