
Section 4.5 Regular Singular Points

Let me remind you of the standard form for a homogeneous second order linear equation.
\begin{equation*} y^{\prime \prime} + P(t) y^\prime + Q(t) y = 0 \end{equation*}
Recall that this differential equation has an ordinary point at \(t_0\) if \(P(t)\) and \(Q(t)\) are analytic at \(t_0\text{.}\) I’ve dealt with solutions at ordinary points; now I will consider solutions at singular points. There are particular types of singular points which are reasonable to deal with, since the functions \(P\) and \(Q\) are quite close to being analytic. I’m going to use a definition from complex variables to understand this situation, since it gives a very convenient term for this section.

Definition 4.5.1.

Let \(f(t)\) be a function which is not defined at \(t = t_0\text{.}\) The function has a pole of order \(d \in \NN\) at \(t_0\) if the limit
\begin{equation*} \lim_{t \rightarrow t_0} (t-t_0)^d f(t) \end{equation*}
converges to a finite number, but does not converge for any integer exponent smaller than \(d\text{.}\)
The archetypical example is
\begin{equation*} f(t) = \frac{1}{t^d} \end{equation*}
which has a pole of order \(d\) at \(t=0\text{.}\) However, functions which are not rational functions can also have poles. Having a pole of order \(d\) at a point means that, in some way, the function resembles \(f(t) = \frac{1}{(t-t_0)^d}\) near the point \(t_0\text{.}\) What do I mean by ‘resembles’? Well, for one, the function \(f(t) = \frac{1}{(t-t_0)^d}\) always has a vertical asymptote at \(t = t_0\text{.}\) Any function with a pole at \(t = t_0\) will also have a vertical asymptote.
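For example, \(f(t) = \frac{\sin t}{t^3}\) is not a rational function, but it has a pole of order 2 at \(t = 0\text{,}\) since
\begin{equation*} \lim_{t \rightarrow 0} t^2 \cdot \frac{\sin t}{t^3} = \lim_{t \rightarrow 0} \frac{\sin t}{t} = 1 \end{equation*}
while the limit with the smaller exponent, \(\lim_{t \rightarrow 0} t \cdot \frac{\sin t}{t^3} = \lim_{t \rightarrow 0} \frac{\sin t}{t^2}\text{,}\) does not converge.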
Now I can return to the differential equation and give the important definition for this section.

Definition 4.5.2.

A singular point \(t_0\) of this DE is called a regular singular point if the two functions \((t-t_0)P(t)\) and \((t-t_0)^2 Q(t)\) are analytic at \(t = t_0\text{.}\) Equivalently, \(P(t)\) can be analytic or have a pole of order 1; and \(Q(t)\) can be analytic, have a pole of order 1, or have a pole of order 2.

Subsection 4.5.1 Examples of Regular Singular Points

Example 4.5.3.

\begin{equation*} y^{\prime \prime} + \frac{y^\prime}{t^3(t-1)^2(t-2)(t-3)(t-4)} + \frac{y}{t^2 (t-1)(t-2)^2 (t-3)^2 (t-4)^3} = 0 \end{equation*}
All points other than \(t=0,1,2,3,4\) are ordinary points. Of the singular points, only \(t=2,3\) are regular singular points. The singular points \(t=0,1\) have \(P(t)\) with a pole of order 2 or higher (a square term or worse in the denominator), which is not allowed. The singular point \(t=4\) has \(Q(t)\) with a pole of order 3 (a cubic term in the denominator), which is also not allowed.
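To verify that \(t=2\) is a regular singular point, I can check the two limits directly.
\begin{align*} \lim_{t \rightarrow 2} (t-2) P(t) \amp = \lim_{t \rightarrow 2} \frac{1}{t^3(t-1)^2(t-3)(t-4)} = \frac{1}{(8)(1)(-1)(-2)} = \frac{1}{16}\\ \lim_{t \rightarrow 2} (t-2)^2 Q(t) \amp = \lim_{t \rightarrow 2} \frac{1}{t^2(t-1)(t-3)^2(t-4)^3} = \frac{1}{(4)(1)(1)(-8)} = \frac{-1}{32} \end{align*}
Both limits are finite, which is exactly the condition that \(P\) has at worst a pole of order 1 and \(Q\) at worst a pole of order 2 at \(t=2\text{.}\)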

Example 4.5.4.

\begin{equation*} y^{\prime \prime} + (\cot t) y^\prime + y = 0 \end{equation*}
I use a limit to analyze the singular points. I’ll start with \(t=0\text{.}\)
\begin{equation*} \lim_{t \rightarrow 0} t \cot t = \lim_{t \rightarrow 0} \frac{ t \cos t}{\sin t} = 1 \end{equation*}
The limit shows that \(t \cot t\) is analytic at \(t=0\) (equivalently, that \(\cot t\) has a pole of order 1), so \(t=0\) is a regular singular point. Since cotangent is periodic, its behaviour at any integer multiple of \(\pi\) will be the same as its behaviour at 0, so all integer multiples of \(\pi\) are regular singular points.
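For example, at \(t = \pi\) the same kind of limit gives
\begin{equation*} \lim_{t \rightarrow \pi} (t-\pi) \cot t = \lim_{t \rightarrow \pi} \frac{(t-\pi) \cos t}{\sin t} = 1 \end{equation*}
since \(\sin t \approx -(t-\pi)\) and \(\cos t \approx -1\) near \(t = \pi\text{.}\)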

Subsection 4.5.2 The Method of Frobenius

The method of Frobenius is a method of constructing solutions at regular singular points. It relies on this existence theorem.

Theorem 4.5.5.

Let \(t_0\) be a regular singular point of the differential equation. Then there exists at least one solution of the form
\begin{equation*} y = (t-t_0)^r \sum_{n=0}^\infty c_n (t-t_0)^n \end{equation*}
for some \(r \in \RR\text{,}\) and this series converges on \(0 \lt |t-t_0| \lt R\) for some \(R \gt 0\text{.}\)

This is an extension of the idea of analytic solutions. The solution is analytic if \(r \in \NN\text{.}\) Otherwise, it is very close to analytic, differing only by the factor \((t-t_0)^r\text{.}\) If \(r\) is any negative real number, there is an asymptote at \(t=t_0\text{,}\) so I can’t evaluate there. However, I expect convergence on \(0 \lt |t-t_0| \lt R\text{.}\) If \(r\) is a negative integer, there is a name for these series (again, like ‘pole’, coming from complex analysis).

Definition 4.5.6.

A Laurent series is a series of the form
\begin{equation*} \sum_{n=-\infty}^\infty c_n (t-t_0)^n \end{equation*}
where \(c_n \in \RR\text{.}\) This is exactly the form of a Taylor series, but now negative exponents are also allowed.
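For example, dividing the Taylor series of \(e^t\) by \(t^2\) produces a Laurent series with two negative-exponent terms.
\begin{equation*} \frac{e^t}{t^2} = \frac{1}{t^2} + \frac{1}{t} + \frac{1}{2} + \frac{t}{6} + \frac{t^2}{24} + \ldots \end{equation*}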
Now I proceed as usual: I throw the form into the differential equation and see what happens. I expect to have a similar process for finding the coefficients \(c_n\) of the series. However, I also need a process for finding the number \(r\text{.}\) I’ll assume, for convenience in this derivation, that \(t_0 = 0\) is the regular singular point.
I know that \(tP(t)\) and \(t^2 Q(t)\) are analytic, from the definition of a regular singular point. This means that \(P(t)\) and \(Q(t)\) can be written in the form
\begin{align*} \amp P(t) = \frac{p_{-1}}{t} + \sum_{n=0}^\infty p_n t^n \amp \amp Q(t) = \frac{q_{-2}}{t^2} + \frac{q_{-1}}{t} + \sum_{n=0}^\infty q_n t^n\text{.} \end{align*}
I will call these expressions the Laurent forms for \(P\) and \(Q\text{.}\) The coefficients \(p_{-1}\) and \(q_{-2}\) will be useful a bit later, so here are two identities to calculate \(p_{-1}\) and \(q_{-2}\text{.}\)
\begin{align*} \amp p_{-1} = \lim_{t \rightarrow 0 } t P(t) \amp \amp q_{-2} = \lim_{t \rightarrow 0 } t^2 Q(t) \end{align*}
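As a quick illustration using Example 4.5.4, where \(P(t) = \cot t\) and \(Q(t) = 1\text{,}\) these identities give
\begin{align*} \amp p_{-1} = \lim_{t \rightarrow 0} t \cot t = 1 \amp \amp q_{-2} = \lim_{t \rightarrow 0} t^2 \cdot 1 = 0\text{.} \end{align*}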
Now I start the method of Frobenius by calculating the derivatives of \(y\) in this new series form.
\begin{align*} y \amp = t^r \sum_{n=0}^\infty c_n t^n = \sum_{n=0}^\infty c_n t^{n+r}\\ y^\prime \amp = \sum_{n=0}^\infty c_n (n+r) t^{n+r-1}\\ y^{\prime \prime} \amp = \sum_{n=0}^\infty c_n (n+r) (n+r-1) t^{n+r-2} \end{align*}
(Notice that I don’t necessarily lose terms when taking derivatives; if \(r\) is not an integer, there are no constant terms in the series which go to zero under differentiation. If \(r\) is an integer, I should make a note to worry about derivatives setting constant terms to zero.)
With these expressions for \(y\text{,}\) \(P\) and \(Q\text{,}\) I put it all together into the original DE.
\begin{align*} \sum_{n=0}^\infty c_n (n+r) (n+r-1) t^{n+r-2} + P(t) \sum_{n=0}^\infty c_n (n+r) t^{n+r-1} \amp \\ + Q(t) \sum_{n=0}^\infty c_n t^{n+r} \amp = 0\\ \sum_{n=0}^\infty c_n (n+r) (n+r-1) t^{n+r-2} \amp \\ + \left(\frac{p_{-1}}{t} + \sum_{n=0}^\infty p_n t^n \right) \sum_{n=0}^\infty c_n (n+r) t^{n+r-1} \amp\\ + \left( \frac{q_{-2}}{t^2} + \frac{q_{-1}}{t} + \sum_{n=0}^\infty q_n t^n \right) \sum_{n=0}^\infty c_n t^{n+r} \amp = 0 \end{align*}
This is quite a mess: I have \(r\) to determine as well as the series coefficients. However, I can focus on the coefficient of the leading term (\(t^{r-2}\)).
\begin{align*} r(r-1) c_0 t^{r-2} + \frac{p_{-1}}{t} r c_0 t^{r-1} + \frac{q_{-2}}{t^2} c_0 t^r \amp = 0\\ \left( r(r-1) + p_{-1} r + q_{-2} \right) c_0 t^{r-2} \amp = 0\\ r(r-1) + p_{-1}r + q_{-2} \amp = 0 \end{align*}
The division in the last step relies on the assumption that \(c_0 \neq 0\text{.}\) This assumption is necessary for the Method of Frobenius to work.

Definition 4.5.7.

The quadratic
\begin{equation*} r(r-1) + p_{-1}r + q_{-2} = 0 \end{equation*}
is called the indicial equation.
So, before I proceed to find the recurrence relations and the series coefficients, I use this equation to determine \(r\text{.}\) After finding \(r\text{,}\) the method looks very similar to solutions at ordinary points. If there are two real roots of the indicial equation, it seems I’ll have to repeat the same process for each root. However, in practice, I can leave \(r\) undetermined and do the process once, only inserting a value for \(r\) quite late in the process.
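For instance, for the equation of Example 4.5.4 at \(t_0 = 0\text{,}\) the values \(p_{-1} = 1\) and \(q_{-2} = 0\) calculated above give the indicial equation
\begin{equation*} r(r-1) + r = r^2 = 0 \end{equation*}
which has only the repeated root \(r = 0\text{.}\)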

Subsection 4.5.3 Examples

Example 4.5.8.

\begin{align*} 3ty^{\prime \prime} + y^\prime - y \amp = 0 \end{align*}
Let me set up and label the pieces I will need.
\begin{align*} P \amp = \frac{1}{3t}\\ Q \amp = \frac{-1}{3t}\\ tP \amp = \frac{1}{3} \implies p_{-1} = \frac{1}{3}\\ t^2 Q \amp = \frac{-t}{3} \implies q_{-2} = 0\\ r(r-1) + \frac{r}{3} \amp = 0\\ 3r^2 - 2r = 0 \implies r \amp = 0 \text{ or } r = \frac{2}{3} \end{align*}
I’ll deal with the \(r=0\) case first. (Note that \(r=0\) means I get a conventional Taylor series solution.) I go through the conventional steps: taking the derivatives of the Taylor series, inserting those derivatives into the equation, taking the powers of \(t\) into the sums, shifting to adjust the exponents, pulling out coefficients to make the indices match, combining the sums, and then grouping everything to find the recurrence relation.
\begin{align*} 3t \sum_{n=2}^\infty c_n (n) (n-1) t^{n-2} + \sum_{n=1}^\infty c_n (n) t^{n-1} - \sum_{n=0}^\infty c_n t^{n} \amp = 0\\ \sum_{n=2}^\infty 3c_n (n) (n-1) t^{n-1} + \sum_{n=1}^\infty c_n (n) t^{n-1} - \sum_{n=0}^\infty c_n t^{n} \amp = 0\\ \sum_{n=1}^\infty 3c_{n+1} (n+1) n t^n + \sum_{n=0}^\infty c_{n+1} (n+1) t^n - \sum_{n=0}^\infty c_n t^{n} \amp = 0\\ c_1 - c_0 + \sum_{n=1}^\infty \left[ 3(n+1)n c_{n+1} + (n+1) c_{n+1} - c_n \right] t^n \amp = 0\\ c_{n+1} = \frac{c_n}{3(n^2+n) + n+1} = \frac{c_n}{3n^2 +4n +1} \amp = \frac{c_n}{(3n+1)(n+1)} \end{align*}
I use the recurrence relation to start calculating terms. I use the constant term \((c_1 - c_0)\) to determine \(c_1\) and apply the recurrence relation to all later terms. This is only a degree 1 recurrence relation, so I only need one unknown (\(c_0\)) to start.
\begin{align*} c_0 \amp = c_0\\ c_1 \amp = c_0\\ c_2 \amp = \frac{c_1}{(4)(2)} = \frac{c_0}{(4)(2)}\\ c_3 \amp = \frac{c_2}{(7)(3)} = \frac{c_0}{(7)(4)(3)(2)}\\ c_4 \amp = \frac{c_3}{(10)(4)} = \frac{c_0}{(10)(7)(4)(4)(3)(2)}\\ c_5 \amp = \frac{c_4}{(13)(5)} = \frac{c_0}{(13)(10)(7)(4)(5)(4)(3)(2)} \end{align*}
Now I intuit a general pattern and put that pattern into the Taylor series form.
\begin{align*} c_n \amp = \frac{c_0}{n! (4)(7)(10) \ldots (3n-2)}\\ y_1 \amp = 1 + \sum_{n=1}^\infty \frac{t^n}{n! (4)(7)(10) \ldots (3n-2)} \end{align*}
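As an optional check on the recurrence and the pattern, here is a small computational sketch (this assumes a Python environment with the sympy library available; it is a sanity check, not part of the derivation).

import sympy as sp

t = sp.symbols('t')
N = 10

# Build coefficients from the recurrence c_{n+1} = c_n / ((3n+1)(n+1)), with c_0 = 1.
c = [sp.Integer(1)]
for n in range(N):
    c.append(c[-1] / ((3*n + 1) * (n + 1)))

# Form the degree-N partial sum of y_1 and substitute it into 3t y'' + y' - y.
y = sum(c[n] * t**n for n in range(N + 1))
residual = sp.expand(3*t*sp.diff(y, t, 2) + sp.diff(y, t) - y)

# Every term of degree below N cancels; only the truncation term -c_N t^N survives.
print(residual)

If the recurrence relation were wrong, terms of lower degree would fail to cancel and would appear in the printed residual.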
This is the solution for \(r=0\text{.}\) Now I proceed to the \(r=\frac{2}{3}\) case. This is a non-integer exponent, so the solution is not a conventional Taylor series. However, the steps are nearly the same: I take the derivatives and put them into the equation.
\begin{align*} 3t \sum_{n=0}^\infty c_n \left(n + \frac{2}{3} \right) \left(n + \frac{2}{3}-1 \right) t^{n + \frac{2}{3}-2} \amp \\ + \sum_{n=0}^\infty c_n \left(n + \frac{2}{3} \right) t^{n + \frac{2}{3}-1} - \sum_{n=0}^\infty c_n t^{n + \frac{2}{3}} \amp = 0 \end{align*}
At this point, I’m going to factor \(t^{\frac{2}{3}}\) out of all the terms. This will leave me with just integer exponents inside, which I can deal with in the same method as Taylor series solutions. This kind of manipulation, to factor out a common exponent, will always be available for solutions when \(r\) is not an integer. Now I proceed with the same Taylor series steps: shift to adjust the exponents, pull out terms to match the indices, combine sums, and look for the recurrence relation.
\begin{align*} t^{\frac{2}{3}} \left[ \sum_{n=0}^\infty 3c_n \left(n+ \frac{2}{3} \right) \left(n - \frac{1}{3} \right) t^{n-1} + \sum_{n=0}^\infty c_n \left(n+ \frac{2}{3} \right) t^{n-1} - \sum_{n=0}^\infty c_n t^{n} \right] \amp = 0\\ t^{\frac{2}{3}} \left[ \sum_{n=0}^\infty 3c_n \left(n + \frac{2}{3} \right) \left( n - \frac{1}{3} \right) t^{n -1} + \sum_{n=0}^\infty c_n \left(n + \frac{2}{3} \right) t^{n-1} - \sum_{n=1}^\infty c_{n-1} t^{n-1} \right] \amp = 0\\ t^{\frac{2}{3}} \left[ \left( 3 \cdot \frac{2}{3} \cdot \left( \frac{-1}{3} \right) + \frac{2}{3} \right) c_0 t^{-1} \right. \amp \\ \left. + \sum_{n=1}^\infty \left[ 3 c_n \left( n+ \frac{2}{3} \right) \left( n - \frac{1}{3} \right) + c_n \left( n + \frac{2}{3} \right) - c_{n-1} \right] t^{n-1} \right] \amp = 0\\ \left[ \left( n + \frac{2}{3} \right) \left( 3n - 1 + 1 \right) \right] c_n - c_{n-1} \amp = 0\\ c_n = \frac{c_{n-1}}{ \left( n + \frac{2}{3} \right) (3n)} = \frac{c_{n-1}}{(3n+2)n} \amp\\ c_{n+1} = \frac{c_n}{(3n+5)(n+1)} \amp \end{align*}
The term in front of \(c_0\) reduces to \(0\text{,}\) so \(c_0\) is free. After the first term, there is a degree 1 recurrence relation, so I start calculating terms.
\begin{align*} c_0 \amp = c_0\\ c_1 \amp = \frac{c_0}{5}\\ c_2 \amp = \frac{c_1}{(8)(2)} = \frac{c_0}{(8)(5)(2)}\\ c_3 \amp = \frac{c_2}{(11)(3)} = \frac{c_0}{(5)(8)(11)(2)(3)}\\ c_4 \amp = \frac{c_3}{(14)(4)} = \frac{c_0}{(5)(8)(11)(14)(2)(3)(4)}\\ c_5 \amp = \frac{c_4}{(17)(5)} = \frac{c_0}{(5)(8)(11)(14)(17)(2)(3)(4)(5)} \end{align*}
I intuit the pattern and then insert it in the series.
\begin{align*} c_n \amp = \frac{c_0}{n! (5)(8)(11) \ldots (3n+2)}\\ y_2 \amp = t^{\frac{2}{3}} + \sum_{n=1}^\infty \frac{t^{n + \frac{2}{3}}}{n! (5)(8) \ldots (3n+2)} \end{align*}
This is the second solution. The general solution is any linear combination of the two solutions.
\begin{align*} y \amp = A \left( 1 + \sum_{n=1}^\infty \frac{t^n}{n! (4)(7)(10) \ldots (3n-2)} \right) + B \left( t^{\frac{2}{3}} + \sum_{n=1}^\infty \frac{t^{n + \frac{2}{3}}}{n! (5)(8) \ldots (3n+2)} \right) \end{align*}
The radius of convergence here is \(R = \infty\text{,}\) which can be calculated by ratio test. It is good to note, though, that there are no guarantees about the radius of convergence with the method of Frobenius.
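To justify the claim that \(R = \infty\) here, I apply the ratio test to the first series (the second is similar).
\begin{equation*} \lim_{n \rightarrow \infty} \left| \frac{c_{n+1} t^{n+1}}{c_n t^n} \right| = \lim_{n \rightarrow \infty} \frac{|t|}{(3n+1)(n+1)} = 0 \end{equation*}
The limit is 0 for every \(t\text{,}\) so the series converges for all \(t\text{.}\)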
In this example, I could have gone as far as the recurrence relation using an arbitrary \(r\) to avoid repetition. I do need to specify \(r\) once I get to the recurrence relation. However, when \(r\) is an integer, I will often have to return to the original differential equation and repeat the calculation for each \(r\text{,}\) in order to keep track of which terms are sent to zero in the series due to taking derivatives.

Example 4.5.9.

\begin{equation*} ty^{\prime \prime} + y = 0 \end{equation*}
Here \(P =0\) and \(Q=\frac{1}{t}\text{,}\) so \(p_{-1} = 0\) and \(q_{-2} = 0\text{.}\) That means the indicial equation is \(r(r-1) = 0\) with roots \(r=0\) and \(r=1\text{,}\) so I expect two Taylor series solutions. I start with the case \(r=0\text{.}\) I’m going more quickly through the steps of the process now, without repeating all the commentary.
\begin{align*} t \sum_{n=2}^\infty n(n-1)c_n t^{n-2} + \sum_{n=0}^\infty c_n t^n \amp = 0\\ \sum_{n=2}^\infty n(n-1)c_n t^{n-1} + \sum_{n=0}^\infty c_n t^n \amp = 0\\ \sum_{n=1}^\infty (n+1)nc_{n+1} t^n + \sum_{n=0}^\infty c_n t^n \amp = 0\\ c_0 + \sum_{n=1}^\infty \left[ (n+1)nc_{n+1} + c_n \right] t^n \amp = 0\\ c_{n+1} \amp = \frac{-c_n}{(n+1)(n)}\\ c_0 \amp = 0\\ c_1 \amp = c_1\\ c_2 \amp = \frac{-c_1}{2}\\ c_3 \amp = \frac{-c_2}{(3)(2)} = \frac{c_1}{(3)(2)(2)}\\ c_4 \amp = \frac{-c_3}{(4)(3)} = \frac{-c_1}{(4)(3)(3)(2)(2)}\\ c_5 \amp = \frac{-c_4}{(5)(4)} = \frac{c_1}{(5)(4)(4)(3)(3)(2)(2)}\\ c_n \amp = \frac{(-1)^{n+1} c_1}{n! (n-1)!}\\ y_1 \amp = \sum_{n=1}^\infty \frac{(-1)^{n+1} t^n}{n!(n-1)!} \end{align*}
Then I calculate with \(r=1\text{.}\) Note the bounds in this case: since the series doesn’t have a constant term, I only lose one term in the second derivative and the index only shifts by one.
\begin{align*} t \sum_{n=1}^\infty (n+1)nc_n t^{n-1} + \sum_{n=0}^\infty c_n t^{n+1} \amp = 0\\ \sum_{n=1}^\infty (n+1)nc_n t^n + \sum_{n=0}^\infty c_n t^{n+1} \amp = 0\\ \sum_{n=1}^\infty (n+1)nc_n t^n + \sum_{n=1}^\infty c_{n-1} t^n \amp = 0\\ (n+1)n c_n + c_{n-1} \amp = 0\\ c_n \amp = \frac{-c_{n-1}}{(n+1)(n)}\\ c_{n+1} \amp = \frac{-c_n}{(n+2)(n+1)}\\ c_0 \amp = c_0\\ c_1 \amp = \frac{-c_0}{2}\\ c_2 \amp = \frac{-c_1}{(3)(2)} = \frac{c_0}{(3)(2)(2)}\\ c_3 \amp = \frac{-c_2}{(4)(3)} = \frac{-c_0}{(4)(3)(3)(2)(2)}\\ c_4 \amp = \frac{-c_3}{(5)(4)} = \frac{c_0}{(5)(4)(4)(3)(3)(2)(2)}\\ c_n \amp = \frac{(-1)^{n} c_0}{(n+1)! n!}\\ y_2 \amp = \sum_{n=0}^\infty \frac{(-1)^n t^{n+1}}{(n+1)!n!}\\ y_2 \amp = \sum_{n=1}^\infty \frac{(-1)^{n+1} t^n}{n!(n-1)!} \end{align*}
Very curiously, I get the same series! The two roots don’t lead to two independent series, but to the same series: \(y_2 = y_1\text{.}\) I would need other information to get a second solution. In general, finding another solution can be quite difficult. With the method of Frobenius, I am not guaranteed to find both solutions, and this situation where both roots lead to the same series can happen.

Subsection 4.5.4 Multiple Solutions in the Method of Frobenius

There is a theorem which deals with the situation in the previous example, where both roots gave the same series.
The proof of this theorem relies on another differential equation technique called reduction of order. I’ve chosen not to cover that technique in this course, but it is good to be aware that it exists. In general, if there is one solution to a second order linear DE, then reduction of order is a method for using that solution to change the second order DE into a first order DE, which can then be solved by first order methods. Let me just state the theorem for you; this can be used to deal with the third case in the Method of Frobenius to find the second linearly independent solution.

Theorem 4.5.10.

Let \(t_0\) be a regular singular point of the differential equation, and suppose the method of Frobenius produces only one solution \(y_1(t)\text{.}\) Then there is a second, linearly independent solution of the form
\begin{equation*} y_2(t) = y_1(t) \ln |t-t_0| + (t-t_0)^r \sum_{n=0}^\infty b_n (t-t_0)^n \end{equation*}
for some \(r \in \RR\text{,}\) where the coefficients \(b_n\) can be determined by substituting this form into the differential equation.