Section 4.2 Ordinary Points

Subsection 4.2.1 Existence of Solutions

Consider a homogeneous linear second order differential equation.
\begin{equation*} y^{\prime \prime} + P(t) y^\prime + Q(t) y= 0 \end{equation*}
In a departure from Chapter 3, I will now allow \(P\) and \(Q\) to be functions instead of just constants. Before I jump into the method of series solutions, I need some theory about the existence of solutions.

Definition 4.2.1.

A point \(\alpha\) is called an ordinary point for the differential equation if \(P\) and \(Q\) are both analytic at \(\alpha\text{.}\) Otherwise it is called a singular point.

Theorem 4.2.2.

Suppose \(\alpha\) is an ordinary point of the differential equation. Then there exists a series solution centred at \(\alpha\text{,}\)
\begin{equation*} y = \sum_{n=0}^\infty c_n (t-\alpha)^n\text{,} \end{equation*}
and the radius of convergence of this series is at least the distance from \(\alpha\) to the nearest singular point of the equation.
This is a lovely theorem, but there is one catch. The ‘distance’ mentioned to the nearest singular point is actually distance in \(\CC\) to a possibly-complex singular point. In practice, this isn’t too much of a worry, but I should be careful with my assumptions about the radii of convergence.

Subsection 4.2.2 The Method for Solutions at Ordinary Points

I could describe a general method, but it would be a lengthy and abstract description. Keeping everything very general, it would be easy to lose track of the pieces. Instead, I’ll use examples to demonstrate the method.

Example 4.2.3.

I’ll start with a simple and known DE, to see how the method works.
\begin{equation*} y^{\prime \prime} + y = 0 \end{equation*}
Both \(P\) and \(Q\) are constant, so they are analytic everywhere, even in \(\CC\text{.}\) Since there are no singular points, I expect solutions which have Taylor series defined for all of \(\RR\text{.}\) I can centre them at \(0\) for convenience. I assume the solution has a Taylor series.
\begin{equation*} y = \sum_{n=0}^\infty c_n t^n \end{equation*}
I put this into the DE and calculate, keeping in mind the tools for manipulating the indices of series.
\begin{align*} y \amp = \sum_{n=0}^\infty c_n t^n\\ y^\prime \amp = \sum_{n=1}^\infty c_n nt^{n-1}\\ y^{\prime\prime} \amp = \sum_{n=2}^\infty c_n n(n-1)t^{n-2}\\ y^{\prime \prime} + y \amp = \sum_{n=2}^\infty c_n n(n-1)t^{n-2} + \sum_{n=0}^\infty c_n t^n = 0 \end{align*}
Now the DE has become an equation of infinite series. To solve this equation, I would like to express it as a single series. To do that, I will usually need to manipulate the indices. In this case, I shift the first series by 2 and then combine.
\begin{align*} y^{\prime \prime} + y \amp = \sum_{n=0}^\infty c_{n+2}(n+2)(n+1)t^n + \sum_{n=0}^\infty c_n t^n = 0\\ \amp = \sum_{n=0}^\infty \left[ c_{n+2}(n+2)(n+1) + c_n \right] t^n = 0 \end{align*}
Now I rewrite this last equation in a slightly different form.
\begin{equation*} \sum_{n=0}^\infty \left[ c_{n+2}(n+2)(n+1) + c_n \right] t^n = \sum_{n=0}^\infty 0 t^n \end{equation*}
I’ve just written the constant function \(0\) as a series. Now, two series are equal only if every coefficient is the same. Therefore, I can take this equation and turn it into equations of coefficients. There are infinitely many such equations (one for each natural number), but that’s alright, since they follow a pattern. For every \(n \in \NN\text{,}\) there is a recurrence relation.
\begin{equation*} (n+1)(n+2)c_{n+2} + c_n = 0 \implies c_{n+2} = \frac{-c_n}{(n+1)(n+2)} \end{equation*}
To find the coefficients of the series that solves this DE, I have to solve this recurrence relation. I’ve essentially translated the problem: what was a differential equation problem is now a recurrence relation problem. I hope that solving the recurrence relation is easier than the original DE. There are various techniques in other parts of mathematics for solving recurrence relations. However, I am not assuming familiarity with those techniques; in this course, I will take a naive approach and try to solve them by inspection.
Before solving the recurrence relation, I need two starting terms, \(c_0\) and \(c_1\text{.}\) This works out very well, since \(c_0 = y(0)\) and \(c_1 = y^\prime(0)\text{.}\) The initial seed terms for the recurrence relation are exactly initial conditions for the DE. I can either leave them as variables \(a\) and \(b\text{,}\) or set them to specific values.
Now I set \(c_0 = 1\) and \(c_1 = 0\text{,}\) imposing specific initial conditions. (If I am not given initial conditions, I can leave these two as unknowns and carry them all the way through the process.) Then I use the recurrence relation to calculate some terms. The general process of solving by inspection involves calculating the first few terms and trying to intuit a pattern in them.
\begin{align*} c_0 \amp = 1\\ c_1 \amp = 0\\ c_2 \amp = \frac{-c_0}{(1)(2)} = \frac{-1}{2}\\ c_3 \amp = \frac{-c_1}{(2)(3)} = 0\\ c_4 \amp = \frac{-c_2}{(3)(4)} = \frac{1}{2\cdot 3 \cdot 4}\\ c_5 \amp = \frac{-c_3}{(4)(5)} = 0\\ c_6 \amp = \frac{-c_4}{(5)(6)} = \frac{-1}{2\cdot 3 \cdot 4 \cdot 5 \cdot 6 }\\ c_7 \amp = \frac{-c_5}{(6)(7)} = 0\\ c_8 \amp = \frac{-c_6}{(7)(8)} = \frac{1}{8!}\\ c_9 \amp = \frac{-c_7}{(8)(9)} = 0 \end{align*}
Having calculated a number of terms, I can see a pattern becoming clear.
\begin{align*} c_{2n} \amp = \frac{(-1)^n}{(2n)!}\\ c_{2n+1} \amp = 0 \end{align*}
Only the even terms are nonzero. If a series has different behaviour between its even and its odd terms, I use the trick of writing \((2n)\) to express the even terms and \((2n+1)\) to express the odd terms. Since the odd terms vanish, I can write the resulting series involving only the even terms.
\begin{equation*} y = \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!} t^{2n} \end{equation*}
This is exactly the Taylor series for cosine, which makes perfect sense, since cosine solves the original DE and matches the initial conditions I imposed.
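If I want a sanity check, I can iterate the recurrence relation on a computer. Here is a minimal sketch in Python (the function name `series_coefficients` is my own invention), using exact rational arithmetic to confirm that the seeds \(c_0 = 1\text{,}\) \(c_1 = 0\) reproduce the cosine coefficients \((-1)^n/(2n)!\text{:}\)

```python
from fractions import Fraction
from math import factorial

def series_coefficients(c0, c1, n_terms):
    """Iterate the recurrence c_{n+2} = -c_n / ((n+1)(n+2)) from seeds c0, c1."""
    c = [Fraction(c0), Fraction(c1)]
    for n in range(n_terms - 2):
        c.append(-c[n] / ((n + 1) * (n + 2)))
    return c

# Seeds c0 = 1, c1 = 0 give the Taylor coefficients of cosine.
c = series_coefficients(1, 0, 10)
assert all(c[2*k] == Fraction((-1)**k, factorial(2*k)) for k in range(5))
assert all(c[2*k + 1] == 0 for k in range(5))
```

The exact arithmetic matters here: floating point would hide whether the coefficients are exactly \((-1)^n/(2n)!\) or merely close.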

Example 4.2.4.

This is the first example which has non-constant coefficients, since \(Q = t\text{.}\)
\begin{equation*} y^{\prime \prime} + ty = 0 \end{equation*}
\(P=0\) and \(Q=t\text{,}\) both of which are analytic everywhere, so I expect a series solution with infinite radius of convergence. I repeat the calculation from the previous example.
\begin{align*} y \amp = \sum_{n=0}^\infty c_n t^n\\ y^\prime \amp = \sum_{n=1}^\infty c_n nt^{n-1}\\ y^{\prime\prime} \amp = \sum_{n=2}^\infty c_n n(n-1)t^{n-2}\\ y^{\prime \prime} + ty \amp = \sum_{n=2}^\infty c_n n(n-1)t^{n-2} + t\sum_{n=0}^\infty c_n t^n\\ y^{\prime \prime} + ty \amp = \sum_{n=2}^\infty c_n n(n-1)t^{n-2} + \sum_{n=0}^\infty c_n t^{n+1} \end{align*}
I’ve brought the \(t\) into the second series, which I have to do before I have any hope of combining them. Here, I could try to shift indices, but that wouldn’t help yet since the exponents are different. I have to make the exponents match first. I’ll shift both series so that the exponents are both \(t^n\text{.}\) This is shifting the index of the first series down by two and the index of the second series up by one.
\begin{align*} \amp = \sum_{n=0}^\infty c_{n+2} (n+2)(n+1) t^n + \sum_{n=1}^\infty c_{n-1} t^n \end{align*}
Now the exponents are the same, but the starting indices are not. Shifting would mess up the exponents. So, instead, I take out terms. I’ll take out the \(0\)th term from the first series so that they both start at 1. After that, I can finally combine the series.
\begin{align*} \amp = 2c_2 + \sum_{n=1}^\infty c_{n+2} (n+2)(n+1) t^n + \sum_{n=1}^\infty c_{n-1} t^n \end{align*}
\begin{align*} \amp = 2c_2 + \sum_{n=1}^\infty \left[ c_{n+2} (n+2)(n+1) + c_{n-1} \right] t^n = 0 \end{align*}
Since the right side is zero, all these coefficients must be zero. The constant coefficient on the left is just \(2c_2\text{,}\) so \(2c_2=0\text{,}\) which implies \(c_2 = 0\text{.}\) For the rest, \(c_0\) and \(c_1\) are unknown and I can use the series to generate a recurrence relation.
\begin{align*} (n+2)(n+1)c_{n+2} + c_{n-1} \amp = 0\\ c_{n+2} \amp = \frac{-c_{n-1}}{(n+2)(n+1)}\\ c_{n+3} \amp = \frac{-c_n}{(n+3)(n+2)} \end{align*}
I shifted the recurrence relation in the last step, to make it a bit easier to read. This is a third order recurrence relation, which might imply that three starting values are needed. However, there are still only two parameters. This isn’t a problem, since I already calculated that \(c_2=0\text{,}\) which is essentially the third starting value. To try to see a pattern in the recurrence relation, I calculate some coefficients.
\begin{align*} c_0 \amp = c_0\\ c_1 \amp = c_1\\ c_2 \amp = 0\\ c_3 \amp = \frac{-c_0}{(3)(2)}\\ c_4 \amp = \frac{-c_1}{(4)(3)}\\ c_5 \amp = \frac{-c_2}{(5)(4)} = 0\\ c_6 \amp = \frac{-c_3}{(6)(5)} = \frac{c_0}{(6)(5)(3)(2)}\\ c_7 \amp = \frac{-c_4}{(7)(6)} = \frac{c_1}{(7)(6)(4)(3)}\\ c_8 \amp = \frac{-c_5}{(8)(7)} = 0\\ c_9 \amp = \frac{-c_6}{(9)(8)} = \frac{-c_0}{(9)(8)(6)(5)(3)(2)}\\ c_{10} \amp = \frac{-c_7}{(10)(9)} = \frac{-c_1}{(10)(9)(7)(6)(4)(3)}\\ c_{11} \amp = \frac{-c_8}{(11)(10)} = 0 \end{align*}
I see that there are three groups of terms. Like before, where I used an indexing trick to refer to even and odd terms, I can use a similar indexing trick to refer to these three groups of terms.
  • Terms of the form \(c_{3n+2}\) are all zero, since they all relate back to \(c_2\text{.}\)
  • Terms of the form \(c_{3n}\) all involve \(c_0\text{.}\)
  • Terms of the form \(c_{3n+1}\) all involve \(c_1\text{.}\)
Expressing the coefficients in closed form is more difficult than before, but I can still intuit the general form. I use some factorial tricks to express the coefficients: the denominators are missing every third term, so if I multiply numerator and denominator by those missing terms, I can write the denominator as a factorial.
\begin{align*} c_{3n} \amp = \frac{(-1)^n (1)(4)(7)\ldots(3n-2)}{(3n)!} c_0\\ c_{3n+1} \amp = \frac{(-1)^n (2)(5)(8)\ldots(3n-1)}{(3n+1)!} c_1\\ c_{3n+2} \amp = 0 \end{align*}
Then I group the \(c_0\) terms into one series and the \(c_1\) terms into another, to get a general solution.
\begin{align*} y = \amp c_0 \left[ 1 + \sum_{n=1}^\infty \frac{(-1)^n (1)(4)(7)\ldots(3n-2)}{(3n)!} t^{3n} \right] \\ \amp + c_1 \left[ t + \sum_{n=1}^\infty \frac{(-1)^n (2)(5)(8)\ldots(3n-1)}{(3n+1)!} t^{3n+1} \right] \end{align*}
If I need to, I can easily check that each of these series has infinite radius of convergence.
I might wonder: what are these functions? These are two new, strange and unfamiliar functions. Unless I have the good fortune (as in the previous example) to recognize the resulting Taylor series, I simply treat the solutions as new functions. However, I still consider the DE solved, since Taylor series can be used to define new functions. I can know a great deal about a function based on its Taylor series, so this is a sufficient threshold of information to consider the DE solved.
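Even when I don’t recognize the functions, I can still check the closed forms for the coefficients against the recurrence relation. A small Python sketch (the function name `coeffs_from_recurrence` is my own; the comparisons use the closed forms stated above):

```python
from fractions import Fraction
from math import factorial, prod

def coeffs_from_recurrence(c0, c1, n_terms):
    """Iterate c_{n+3} = -c_n / ((n+3)(n+2)) with the forced seed c2 = 0."""
    c = [Fraction(c0), Fraction(c1), Fraction(0)]
    for n in range(n_terms - 3):
        c.append(-c[n] / ((n + 3) * (n + 2)))
    return c

c = coeffs_from_recurrence(1, 1, 20)

# Closed forms: c_{3n} = (-1)^n (1)(4)...(3n-2)/(3n)! c0 and
#               c_{3n+1} = (-1)^n (2)(5)...(3n-1)/(3n+1)! c1, with c0 = c1 = 1.
for n in range(1, 6):
    num_0 = prod(range(1, 3*n - 1, 3))  # (1)(4)(7)...(3n-2)
    num_1 = prod(range(2, 3*n, 3))      # (2)(5)(8)...(3n-1)
    assert c[3*n] == Fraction((-1)**n * num_0, factorial(3*n))
    assert c[3*n + 1] == Fraction((-1)**n * num_1, factorial(3*n + 1))
    assert c[3*n + 2] == 0
```

The three residue classes of the index show up directly in the code: the loop checks each of the \(3n\text{,}\) \(3n+1\text{,}\) and \(3n+2\) families separately.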

Example 4.2.5.

\begin{align*} (t^2+1)y^{\prime \prime} + ty^\prime - y \amp = 0 \end{align*}
Before determining \(P(t)\) and \(Q(t)\text{,}\) I need to clear the coefficient of \(y^{\prime \prime}\text{.}\) This is important, since the definition of \(P(t)\) and \(Q(t)\) (and hence the location of ordinary points) depends on the form where the coefficient of the second derivative is \(1\text{.}\)
\begin{align*} y^{\prime \prime} + \left( \frac{t}{t^2+1} \right) y^{\prime} - \left( \frac{1}{t^2+1} \right) y \amp = 0\\ P(t) \amp = \frac{t}{t^2+1}\\ Q(t) \amp = \frac{1}{t^2+1} \end{align*}
\(0\) is an ordinary point, but what is the distance to the nearest singular point? Here, I need to remember to look for singularities in \(\CC\text{.}\) The denominators are undefined at \(\pm \imath\text{,}\) which is 1 unit away from the origin. Therefore, centered at \(0\text{,}\) I expect a radius of \(R=1\text{.}\)
(As an aside, note that I can centre the series wherever I wish; I default to \(0\) simply because it is convenient and familiar. Each choice of a centre point gives a different series solution, with a different radius of convergence. At \(t=1\text{,}\) I would have \(R= \sqrt{2}\) (the distance to \(\imath\) in \(\CC\)). At \(t=4\text{,}\) I would have \(R = \sqrt{17}\text{.}\))
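These distances are ordinary complex moduli, which I can confirm numerically (the helper `radius_at` is a name I made up for this check):

```python
# Distance in the complex plane from a chosen centre to the singularities
# +i and -i of t/(t**2 + 1); the expected radius of convergence of a series
# solution centred there is the distance to the nearer singularity.
def radius_at(centre):
    return min(abs(centre - 1j), abs(centre + 1j))

assert radius_at(0) == 1.0
assert abs(radius_at(1) - 2**0.5) < 1e-12   # sqrt(2)
assert abs(radius_at(4) - 17**0.5) < 1e-12  # sqrt(17)
```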
This series solution starts with a lengthy calculation.
\begin{align*} y \amp = \sum_{n=0}^\infty c_n t^n\\ y^\prime \amp = \sum_{n=1}^\infty c_n nt^{n-1}\\ y^{\prime\prime} \amp = \sum_{n=2}^\infty c_n n(n-1)t^{n-2}\\ \amp (t^2+1)y^{\prime \prime} + ty^\prime - y \\ \amp = (t^2+1) \sum_{n=2}^\infty c_n n(n-1)t^{n-2} + t \sum_{n=1}^\infty c_n nt^{n-1} - \sum_{n=0}^\infty c_n t^n = 0 \end{align*}
First, I need to distribute the \((t^2+1)\text{,}\) which creates two series. At the same time, I take the powers of \(t\) into the series where they exist.
\begin{align*} 0 \amp = \sum_{n=2}^\infty n (n-1) c_n t^n + \sum_{n=2}^\infty n (n-1) c_n t^{n-2}\\ \amp + \sum_{n=1}^\infty n c_n t^n - \sum_{n=0}^\infty c_n t^n \end{align*}
Then I need to shift to make all the exponents \(t^n\text{.}\)
\begin{align*} \amp = \sum_{n=2}^\infty n (n-1) c_n t^n + \sum_{n=0}^\infty (n+2) (n+1) c_{n+2} t^n\\ \amp + \sum_{n=1}^\infty n c_n t^n - \sum_{n=0}^\infty c_n t^n \end{align*}
The highest starting index is 2. Therefore, I’ll pull out starting terms from any series that starts with an index below 2.
\begin{align*} \amp = \sum_{n=2}^\infty n (n-1) c_n t^n + 2c_2 + 6c_3t + \sum_{n=2}^\infty (n+2) (n+1) c_{n+2} t^n\\ \amp + c_1 t + \sum_{n=2}^\infty n c_n t^n - c_0 - c_1 t - \sum_{n=2}^\infty c_n t^n \end{align*}
Finally, I can combine the four sums into one. I’ll group all the other terms together as a constant and as a coefficient of \(t\text{.}\)
\begin{align*} \amp = (2c_2 - c_0) + (6c_3 + c_1 - c_1) t\\ \amp + \sum_{n=2}^\infty \left[ n (n-1) c_n + (n+2) (n+1) c_{n+2} + n c_n - c_n \right] t^n = 0 \end{align*}
I proceed to find the recurrence relation by setting all the coefficients to \(0\text{.}\) I leave \(c_0\) and \(c_1\) as unknowns. I’ll use some factoring and cancelling to make the recurrence relation more reasonable.
\begin{align*} 0 \amp = n (n-1) c_n + (n+2) (n+1) c_{n+2} + n c_n - c_n \\ (n+1) (n+2) c_{n+2} \amp = (1 - n - n(n-1)) c_n\\ c_{n+2} \amp = \frac{-c_n (n^2-1)}{(n+2)(n+1)} = \frac{-c_n (n+1)(n-1)}{(n+2)(n+1)}\\ \amp = \frac{ -c_n (n-1)}{n+2} \end{align*}
This is a second order recurrence relation. Here are the first few terms. (I must be careful to use the isolated expressions to find \(c_2\) and \(c_3\text{,}\) and then the standard recurrence relation for \(c_n\) where \(n \geq 4\)).
\begin{align*} c_0 \amp = c_0\\ c_1 \amp = c_1\\ c_2 \amp = \frac{c_0}{2}\\ c_3 \amp = 0\\ c_4 \amp = \frac{-c_2(1)}{4} = \frac{-c_0}{(2)(4)}\\ c_5 \amp = 0\\ c_6 \amp = \frac{-c_4(3)}{6} = \frac{c_0(3)}{(6)(4)(2)}\\ c_7 \amp = 0\\ c_8 \amp = \frac{-c_6(5)}{8} = \frac{-c_0(5)(3)}{(8)(6)(4)(2)}\\ c_9 \amp = 0\\ c_{10} \amp = \frac{-c_8(7)}{10} = \frac{c_0(7)(5)(3)}{(10)(8)(6)(4)(2)}\\ c_{2n+1} \amp = 0\\ c_{2n} \amp = \frac{(-1)^{n+1} (3)(5)(7)\ldots (2n-3)}{(2)(4)(6)(8) \ldots (2n)} c_0 = \frac{(-1)^{n+1} (2n-3)!}{(2)(4)(6) \ldots (2n-4) 2^n n!} c_0\\ \amp = \frac{(-1)^{n+1} (2n-3)!}{2^{n-2} (n-2)! 2^n n!} c_0 = \frac{(-1)^{n+1} (2n-3)!}{2^{2n-2} n! (n-2)!} c_0 \end{align*}
I have two cases. The odd terms all vanish, so the only place where I see \(c_1\) is the isolated term \(c_1 t\text{.}\) That implies that the linear polynomial \(c_1 t\) is a solution for any \(c_1\text{.}\) For the \(c_0\) terms, there is still a whole series. Putting this together gives the general solution. Note that I have to start the series at \(n=2\text{,}\) since the pattern doesn’t hold before that point.
\begin{equation*} y = c_0 \left[1 + \frac{t^2}{2} + \sum_{n=2}^\infty \frac{(-1)^{n+1} (2n-3)!}{2^{2n-2} n! (n-2)!} t^{2n} \right] + c_1 t \end{equation*}
If I look at the radius of convergence (due to the factorials in the numerator and denominator), I can calculate the expected value \(R=1\text{.}\) Note that the second term, the polynomial \(c_1 t\text{,}\) doesn’t have a domain restriction. It’s possible to find solutions that exceed the limitation of the radius of convergence; Theorem 4.2.2 guarantees a solution with a minimum radius of convergence but does not stipulate a maximum.
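As a final sanity check, I can iterate the recurrence relation numerically and verify that the partial sum of the series actually satisfies the original DE inside the radius of convergence. A Python sketch (the truncation order 40 and the sample point \(t = 0.3\) are arbitrary choices of mine):

```python
from fractions import Fraction

def coefficients_ex5(c0, c1, n_terms):
    """Seeds from the isolated terms: c2 = c0/2 and c3 = 0; thereafter
    the recurrence c_{n+2} = -(n-1) c_n / (n+2) for n >= 2."""
    c = [Fraction(c0), Fraction(c1), Fraction(c0, 2), Fraction(0)]
    for n in range(2, n_terms - 2):
        c.append(Fraction(-(n - 1), n + 2) * c[n])
    return c

# Check that the truncated series satisfies (t^2+1) y'' + t y' - y = 0
# at a point well inside the radius of convergence R = 1.
c = coefficients_ex5(1, 0, 40)
t = 0.3
y   = sum(float(c[n]) * t**n for n in range(len(c)))
yp  = sum(float(c[n]) * n * t**(n - 1) for n in range(1, len(c)))
ypp = sum(float(c[n]) * n * (n - 1) * t**(n - 2) for n in range(2, len(c)))
assert abs((t**2 + 1) * ypp + t * yp - y) < 1e-12
```

Because \(t = 0.3\) is well inside the radius of convergence, 40 terms are far more than enough for the residual to vanish to machine precision; closer to \(t = 1\) the convergence would be much slower.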