
Section 4.6 Bessel Functions

Subsection 4.6.1 The \(\Gamma\) Function

Before we get to a very interesting example of the method of Frobenius, we need to define the \(\Gamma\) function. (That symbol is an upper case Greek letter gamma, so read this name as the ‘gamma function’.) This is a ubiquitous and useful function which generalizes the factorial.

Definition 4.6.1.

The \(\Gamma\) function is defined by this integral.
\begin{equation*} \Gamma(t) = \int_0^\infty x^{t-1}e^{-x} dx \end{equation*}
The integral converges for \(t > 0\text{;}\) using the functional equation below, the \(\Gamma\) function extends to all of \(\RR\) except 0 and the negative integers. It is continuous and differentiable on \((0, \infty)\text{.}\)
There are several important properties of \(\Gamma(t)\text{:}\) first, as mentioned, it generalizes the factorial. The third line below follows from the first two, both of which can be proven from the integral definition.
\begin{align*} \Gamma(a+1) \amp = a \Gamma(a)\\ \Gamma(1) \amp = 1\\ \Gamma(n) \amp = (n-1)! \text{ for } n \in \NN \end{align*}
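As a quick numerical sanity check of the factorial property, here is a small Python sketch (assuming a Python 3 interpreter; math.gamma and math.factorial are standard library functions).

import math

# Gamma(n) should equal (n-1)! for every positive integer n.
for n in range(1, 8):
    print(n, math.gamma(n), math.factorial(n - 1))   # the two columns agree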
Asymptotically, the factorial nature of the \(\Gamma\) function means it grows faster than \(e^t\text{.}\)
If the positive integer values of the \(\Gamma\) function generalize the factorial, what can be expected for other values? The results are surprising. Look at the value of \(\Gamma(\frac{1}{2})\text{.}\) In this calculation, I use the substitution \(x = u^2\text{,}\) which gives \(dx = 2u \, du\) and \(x^{-\frac{1}{2}} = \frac{1}{u}\text{,}\) along with the well-known Gaussian integral \(\int_0^\infty e^{-u^2} \, du = \frac{\sqrt{\pi}}{2}\text{.}\)
\begin{align*} \Gamma \left( \frac{1}{2} \right) \amp = \int_0^\infty x^{-\frac{1}{2}} e^{-x} dx\\ \amp = 2 \int_0^\infty e^{-u^2} u \frac{1}{u} du\\ \amp = 2 \int_0^\infty e^{-u^2} du\\ \amp = 2 \frac{\sqrt{\pi}}{2} = \sqrt{\pi} \end{align*}
Then, from the factorial-like nature of \(\Gamma\text{,}\) all other half-integer values are multiples of \(\sqrt{\pi}\text{.}\)
\begin{equation*} \Gamma\left( \frac{2n+1}{2} \right) = \frac{ (1)(3)(5)(7) \ldots (2n-1) \sqrt{\pi}}{2^n} \end{equation*}
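This formula can be checked numerically as well. In the following sketch, the inner loop builds the odd product \((1)(3)(5)\ldots(2n-1)\text{,}\) which is empty when \(n = 0\text{.}\)

import math

# Check Gamma((2n+1)/2) = (1)(3)(5)...(2n-1) sqrt(pi) / 2^n.
for n in range(6):
    odd_product = 1
    for k in range(1, 2 * n, 2):   # 1, 3, ..., 2n-1; empty when n = 0
        odd_product *= k
    formula = odd_product * math.sqrt(math.pi) / 2**n
    print(n, math.gamma((2 * n + 1) / 2), formula)   # the columns agree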

Subsection 4.6.2 Bessel’s Equation

The method of Frobenius is used to solve a historically interesting equation: Bessel’s Equation. This equation shows up in harmonic problems with certain circular boundary conditions, such as the vibrations on a drum. It is also useful for springs with fatigue (where the spring constant gets weaker over time), the quantum model of the hydrogen atom, and various electrical/gravitational potentials (particularly those which are circularly or spherically symmetric). Its solutions are another piece of the mathematics of special functions.
Let \(\nu \in \RR\) (this \(\nu\) is the greek letter nu, not the roman letter ‘v’). Bessel’s equation of order \(\nu\) is the equation
\begin{equation*} t^2 y^{\prime \prime} + t y^\prime + (t^2 - \nu^2) y = 0\text{.} \end{equation*}
Written in standard form, \(0\) is a regular singular point of Bessel’s equation.
\begin{align*} P(t) \amp = \frac{1}{t} \implies p_{-1} = 1\\ Q(t) \amp = 1-\frac{\nu^2}{t^2} \implies q_{-2} = -\nu^2 \end{align*}
I write and solve the indicial equation.
\begin{align*} r(r-1) + r - \nu^2 \amp = 0\\ r^2 -r + r - \nu^2 \amp = 0\\ r^2 \amp = \nu^2\\ r \amp = \pm \nu \end{align*}
The roots are a positive and negative pair. I’ll start with \(r = \nu\text{,}\) assuming \(\nu \geq 0\) without loss of generality. I calculate the derivatives of \(y\text{.}\)
\begin{align*} y \amp = \sum_{n=0}^\infty c_n t^{n+\nu}\\ y^\prime \amp = \sum_{n=0}^\infty c_n(n+\nu) t^{n+\nu- 1}\\ y^{\prime\prime} \amp = \sum_{n=0}^\infty c_n(n+\nu)(n+\nu-1) t^{n+\nu- 2} \end{align*}
Then I proceed with a lengthy calculation where I pull the derivatives into the equation, distribute the \((t^2 - \nu^2)\) term, pull the powers of \(t\) into the sums, shift the sums to make the exponents match, pull out initial terms to make the starting indices match, and finally combine the sums into one sum. It’s a bit of a mess, but it uses the same steps that I’ve been using for a few sections.
\begin{align*} \amp t^2 y^{\prime \prime} + t y^\prime + (t^2 - \nu^2) y = 0\\ \amp t^2 \sum_{n=0}^\infty c_n(n+\nu)(n+\nu-1)t^{n + \nu - 2} + t \sum_{n=0}^\infty c_n(n+\nu) t^{n+\nu- 1} \\ \amp + t^2 \sum_{n=0}^\infty c_n t^{n+\nu} - \nu^2 \sum_{n=0}^\infty c_n t^{n+\nu} = 0\\ \amp \sum_{n=0}^\infty c_n(n+\nu)(n+\nu-1) t^{n+\nu} + \sum_{n=0}^\infty c_n(n+\nu) t^{n+\nu} \\ \amp + \sum_{n=0}^\infty c_n t^{n+\nu+2} - \sum_{n=0}^\infty \nu^2 c_n t^{n+\nu} = 0\\ \amp \sum_{n=0}^\infty c_n(n+\nu)(n+\nu-1) t^{n+\nu} + \sum_{n=0}^\infty c_n(n+\nu) t^{n+\nu} \\ \amp + \sum_{n=2}^\infty c_{n-2} t^{n+\nu} - \sum_{n=0}^\infty \nu^2 c_n t^{n+\nu} = 0\\ \amp \nu(\nu-1) c_0 t^\nu + (\nu+1) \nu c_1 t^{\nu+1} + \nu c_0 t^\nu \\ \amp + (\nu+1)c_1t^{\nu+1} - \nu^2 c_0 t^\nu - \nu^2 c_1 t^{\nu+1} \\ \amp + \sum_{n=2}^\infty c_n(n+\nu)(n+\nu-1) t^{n+\nu} + \sum_{n=2}^\infty c_n(n+\nu) t^{n+\nu} \\ \amp + \sum_{n=2}^\infty c_{n-2} t^{n+\nu} - \sum_{n=2}^\infty \nu^2 c_n t^{n+\nu} = 0\\ \amp \nu(\nu-1) c_0 t^\nu + (\nu+1) \nu c_1 t^{\nu+1} + \nu c_0 t^\nu \\ \amp + (\nu+1)c_1t^{\nu+1} - \nu^2 c_0 t^\nu - \nu^2 c_1 t^{\nu+1} \\ \amp + \sum_{n=2}^\infty \left[ c_n(n+\nu)(n+\nu-1) + c_n(n+\nu) + c_{n-2} - \nu^2 c_n \right] t^{n+\nu} = 0 \end{align*}
Look at the coefficient of \(t^{\nu}\text{.}\)
\begin{align*} \left( (\nu^2 - \nu) c_0 +\nu c_0 - \nu^2 c_0 \right) \amp = 0\\ \left( \nu^2 - \nu + \nu - \nu^2 \right) c_0 \amp = 0\\ 0 c_0 \amp = 0 \end{align*}
Therefore, \(c_0\) is free. Look at the coefficient of \(t^{\nu+1}\text{.}\)
\begin{align*} \left( \nu(\nu+1)c_1 + (\nu+1) c_1 - \nu^2 c_1 \right) \amp = 0\\ \left( \nu^2 + \nu + \nu + 1 - \nu^2 \right) c_1 \amp = 0\\ \left( 2 \nu + 1 \right) c_1 \amp =0 \end{align*}
So \(c_1 = 0\) unless \(\nu = \frac{-1}{2}\text{.}\) However, I assumed that \(\nu\) was nonnegative, so I conclude that \(c_1 = 0\text{.}\) (I’ll pay some special attention to \(\nu = \frac{-1}{2}\) when I look at negative \(\nu\text{,}\) and to half-integer values of \(\nu\) in general.)
Then I move on to the general recurrence relation, for \(n \geq 2\text{.}\)
\begin{align*} ((n+\nu)(n+\nu-1) + n + \nu - \nu^2) c_n + c_{n-2} \amp = 0\\ (n^2 + n\nu + n\nu + \nu^2 - n - \nu + n + \nu - \nu^2 ) c_n \amp = -c_{n-2}\\ (n^2 + 2n\nu) c_n \amp = -c_{n-2}\\ c_n \amp = \frac{-c_{n-2}}{n(n+2\nu)}\\ c_{n+2} \amp = \frac{-c_n}{(n+2)(n+2+2\nu)} \end{align*}
These are the equations that calculate coefficients, so I start calculating.
\begin{align*} c_0 \amp = c_0\\ (2\nu+1) c_1 \amp = 0 \implies c_1 = 0 \implies c_{2n+1} = 0 \ \forall n \in \NN\\ c_2 \amp = \frac{-c_0}{2(2+2\nu)}\\ c_4 \amp = \frac{c_0}{(2)(4)(2+2\nu)(4+2\nu)}\\ c_6 \amp = \frac{-c_0}{(2)(4)(6)(2+2\nu)(4+2\nu)(6+2\nu)}\\ c_8 \amp = \frac{c_0}{(2)(4)(6)(8)(2+2\nu)(4+2\nu)(6+2\nu) (8+2\nu)}\\ c_{2n} \amp = \frac{(-1)^n c_0}{2^{2n} n! (1+\nu)(2+\nu) \ldots (n+\nu)} \end{align*}
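To gain confidence in this closed form, here is a short Python sketch that runs the recurrence directly and compares the result against the product formula. The values \(c_0 = 1\) and \(\nu = 1.5\) are arbitrary test choices, not anything forced by the equation.

import math

nu = 1.5                                       # arbitrary test order
c = [1.0, 0.0]                                 # c_0 = 1 (free choice), c_1 = 0
for n in range(2, 13):
    c.append(-c[n - 2] / (n * (n + 2 * nu)))   # the recurrence relation

for n in range(7):
    prod = 1.0
    for k in range(1, n + 1):
        prod *= k + nu                         # (1+nu)(2+nu)...(n+nu)
    closed = (-1) ** n / (2 ** (2 * n) * math.factorial(n) * prod)
    print(n, c[2 * n], closed)                 # recurrence matches closed form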
The constant \(c_0\) is still undetermined. By convention, the choice for \(c_0\) is the following strange value.
\begin{equation*} c_0 = \frac{1}{2^\nu \Gamma(1+\nu)} \end{equation*}
The properties of the \(\Gamma\) function allow me to simplify the denominator of the \(c_n\text{.}\)
\begin{equation*} c_{2n} = \frac{(-1)^n}{2^\nu \Gamma(1+\nu)} \frac{1}{2^{2n}n! (\nu+1) \ldots (\nu+n)} = \frac{(-1)^n}{2^{2n+\nu} n! \Gamma(1+n+\nu)} \end{equation*}
With this simplification, I can write the final solution for positive \(\nu\text{.}\)

Definition 4.6.2.

The solution constructed above is written \(J_\nu\) and called the Bessel function of the first kind of order \(\nu\text{.}\)
\begin{equation*} J_{\nu}(t) = \sum_{n=0}^\infty \frac{(-1)^n}{n! \Gamma(1+n+\nu)} \left( \frac{t}{2} \right)^{2n+\nu} \end{equation*}
It converges on \((0, \infty)\text{.}\) It may not be defined at 0 or at negative numbers. In general, applications are only interested in the Bessel function on the domain \((0, \infty)\text{.}\) The Bessel functions of the first kind for \(\nu \in \ZZ\) are shown in Figure 4.6.3.
Figure 4.6.3. Integer-Order Bessel Functions of the First Kind
Figure 4.6.4. Integer-Order Bessel Functions of the Second Kind
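If SciPy is available, the truncated Frobenius series can be compared against the library’s implementation scipy.special.jv. This is only an illustrative sketch; the order \(\nu = 1\) and the number of terms are arbitrary choices.

import math
from scipy.special import gamma, jv

def J_series(nu, t, terms=30):
    # Partial sum of the Frobenius series for J_nu(t).
    return sum((-1) ** n / (math.factorial(n) * gamma(1 + n + nu))
               * (t / 2) ** (2 * n + nu) for n in range(terms))

for t in [0.5, 1.0, 5.0]:
    print(t, J_series(1.0, t), jv(1.0, t))   # partial sum matches SciPy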
I could redo the same work for the negative root \(r = -\nu\text{,}\) but it is very similar. I would get the equation \((1 - 2\nu) c_1 = 0\text{,}\) so as long as \(\nu \neq \frac{1}{2}\text{,}\) I would still conclude that \(c_1 = 0\text{,}\) hence all odd coefficients are \(0\text{.}\) In this way, I find the rest of the Bessel functions of the first kind.
\begin{equation*} J_{-\nu}(t) = \sum_{n=0}^\infty \frac{(-1)^n}{n! \Gamma(1+n-\nu)} \left( \frac{t}{2} \right)^{2n-\nu} \end{equation*}
One caution should be noted. If \(\nu = n \in \NN\text{,}\) this expression for \(J_{-n}\) gives undefined denominators (the \(\Gamma\) function at nonpositive integers), hence doesn’t define a series. In these cases, the method of Frobenius gives \(J_{-n} = (-1)^n J_{n}\text{.}\) This is not necessarily surprising: in the Method of Frobenius, if the roots of the indicial equation differ by an integer, there may not be two linearly independent solutions. If \(\nu \notin \ZZ\text{,}\) then there are two linearly independent solutions, as expected with the Method of Frobenius.

Definition 4.6.5.

Let \(\nu \notin \ZZ\text{.}\) The Bessel function of the second kind of order \(\nu\) is written \(Y_{\nu}\) and defined by the following expression.
\begin{equation*} Y_{\nu}(t) = \frac{\cos (\nu \pi) J_{\nu}(t) - J_{-\nu}(t)}{\sin (\nu \pi)} \end{equation*}
This definition is a linear combination of Bessel functions of the first kind. For \(\nu \in \ZZ\text{,}\) this expression breaks down: both the numerator and the denominator vanish, reflecting the fact that the Method of Frobenius only produced one solution. To get around this, for \(\nu \in \ZZ\text{,}\) the Bessel functions of the second kind are defined as a limit.
\begin{equation*} Y_{\nu}(t) = \lim_{\alpha \rightarrow \nu} Y_{\alpha}(t) \end{equation*}
L’Hopital’s rule shows that this limit exists. The Bessel functions of the second kind are shown in Figure 4.6.4.
To summarize, for any \(\nu \geq 0\) we have \(J_{\nu}\) and \(Y_{\nu}\text{,}\) two linearly independent solutions to the differential equation. The general solution is
\begin{equation*} y = A J_{\nu} + B Y_{\nu}\text{.} \end{equation*}
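As a numerical check that both kinds of Bessel function satisfy the differential equation, SciPy’s derivative helpers scipy.special.jvp and scipy.special.yvp can be used to evaluate the residual of Bessel’s equation; the order and sample points below are arbitrary.

import numpy as np
from scipy.special import jv, yv, jvp, yvp

nu = 2.5                          # arbitrary order
t = np.linspace(0.5, 10.0, 5)     # arbitrary sample points

# Residual of t^2 y'' + t y' + (t^2 - nu^2) y for y = J_nu and y = Y_nu.
res_J = t**2 * jvp(nu, t, 2) + t * jvp(nu, t, 1) + (t**2 - nu**2) * jv(nu, t)
res_Y = t**2 * yvp(nu, t, 2) + t * yvp(nu, t, 1) + (t**2 - nu**2) * yv(nu, t)
print(res_J, res_Y)               # both are zero to machine precision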

Subsection 4.6.3 The Aging Spring

Consider the spring equation from before.
\begin{equation*} my^{\prime \prime} + by^\prime + ky = 0 \end{equation*}
Now, instead of all constants, let’s assume this is an aging spring. That is, as time passes, the spring constant decreases. One model is exponential decay, so let \(k(t) = ke^{-\alpha t}\) for \(\alpha > 0\text{.}\) Let’s see what happens without friction (\(b = 0\)).
\begin{equation*} m y^{\prime \prime} + ke^{-\alpha t} y = 0 \end{equation*}
Here’s a strange change of variables.
\begin{align*} s \amp = \frac{2}{\alpha} \sqrt{\frac{k}{m}} e^{\frac{-\alpha t}{2}} \end{align*}
I want to know how to change the differential equation. This is pretty difficult to do; I have to go through some derivative acrobatics to make it work. First I differentiate the change of variables function \(s(t)\text{.}\)
\begin{align*} \frac{ds}{dt} \amp = \frac{2}{\alpha} \sqrt{\frac{k}{m}} \frac{-\alpha}{2} e^{-\frac{\alpha t}{2}} = - \sqrt{\frac{k}{m}} e^{-\frac{\alpha t}{2}}\\ \frac{d^2s}{dt^2} \amp = \frac{\alpha}{2} \sqrt{\frac{k}{m}} e^{-\frac{\alpha t}{2}} \end{align*}
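Derivative calculations like these are easy to get wrong, so here is a sketch that asks SymPy (assuming it is installed) to confirm them symbolically.

import sympy as sp

t, k, m, alpha = sp.symbols('t k m alpha', positive=True)
s = (2 / alpha) * sp.sqrt(k / m) * sp.exp(-alpha * t / 2)

print(sp.simplify(sp.diff(s, t)))     # -sqrt(k/m) * exp(-alpha*t/2)
print(sp.simplify(sp.diff(s, t, 2)))  # (alpha/2) * sqrt(k/m) * exp(-alpha*t/2)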
Now I want to find an expression for \(\frac{d^2 y}{dt^2}\) in terms of some new function \(x(s)\) instead of \(y(t)\text{.}\) The function I need is implicitly defined as follows.
\begin{equation*} x(s(t)) = y(t) \end{equation*}
This is implicit; \(x\) is the function that, when composed with the change of variables \(s(t)\text{,}\) gives the original function \(y\text{.}\) Since the change of variables is invertible, this function will always exist (assuming reasonable differentiability conditions on \(y\text{,}\) which are already required since it is the solution of a differential equation). There is a technical theorem in analysis called the Implicit Function Theorem that guarantees the existence of functions like this. In any case, I want to differentiate this new function. I need the chain rule to do so.
\begin{align*} \frac{dx}{dt} \amp = \frac{dx}{ds} \frac{ds}{dt} \end{align*}
Then I want to differentiate again. Here I need the product rule and the chain rule. After the derivatives, I bracket the result in a useful way.
\begin{align*} \frac{d^2 x}{dt^2} \amp = \frac{d}{dt} \left( \frac{dx}{ds} \frac{ds}{dt} \right)\\ \amp = \frac{d^2 s}{dt^2} \frac{dx}{ds} + \frac{ds}{dt} \frac{d}{dt} \frac{dx}{ds}\\ \amp = \frac{d^2 s}{dt^2} \frac{dx}{ds} + \frac{ds}{dt} \frac{d^2x}{ds^2} \frac{ds}{dt}\\ \amp = \frac{d^2 s}{dt^2} \frac{dx}{ds} + \left( \frac{ds}{dt} \right)^2 \frac{d^2 x}{ds^2} \end{align*}
I can calculate the derivatives of the change of variables \(s(t)\) and replace them in this expression.
\begin{align*} \frac{d^2 x}{dt^2} \amp = \frac{\alpha}{2} \sqrt{\frac{k}{m}} e^{-\frac{\alpha t}{2}} \frac{dx}{ds} + \frac{k}{m} e^{-\alpha t} \frac{d^2 x}{ds^2} \end{align*}
Now, the function \(x(s(t)) = y(t)\) is the same as the function \(y(t)\text{,}\) as a function of \(t\text{;}\) that was exactly how it was defined. Therefore, I can replace \(y\) and \(y^{\prime \prime}\) in the original equation with the \(t\) derivatives of \(x\text{.}\) Let me do so, and proceed to do some algebra with the result.
\begin{align*} m y^{\prime \prime} + ke^{-\alpha t} y \amp = 0\\ m \left(\frac{d^2 s}{dt^2} \frac{dx}{ds} + \left( \frac{ds}{dt} \right)^2 \frac{d^2 x}{ds^2} \right) + ke^{-\alpha t} x \amp = 0\\ \frac{m\alpha}{2} \sqrt{\frac{k}{m}} e^{-\frac{\alpha t}{2}} \frac{dx}{ds} + k e^{-\alpha t} \frac{d^2 x}{ds^2} + k e^{-\alpha t} x \amp = 0 \end{align*}
To write this in a nicer way, I multiply the equation by \(\frac{4}{m\alpha^2}\text{.}\)
\begin{align*} \frac{2}{\alpha} \sqrt{\frac{k}{m}} e^{-\frac{\alpha t}{2}} \frac{dx}{ds} + \frac{4}{\alpha^2} \frac{k}{m} e^{-\alpha t} \frac{d^2 x}{ds^2} + \frac{4k}{m\alpha^2} e^{-\alpha t} x \amp = 0 \end{align*}
Then the coefficient of the second derivative is precisely \(s(t)^2\) and the coefficient of the first derivative is precisely \(s(t)\text{.}\)
\begin{align*} s^2 \frac{d^2x}{ds^2} + s \frac{dx}{ds} + s^2 x \amp = 0 \end{align*}
I have now produced a new differential equation for \(x\) in the new variable \(s\text{.}\) If I can solve this equation, then by simply replacing \(s\) with the change of variables expression \(s(t)\text{,}\) I can recover \(y(t) = x(s(t))\text{.}\)
The differential equation I produced for \(x(s)\) just happens to be Bessel’s equation with \(\nu = 0\text{.}\) It is solved by the Bessel functions (of the first and second kind) of order \(\nu = 0\text{.}\) I can write those solutions and replace \(s\) with \(s(t)\) to find the original solution \(y\text{.}\)
\begin{align*} y \amp = A J_0(s) + B Y_0(s)\\ \amp = A J_0 \left( \frac{2}{\alpha} \sqrt{\frac{k}{m}} e^{-\frac{\alpha t}{2}} \right) + B Y_0 \left( \frac{2}{\alpha} \sqrt{\frac{k}{m}} e^{-\frac{\alpha t}{2}} \right) \end{align*}
These functions completely describe the behaviour of an aging spring with exponential decay of the spring constant.
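Here is a numerical sketch of that claim. The constants are arbitrary illustrative values (and I take the particular solution with \(A = 1\text{,}\) \(B = 0\)); a centered finite difference stands in for the second derivative, so the residual is small rather than exactly zero.

import numpy as np
from scipy.special import jv

m_, k_, alpha_ = 1.0, 4.0, 0.3        # arbitrary illustrative constants

def s(t):                             # the change of variables s(t)
    return (2 / alpha_) * np.sqrt(k_ / m_) * np.exp(-alpha_ * t / 2)

def y(t):                             # the solution with A = 1, B = 0
    return jv(0, s(t))

t = np.linspace(0.0, 10.0, 201)
h = 1e-4
ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2   # centered second difference
print(np.max(np.abs(m_ * ypp + k_ * np.exp(-alpha_ * t) * y(t))))  # near zero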

Subsection 4.6.4 Properties of the Bessel Functions

The Bessel functions have some nice symmetry properties for integer orders. Let \(m \in \ZZ\text{.}\)
\begin{align*} J_{-m}(t) \amp = (-1)^m J_m(t)\\ J_m(-t) \amp = (-1)^m J_m(t) \end{align*}
I mentioned a functional equation for the Legendre polynomials in Section 4.3. I have one here as well; this one is true for arbitrary \(\nu\text{.}\)
\begin{equation*} tJ_{\nu}^\prime(t) = \nu J_{\nu}(t) - t J_{\nu+1}(t) \end{equation*}
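This functional equation is easy to test numerically (scipy.special.jvp gives the derivative \(J_\nu^\prime\)); the order and sample points below are arbitrary.

import numpy as np
from scipy.special import jv, jvp

nu = 0.7                           # arbitrary order
t = np.linspace(0.5, 8.0, 5)       # arbitrary sample points

lhs = t * jvp(nu, t, 1)
rhs = nu * jv(nu, t) - t * jv(nu + 1, t)
print(np.max(np.abs(lhs - rhs)))   # zero to machine precision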
There are some interesting properties of Bessel functions with half-integer orders. Look closely again at \(J_{\frac{1}{2}}\text{.}\)
\begin{align*} J_{\frac{1}{2}}(t) \amp = \sum_{n=0}^\infty \frac{(-1)^n}{n! \, \Gamma(\frac{3}{2} + n)} \left( \frac{t}{2} \right)^{2n + \frac{1}{2}} \end{align*}
I know the values of \(\Gamma\) at half-integers from above; I can substitute them into the series. (Note that the \(n!\) in the series denominator cancels the \(n!\) in the expression for \(\Gamma\text{.}\))
\begin{align*} \Gamma \left( \frac{3}{2} + n \right) \amp = \frac{(2n+1)!}{2^{2n+1} n!} \sqrt{\pi}\\ J_{\frac{1}{2}}(t) \amp = \sum_{n=0}^\infty \frac{(-1)^n} {\frac{(2n+1)!}{2^{2n+1}} \sqrt{\pi}} \left( \frac{t}{2} \right)^{2n + \frac{1}{2}} \end{align*}
Then let me cancel some terms and do some simplifications.
\begin{align*} \amp = \sqrt{\frac{2}{\pi t}} \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!} t^{2n+1}\\ \amp = \sqrt{\frac{2}{\pi t}} \sin t \end{align*}
I could do the same for \(\nu = \frac{-1}{2}\text{,}\) with a similar result.
\begin{align*} J_{-\frac{1}{2}} \amp = \sqrt{\frac{2}{\pi t}} \cos t \end{align*}
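Both closed forms are easy to confirm numerically; here is a quick sketch with SciPy.

import numpy as np
from scipy.special import jv

t = np.linspace(0.5, 10.0, 5)
print(np.max(np.abs(jv(0.5, t) - np.sqrt(2 / (np.pi * t)) * np.sin(t))))
print(np.max(np.abs(jv(-0.5, t) - np.sqrt(2 / (np.pi * t)) * np.cos(t))))
# both differences are zero to machine precision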
These special half-integer Bessel functions can actually be expressed as elementary functions. The decay of the amplitude of these waves is \(\frac{1}{\sqrt{t}}\text{,}\) not \(e^{-t}\text{;}\) this decay is much slower than exponential decay. The half-integer Bessel functions are called spherical Bessel functions. The reason for that name is that they solve the wave equation in spherical coordinates. The time-independent wave equation in \(\RR^3\) is \(\nabla^2 A + k^2 A = 0\text{,}\) for \(\nabla^2\) the Laplacian. (This is a construction from multivariable calculus; feel free to disregard this notation if you haven’t taken Calculus III or Calculus IV.) Changing to spherical coordinates and restricting to the radial terms gives Bessel’s equation with half-integer order.
I can think of these half-integer Bessel functions as the standing waves of spherical harmonic systems. \(J_{\frac{1}{2}}\) is the first harmonic, then there are higher harmonics as well. The typical image of these spherical waves is the decaying-amplitude ripples radiating from a stone dropped in a pond, or the standing wave of a drum head resonating at a certain pitch.