Before moving on to other techniques for solving first order equations, this is a nice place to take a pure-mathematical detour and talk about existence and uniqueness of solutions. This is very valuable material, even for the purposes of applied mathematics: to be able to analyze a differential equation and determine whether a solution exists, and whether it is unique, is necessary for robust modelling with DEs. I’m going to deal only with first order equations where I can isolate the derivative term; that is, equations of the following form.
\begin{equation*}
\frac{dy}{dx} = F(x,y)
\end{equation*}
The details of existence and uniqueness theorems rely on the properties of \(F\) as a function of two variables, which are very briefly summarized in Section 1.1.
Subsection 2.5.1 Existence
Existence of solutions to first order DEs is established by the Peano Existence Theorem.
If \(F\) is continuous in both variables in an open set \(U \subset \RR^2\) and if \((x_0,y_0) \in U\text{,}\) then there exists \(\epsilon > 0\) such that the initial value problem associated to the DE and the initial condition \(y(x_0) = y_0\) has a solution with domain at least \([x_0-\epsilon,x_0+\epsilon]\text{.}\)
This is a very local result: the small positive \(\epsilon\) only guarantees a tiny piece of a function as a solution. Existence (and, later, uniqueness) is only guaranteed very close to the initial value of the function. I do know that the function is differentiable on this small interval, but I don’t know anything else: outside the interval, anything could happen with the solution.
It’s also useful to note, particularly for students with experience in other senior mathematics classes, that this theorem relies on topological considerations: I need an open subset \(U\) where \(F\) is continuous. I’m not going to get into topology in this course; I’m not even going to define an open set in \(\RR^2\text{.}\) I just thought it was worth pointing out the topological considerations for those who have seen these definitions elsewhere.
Consider the equation \(\frac{dy}{dx} = y^{\frac{2}{3}}\) with initial condition \(y(0) = 0\text{.}\) This DE satisfies the conditions of the theorem, so a solution exists. However, there are two solutions: \(y=0\) and \(y=\frac{x^3}{27}\text{.}\) Peano’s theorem ensures that a solution exists, but doesn’t imply the uniqueness of that solution.
For a second example, consider \(\frac{dy}{dx} = \sqrt{y}\) with initial condition \(y(0) = 0\text{.}\) This \(F\) is also continuous near \((0,0)\text{,}\) so a solution exists by Peano’s theorem. This IVP is solved by \(y=0\) and by \(y = \frac{x^2}{4}\) for \(x \geq 0\text{.}\) It’s obvious that something else is needed to ensure uniqueness.
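The non-uniqueness in the first of these examples can be checked directly. Here is a minimal sketch, assuming the equation is \(\frac{dy}{dx} = y^{\frac{2}{3}}\text{,}\) which both listed solutions satisfy: the derivative of \(y = \frac{x^3}{27}\) is \(\frac{x^2}{9}\text{,}\) and so is \(y^{\frac{2}{3}}\text{.}\)

```python
def y(x):
    return x**3 / 27              # candidate nonzero solution

def F(y_val):
    return y_val ** (2 / 3)       # right-hand side: dy/dx = y^(2/3), for x >= 0

assert y(0) == 0                  # the initial condition y(0) = 0 holds
for x in [0.5, 1.0, 2.0, 3.0]:
    dydx = x**2 / 9               # exact derivative of x^3/27
    assert abs(dydx - F(y(x))) < 1e-12
# y = 0 also satisfies the same IVP, so the solution is not unique
```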
Before I move on to the next result, I could wonder about the proof of the Peano Existence Theorem. Unfortunately, that proof doesn’t fall within the scope of this course. I would need to establish a number of new definitions and techniques from real analysis, as well as struggle through a bunch of tricky \(\epsilon\) and \(\delta\) arguments. It’s interesting material, but not for this course.
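That said, the central device of the proof, the Euler polygon, is easy to sketch numerically: build a piecewise-linear function whose slopes follow \(F\text{,}\) then refine the steps. Here is a minimal illustration; the example \(\frac{dy}{dx} = y\text{,}\) \(y(0) = 1\text{,}\) with exact solution \(e^x\text{,}\) is my own choice, not one from the text.

```python
import math

# Euler's method: build a polygonal approximation to the IVP solution on a
# small interval, in the spirit of the Euler polygons behind Peano's proof.
def euler(F, x0, y0, x_end, n):
    """Approximate y(x_end) for dy/dx = F(x, y), y(x0) = y0, using n steps."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * F(x, y)   # follow the slope prescribed by the DE
        x += h
    return y

# Example (my choice): dy/dx = y, y(0) = 1, exact solution e^x.
approx = euler(lambda x, y: y, 0.0, 1.0, 0.5, 10000)
print(abs(approx - math.exp(0.5)))  # shrinks as the number of steps grows
```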
Subsection 2.5.2 Lipschitz Continuity
In order to state the theorem about uniqueness, I need a new definition of continuity.
Definition 2.5.4.
A function \(f\) from an open interval \(U\) in \(\RR\) to \(\RR\) is called Lipschitz continuous if \(\exists K > 0\) such that
\begin{equation*}
|f(x_1) - f(x_2)| \leq K |x_1 - x_2| \text{ for all } x_1, x_2 \in U\text{.}
\end{equation*}
This is a strange kind of continuity. The definition is stronger than normal: Lipschitz continuity implies normal continuity. Moreover, the definition is global over \(U\text{,}\) not just local like conventional continuity. Therefore, the domain matters: a function might be Lipschitz continuous on a small interval, but not on a larger one.
As an interpretation, Lipschitz continuity is a control on the growth of a function over a specific interval. For a Lipschitz continuous function, there is a linear function that bounds the function on the designated interval. Unsurprisingly, this means that the definition usually only works on bounded sets. For a more visual interpretation, the definition compares \(f\) to the function \(g(x) = K|x|\text{.}\) The graph of \(g\) gives a cone in \(\RR^2\text{,}\) and \(f\) must stay inside this cone (with the vertex shifted to any point on the graph of \(f\)) over the interval to be Lipschitz continuous. In this way, the definition limits how steep \(f\) can be, and hence bounds its derivative wherever that derivative exists.
Example 2.5.5.
Here are some examples to illustrate the idea of Lipschitz continuity.
\(f(x) = x\) is Lipschitz continuous with \(K=1\) on any interval, since \(|f(x_1) - f(x_2)| = |x_1 - x_2|\text{.}\)
\(f(x) = x^2\) is Lipschitz continuous on any bounded interval. Specifically, on \((-7,7)\) I can take \(K=14\text{,}\) since \(|x_1^2 - x_2^2| = |x_1 + x_2| |x_1 - x_2| \leq 14 |x_1 - x_2|\) on this interval. However, \(f(x) = x^2\) is not Lipschitz continuous on all of \(\RR\text{,}\) since its difference quotients \(|x_1 + x_2|\) grow without bound.
\(f(x) = x^{\frac{2}{3}}\) is not Lipschitz continuous on \((-1,1)\text{,}\) since its slope gets arbitrarily steep near the origin. Very close to \(0\text{,}\) the difference quotients \(\frac{|f(x) - f(0)|}{|x - 0|} = |x|^{-\frac{1}{3}}\) exceed any fixed \(K\text{.}\)
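These claims can be probed numerically by sampling the difference quotients from the definition: any valid \(K\) must bound all of them. A rough sketch follows; sampling on a grid is evidence, not a proof.

```python
def max_quotient(f, a, b, n=200):
    """Largest sampled difference quotient |f(x1)-f(x2)| / |x1-x2| on [a, b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    fs = [f(x) for x in xs]
    return max(abs(fs[j] - fs[i]) / (xs[j] - xs[i])
               for i in range(n + 1) for j in range(i + 1, n + 1))

# f(x) = x: every quotient equals 1, matching K = 1.
assert abs(max_quotient(lambda x: x, -3, 3) - 1.0) < 1e-9

# f(x) = x^2 on (-7, 7): quotients |x1 + x2| stay below 14.
assert max_quotient(lambda x: x * x, -7, 7) <= 14

# f(x) = x^(2/3) on (-1, 1): quotients keep growing as the grid refines,
# so no single K works.
g = lambda x: abs(x) ** (2 / 3)
assert max_quotient(g, -1, 1, 200) < max_quotient(g, -1, 1, 800)
```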
The example \(f(x) = x^{\frac{2}{3}}\) was not Lipschitz continuous near \(0\text{,}\) and it also failed to be differentiable at that point. I might wonder whether differentiability is a sufficient condition for Lipschitz continuity. That would be convenient, since I know how to check for differentiability. However, consider a strange example.
This isn’t Lipschitz continuous, but it is differentiable near zero, showing that differentiability isn’t sufficient. However, there is some good news: this example is a strange aberration.
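A standard function with exactly this behaviour (my stand-in here, which may differ from the one in Example 2.5.6) is \(f(x) = x^2 \sin(1/x^2)\) with \(f(0) = 0\text{:}\) it is differentiable everywhere, yet its derivative is unbounded near \(0\text{,}\) so it is not Lipschitz continuous on any interval around \(0\text{.}\)

```python
import math

def f(x):
    # differentiable everywhere, including at 0, but not Lipschitz near 0
    return 0.0 if x == 0 else x * x * math.sin(1.0 / (x * x))

def fprime(x):
    # derivative for x != 0; the (2/x) term is unbounded near 0
    return 2 * x * math.sin(1 / x**2) - (2 / x) * math.cos(1 / x**2)

# Differentiability at 0: |f(h)/h| <= |h| -> 0, so f'(0) = 0 exists.
for h in [1e-2, 1e-4, 1e-6]:
    assert abs(f(h) / h) <= abs(h)

# Unbounded slopes: at x_k = 1/sqrt(2*pi*k), |f'(x_k)| is about 2*sqrt(2*pi*k).
slopes = [abs(fprime(1 / math.sqrt(2 * math.pi * k))) for k in (1, 100, 10000)]
assert slopes[0] < slopes[1] < slopes[2]   # arbitrarily steep near 0
```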
Proposition 2.5.7.
A function which is \(C^1\) at a point \(a\) in its domain is also Lipschitz continuous on a small interval containing \(a\text{.}\)
Saying \(f \in C^1\) near a point means that the derivative of \(f\) exists and is continuous there, which in particular forces the derivative to be bounded on a small interval around the point. This last criterion is the one I will use in practice: to check Lipschitz continuity, I check whether the derivative exists and is bounded.
Consider \(f(x) = x^2\text{.}\) The derivative is \(2x\text{,}\) which is bounded locally near \(0\text{.}\) In contrast, \(f(x) = x^{\frac{2}{3}}\) has derivative \(\frac{2}{3} x^{-\frac{1}{3}}\text{,}\) which is unbounded near \(0\text{.}\) The former function is Lipschitz continuous on finite intervals centred at the origin and the latter is not. Likewise, the problem with Example 2.5.6 is that the derivative isn’t bounded.
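This practical criterion, bound the derivative to get a Lipschitz constant, can also be sketched numerically. The sketch below samples \(|f'|\) on a grid; it is a heuristic check, not a proof.

```python
def derivative_bound(fprime, a, b, n=1001):
    # Sample |f'| at interior grid points; a finite bound on the derivative
    # is a Lipschitz constant for f on [a, b] by the mean value theorem.
    return max(abs(fprime(a + (b - a) * i / n)) for i in range(1, n))

# f(x) = x^2: f'(x) = 2x stays below 2 on (-1, 1), so f is Lipschitz there.
assert derivative_bound(lambda x: 2 * x, -1, 1) <= 2.0

# f(x) = x^(2/3): |f'(x)| = (2/3)|x|^(-1/3) keeps growing as samples
# approach 0, so no finite bound exists and the test fails.
g = lambda x: (2 / 3) * abs(x) ** (-1 / 3)
assert derivative_bound(g, -1, 1) < derivative_bound(g, -1, 1, 100001)
```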
Subsection 2.5.3 Uniqueness
The theorem for uniqueness is called the Picard-Lindelöf theorem. It is a small improvement and adjustment of the Peano Existence Theorem.
Theorem 2.5.8.
If \(F\) is Lipschitz continuous in \(y\) and continuous (in the ordinary sense) in \(x\) on an open set \(U\) in \(\RR^2\text{,}\) and if \((x_0,y_0) \in U\text{,}\) then the initial value problem
\begin{equation*}
\frac{dy}{dx} = F(x,y) \qquad y(x_0) = y_0
\end{equation*}
has a unique solution \(y=f(x)\) defined on the domain \([x_0-\epsilon,x_0+\epsilon]\) for some \(\epsilon > 0\text{.}\)
All of the comments from the previous theorem apply here. The result is very local, relies on topology, and the proof is beyond the scope of the course. Before moving to examples, let me make an important point about the direction of implication in this theorem. The theorem says that the conditions (continuity in \(x\) and Lipschitz continuity in \(y\)) guarantee a unique solution. This is only a one-directional implication: a DE can have a unique solution even when \(F\) is not Lipschitz continuous in \(y\text{.}\) In particular, the theorem cannot be used to prove that multiple solutions exist by showing the failure of Lipschitz continuity, since it doesn’t include that reverse implication.
Example 2.5.9.
\begin{equation*}
\frac{dy}{dx} = x \sqrt{y}
\end{equation*}
Let me consider an example similar to those above. With the initial value \(y(0) = 0\text{,}\) this DE has multiple solutions: \(y = 0\) and \(y = \frac{x^4}{16}\text{.}\) The function \(F(x,y) = x \sqrt{y}\) has \(y\) derivative \(\frac{\del F}{\del y} = \frac{x}{2\sqrt{y}}\text{,}\) which is unbounded near \(y = 0\text{.}\) Therefore, \(F\) is not Lipschitz continuous in \(y\) and doesn’t satisfy the conditions of the Picard-Lindelöf theorem, meaning that I can’t be assured of a unique solution. (Again, the failure of Lipschitz continuity does not guarantee multiple solutions; it just allows for the possibility.)
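To see the non-uniqueness concretely: separating variables gives \(\frac{dy}{\sqrt{y}} = x \, dx\text{,}\) so \(2\sqrt{y} = \frac{x^2}{2}\) and \(y = \frac{x^4}{16}\text{,}\) alongside the constant solution \(y = 0\text{.}\) A quick numerical check that the nonzero candidate really satisfies the IVP:

```python
import math

def y(x):
    return x**4 / 16                 # candidate nonzero solution

def F(x, y_val):
    return x * math.sqrt(y_val)      # right-hand side of dy/dx = x*sqrt(y)

assert y(0.0) == 0.0                 # initial condition y(0) = 0 holds
for x in [0.0, 0.5, 1.0, 2.0]:
    dydx = x**3 / 4                  # exact derivative of x^4/16
    assert abs(dydx - F(x, y(x))) < 1e-12
# y = 0 is a second solution: both sides of the DE vanish identically
```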
Lastly, anticipating the equations in Section 2.6, consider the general first-order linear DE where \(P\) and \(Q\) are continuous functions.
\begin{equation*}
\frac{dy}{dx} = Q(x) - P(x) y
\end{equation*}
In this case, \(F(x,y) = Q(x) - P(x)y\) and the \(y\) derivative is \(-P(x)\text{,}\) which is bounded on closed bounded intervals as long as \(P\) is continuous. I can apply the Picard-Lindelöf theorem to conclude that linear equations have unique solutions whenever their coefficient functions \(P\) and \(Q\) are continuous. I can go into the next section of the course with assurance that unique solutions will exist. This is yet another way in which linear equations are the easier, more accessible part of DEs.
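The proof of the Picard-Lindelöf theorem constructs the unique solution by successive approximation (Picard iteration): \(y_{n+1}(x) = y_0 + \int_{x_0}^{x} F(t, y_n(t)) \, dt\text{.}\) Here is a minimal numerical sketch, using the linear example \(P(x) = -1\text{,}\) \(Q(x) = 0\) (my choice), that is \(\frac{dy}{dx} = y\) with \(y(0) = 1\text{,}\) whose unique solution is \(e^x\text{.}\)

```python
import math

# Picard iteration y_{n+1}(x) = y0 + integral_0^x F(t, y_n(t)) dt,
# computed on a grid with the trapezoid rule (assumes x0 = 0).
def picard(F, y0, x_end, iters, n=1000):
    xs = [x_end * i / n for i in range(n + 1)]
    ys = [y0] * (n + 1)                  # y_0: the constant initial guess
    for _ in range(iters):
        vals = [F(x, y) for x, y in zip(xs, ys)]
        new = [y0]
        acc = 0.0
        for i in range(n):               # running trapezoid-rule integral
            acc += (vals[i] + vals[i + 1]) * (xs[i + 1] - xs[i]) / 2
            new.append(y0 + acc)
        ys = new
    return ys[-1]

approx = picard(lambda x, y: y, 1.0, 1.0, iters=25)
print(abs(approx - math.e))              # iterates approach e^x at x = 1
```

For this example each iterate is a partial sum of the exponential series, so the convergence is visible very quickly; the theorem guarantees the same contraction behaviour for any \(F\) satisfying its hypotheses.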