
Course Notes for Calculus I

Section 10.2 Optimization

Subsection 10.2.1 Extreme Values of Models

Now that I understand how to find extrema using derivatives, I can use this technique to solve optimization problems. An optimization problem is any problem in applied mathematics where the goal is the optimal value of a function, expressed as either a minimum or a maximum. The method for finding extrema is unchanged; most of the challenge in optimization problems is translating the problem into an appropriate function so that we can use derivatives.

Example 10.2.1.

A very classic (if somewhat artificial and contrived) example of an optimization problem is maximizing the area of a rectangle with fixed perimeter. Let’s say that \(P\) is the fixed perimeter and the rectangle has height \(a\) and length \(b\text{,}\) as in Figure 10.2.2.
Figure 10.2.2. A Fixed-Perimeter Rectangle
I want to maximize area, so I will eventually be differentiating an area function. However, the area function is \(A = ab\text{,}\) which has two variables. I need to use the perimeter restriction to eliminate one of the variables. I know \(P = 2a + 2b\) so \(a = \frac{P}{2} - b\text{.}\) If I substitute for \(a\) in the area function, I get a single variable area function \(A(b)\text{.}\)
\begin{equation*} A(b) = b \left( \frac{P}{2} - b \right) \end{equation*}
Then I can optimize. The derivative is \(A^\prime(b) = \frac{P}{2} - 2b\text{,}\) which vanishes when \(b = \frac{P}{4}\text{.}\) Since \(A^{\prime\prime}(b) = -2 \lt 0\text{,}\) this critical point is a maximum. Substituting back into the perimeter restriction gives \(a = \frac{P}{2} - \frac{P}{4} = \frac{P}{4}\) as well. Unsurprisingly, the result shows that a square (where both \(a\) and \(b\) are exactly one quarter of the perimeter) maximizes area.
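The result can be checked numerically. The sketch below uses a hypothetical perimeter of \(P = 20\text{,}\) chosen only for illustration, and compares the area at the critical point \(b = \frac{P}{4}\) against nearby side lengths.

```python
# Numerical check of the fixed-perimeter rectangle result.
# Hypothetical perimeter value chosen for illustration: P = 20.

P = 20.0

def area(b):
    """Area A(b) = b * (P/2 - b) of a rectangle with perimeter P and side b."""
    return b * (P / 2 - b)

# Critical point from A'(b) = P/2 - 2b = 0:
b_star = P / 4

# Compare the area at the critical point against nearby side lengths;
# the square (a = b = P/4) should win.
for b in [b_star - 1, b_star, b_star + 1]:
    print(f"b = {b:.2f}, A(b) = {area(b):.2f}")
```

The critical point \(b = 5\) gives area \(25\text{,}\) beating the nearby rectangles with sides \(4\) and \(6\text{,}\) which each give area \(24\text{.}\)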

Subsection 10.2.2 Optimized Distances

Figure 10.2.3. Distance Between Points
To hopefully get to some less contrived examples, I want to use optimized distances as examples. In this section, I’ll be asking for the minimum distance between some fixed point and some locus in the plane. First, let me review distances between points in the plane. If I have points with coordinates \((a,b)\) and \((c,d)\text{,}\) the distance between them is given by the Pythagorean theorem: \(\sqrt{(c-a)^2 + (d-b)^2}\text{.}\) This is illustrated in Figure 10.2.3.
The distance function is a well-defined function which we can use to do optimization problems. That said, the square root is difficult to work with, particularly if I need to differentiate. For distance problems, I am going to make use of a very convenient trick. If two points are at an optimized distance (closest or most distant in some situation), then the square of the distance will also be smallest or largest. Obviously, the square of the distance is a different number, but because squaring is an increasing function on non-negative numbers, it is optimized at the same place that the distance is optimized. For this reason, I will optimize the square of the distance: \((c-a)^2 + (d-b)^2\text{.}\) Having removed the square root, this is a much easier function to work with.
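The claim that distance and distance squared are minimized at the same point can be checked numerically. The sketch below uses a hypothetical setup, the distance from points on \(y = x^2\) to \((0,1)\text{,}\) and confirms that scanning a grid of \(x\) values finds the same minimizer for both functions.

```python
import math

# Sketch checking that distance and distance-squared are minimized at the
# same point. Hypothetical setup: distance from (x, x^2) on y = x^2 to (0, 1).

def dist_sq(x):
    """Square of the distance from (x, x^2) to (0, 1)."""
    return x ** 2 + (x ** 2 - 1) ** 2

def dist(x):
    """Distance from (x, x^2) to (0, 1)."""
    return math.sqrt(dist_sq(x))

# Scan a grid of x values and find where each function is smallest.
xs = [i / 1000 for i in range(-2000, 2001)]
x_min_sq = min(xs, key=dist_sq)
x_min = min(xs, key=dist)

print(x_min_sq == x_min)  # both minimizers agree
```

Because the square root preserves the ordering of non-negative values, the two functions rank every grid point identically, so the minimizers must coincide.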
The setup of these distance problems is this: I have a fixed point \((a,b)\) and a locus in the variables \(x\) and \(y\text{.}\) The distance squared function from a point on the locus to the fixed point is
\begin{equation*} d(x,y) = (x-a)^2 + (y-b)^2 \end{equation*}
However, this is a function of two variables, which I don’t know how to work with. How do I solve this? I use the locus. I solve for one of the two variables (whichever seems more convenient) and then replace that variable in the distance squared function. That will give me a function only in \(x\) or \(y\text{.}\) Then I will proceed to optimize that function, which will give me either the \(x\) or \(y\) coordinate of the closest/farthest point. Finally, I use the locus to find the other coordinate of these points. I can illustrate this in an example.
Figure 10.2.4. A Distance Optimization

Example 10.2.5.

Let me ask which point on the parabola \(y = \frac{x^2}{4}\) is closest to the point \((4,2)\text{.}\) The distance squared from a point \((x,y)\) on the parabola to \((4,2)\) is given by the distance squared function.
\begin{equation*} d = (4-x)^2 + (2-y)^2 \end{equation*}
I use the locus \(y = \frac{x^2}{4}\) to replace \(y\) with \(\frac{x^2}{4}\text{,}\) so that I have only one variable.
\begin{align*} d(x) \amp = (4-x)^2 + \left(2-\frac{x^2}{4} \right)^2\\ d(x) \amp = 16 - 8x + x^2 + 4 - x^2 + \frac{x^4}{16} = 20 - 8x + \frac{x^4}{16} \end{align*}
I then differentiate to find the critical points.
\begin{align*} d^\prime(x) \amp = -8 + \frac{x^3}{4}\\ d^\prime(x) \amp = 0 \implies \frac{x^3}{4} = 8 \implies x^3 = 32 \implies x = \sqrt[3]{32} \end{align*}
Once I have the critical point, I break up the domain (which is all \(\RR\) here, since any \(x\) value is possible) and look at the intervals.
\begin{align*} \amp \left(-\infty, \sqrt[3]{32} \right) \amp \amp \left( \sqrt[3]{32}, \infty \right) \\ \amp d^\prime(0) = -8 \amp \amp d^\prime(4) = 8\\ \amp d^\prime(x) \lt 0 \amp \amp d^\prime(x) \gt 0 \\ \amp \text{decreasing} \amp \amp \text{increasing} \end{align*}
I conclude that there is a minimum at \(x=\sqrt[3]{32}\text{.}\) Since \(y = \frac{x^2}{4}\text{,}\) the corresponding \(y\) value is \(\frac{(\sqrt[3]{32})^2}{4}\text{.}\) The closest point on the parabola to \((4,2)\) is \(P = \left( \sqrt[3]{32}, \frac{(\sqrt[3]{32})^2}{4} \right)\text{.}\) Figure 10.2.4 shows the outcome of the optimization.
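This conclusion can also be checked numerically: the sketch below confirms that the derivative vanishes at \(x = \sqrt[3]{32}\) and that the distance squared there beats nearby points on the parabola.

```python
# Numerical check of the parabola example: the point on y = x^2/4
# closest to (4, 2).

def d(x):
    """Distance squared from (x, x^2/4) to (4, 2)."""
    return (4 - x) ** 2 + (2 - x ** 2 / 4) ** 2

x_star = 32 ** (1 / 3)  # the critical point found above

# d'(x) = -8 + x^3/4 should vanish at the critical point (up to rounding).
print(-8 + x_star ** 3 / 4)

# The distance squared at the critical point should beat nearby x values.
for x in [x_star - 0.5, x_star, x_star + 0.5]:
    print(f"x = {x:.4f}, d(x) = {d(x):.4f}")
```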