
Chapter 3 Long-term and singular behaviour

Section 3.1 Long-term behaviour

As we will explore further in subsequent chapters, we often create mathematical functions to model physical systems and patterns. We use models to predict the behaviour of a system, and one of the most natural questions to try to answer using a model is “What will happen a long time from now?” For some models, the answer might be that the system’s output approaches some “steady state” value. For other models, the answer might be that the system’s output grows without bound. In this section, we develop a process for detecting these patterns.

Subsection 3.1.1 Definition and first examples

Definition 3.1.1. Long-term behaviour.
Suppose \(f(t)\) is a function whose domain contains an infinite subdomain \(t \gt t_0\text{.}\)
  1. Unbounded growth.
    We write
    \begin{equation*} f(t) \to \infty \qquad \text{as} \qquad t \to \infty \end{equation*}
    to mean that output values of \(f(t)\) eventually become and stay positive and arbitrarily large in magnitude.
  2. Unbounded decay.
    We write
    \begin{equation*} f(t) \to - \infty \qquad \text{as} \qquad t \to \infty \end{equation*}
    to mean that output values of \(f(t)\) eventually become and stay negative and arbitrarily large in magnitude.
  3. Approaching steady-state.
    For some steady-state output value \(L\text{,}\) we write
    \begin{equation*} f(t) \to L \qquad \text{as} \qquad t \to \infty \end{equation*}
    to mean that output values of \(f(t)\) eventually become and stay arbitrarily close to \(L\text{.}\)
Warning 3.1.2. Infinity is not a number.
There is a reason we write \(f(t) \to \infty\) instead of \(f(t) = \infty\text{.}\) Infinity is not a number, so it is not permitted as an output of a function. Instead, we use the “idea” of infinity to indicate unbounded behaviour.
There are words and phrases in Definition 3.1.1 that have special, technical meanings in mathematics, and it’s important to understand their technical meanings in order to be able to properly verify these definitions in more complex examples.
For the case of \(f(t) \to \infty\text{,}\) arbitrarily large means the function’s output values are able to grow larger than every proposed upper limit level, no matter how large, and stay above that level forever more. In this case, eventually means that for a particular proposed upper limit level, we may have to go very far out into the domain to achieve outputs that grow and stay larger than the proposed limit, and how far out we need to go to observe this behaviour will vary depending on the size of the proposed upper limit level that we need to beat. For the case of \(f(t) \to -\infty\text{,}\) we want to demonstrate that the function’s output values will eventually fall below every proposed lower limit level, and stay below that level forever more.
Arbitrarily close has a similar meaning as in the definition of Continuous at a point — it means that we are always able to restrict the function’s output values to within a small range around \(L\text{,}\) no matter how small, so that the function’s output values move and then stay within that small range forever more. Here, eventually has a similar meaning as before, where we may need to go very far out into the domain to achieve outputs that move and stay within the proposed restricted range, and how far out we need to go to observe this behaviour will vary depending on the size of the range.
Warning 3.1.3.
You should not take
\begin{equation*} f(t) \to L \qquad \text{as} \qquad t \to \infty \end{equation*}
to mean “eventually \(f(t) = L\text{,}\)” nor should you take it to mean “\(f(t)\) becomes close to \(L\) but never actually equals \(L\text{.}\)” Either situation could happen (though obviously not both simultaneously for a particular function), but neither needs to happen. For example, in Chapter 10 we will see examples of functions that oscillate between values above and below a certain level \(L\text{,}\) crossing that level as they move from one side to the other, while still “narrowing down,” becoming closer and closer to \(L\text{.}\)
Example 3.1.4. Unbounded growth of the square function.
For function \(f(t) = t^2\text{,}\) we have \(f(t) \to \infty\) as \(t \to \infty\text{.}\) By this we mean that no upper bound can be put on the growth of this function. For example, the output values eventually get above \(100\text{,}\) as they stay above that level for all \(t \gt 10\text{.}\) And the output values also eventually get above \(100\) trillion, as they stay above that level for all \(t \gt {10}^7\text{.}\) And if \(M\) represents the largest number you can think of plus one, then the output values also grow above that level, staying above it for all \(t \gt \sqrt{M}\text{.}\)
Figure 3.1.5. The function \(f(t) = t^2\) grows beyond any potential ceiling.
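As a quick numerical illustration of the thresholds used above, here is a minimal Python sketch (ours; the ceilings and sample points are arbitrary choices) that checks \(t^2\) stays above a proposed ceiling \(M\) at sampled inputs beyond the cutoff \(t \gt \sqrt{M}\text{:}\)

import math

def f(t):
    return t**2

# Proposed ceilings M, including the two used above (100 and 100 trillion).
for M in [100, 1e14, 1e30]:
    cutoff = math.sqrt(M)                       # the cutoff t > sqrt(M) from the example
    samples = [cutoff * k for k in (1.01, 2, 10, 1000)]
    print(M, all(f(t) > M for t in samples))    # expect True for every ceiling M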
Example 3.1.6. Approaching steady-state.
Consider function
\begin{equation*} f(t) = 2 + \frac{1}{t^2} \text{.} \end{equation*}
Large values of \(t\) make the fraction part of the formula very small, and the larger \(t\) gets, the smaller that fraction gets. So we might guess that
\begin{equation*} f(t) \to 2 \qquad \text{as} \qquad t \to \infty\text{.} \end{equation*}
To verify this, we should consider different ranges around that proposed “steady-state” output level.
  • Do values of \(f(t)\) eventually move and stay within the range \(1 \lt f(t) \lt 3\text{?}\) Yes, they will do so for all \(t \gt 1\text{.}\)
  • Do values of \(f(t)\) eventually move and stay within the range \(1.99 \lt f(t) \lt 2.01\text{?}\) Yes, they will do so for all \(t \gt 10\text{.}\)
  • Do values of \(f(t)\) eventually move and stay within the range of \(2\) plus or minus one one-hundred-trillionth? Yes, they will do so for all \(t \gt {10}^7\text{.}\)
Ultimately, you can ask the above question with any small range
\begin{equation*} 2 - \epsilon \lt f(t) \lt 2 + \epsilon \end{equation*}
you like (where \(\epsilon\) represents a very small amount above or below \(2\)) and the answer will always be “Yes, for all \(t \gt 1/\sqrt{\epsilon}\text{.}\)” This verifies that output values of \(f(t)\) become arbitrarily close to \(2\) in the long-term.
Figure 3.1.7. The function \(f(t) = 2 + 1/t^2\) eventually moves within every arbitrarily small range around a proposed approximate long-term value.
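The band-containment claims can be spot-checked the same way. This minimal sketch (ours; the sample points are arbitrary) verifies that \(2 + 1/t^2\) stays within \(2 \pm \epsilon\) at sampled inputs beyond the cutoff \(t \gt 1/\sqrt{\epsilon}\text{:}\)

import math

def f(t):
    return 2 + 1 / t**2

# The three ranges considered above: eps = 1, 0.01, and one one-hundred-trillionth.
for eps in [1.0, 0.01, 1e-14]:
    cutoff = 1 / math.sqrt(eps)                 # the cutoff t > 1/sqrt(eps) from the example
    samples = [cutoff * k for k in (1.01, 5, 100)]
    print(eps, all(abs(f(t) - 2) < eps for t in samples))   # expect True for every eps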
When steady-state long-term behaviour \(f(t) \to L\) occurs, as in Example 3.1.6, we often place a dashed horizontal line at that level when drawing the function’s graph.
Definition 3.1.8. Horizontal asymptote.
A horizontal line that a function’s graph approaches, becoming closer and closer to it as one goes further out on the graph.
Figure 3.1.9. The graph of \(f(t) = 2 + 1/t^2\) approaches a horizontal asymptote at height \(2\text{.}\)
Finally, here is an example where the long-term behaviour does not fall into any of the categories in Definition 3.1.1.
Example 3.1.10. Modelling pulsar emissions.
A pulsar is a type of star that only emits electromagnetic radiation from its magnetic poles. As a pulsar rotates, its emissions become visible to anyone in the path of its beam for a split second before the beam rotates away from the observer again. Essentially, a pulsar is an enormous strobe light in space. If we create a simple model function \(R(t)\) of the radiation received by an observer, we might come up with something like the graph in Figure 3.1.11, in which there are repeating, short “on” and “off” periods, with a constant emission level of \(R_0\) during the “on” periods.
Figure 3.1.11. A model of the observed on/off pattern of a pulsar.
It would not be appropriate to write either \(R(t) \to \pm \infty\) as \(t \to \infty\text{,}\) as clearly the function’s values are neither growing nor decaying, only flip-flopping. And it would also not be appropriate to write either \(R(t) \to R_0\) or \(R(t) \to 0\) as \(t \to \infty\text{,}\) for the following reason. If we consider the range
\begin{equation*} \frac{R_0}{2} \lt R(t) \lt \frac{3 R_0}{2} \end{equation*}
which contains \(R_0\text{,}\) then the function’s output values will not move and stay within that range, since after a very short time within that range they will once again flop out of it. And a similar argument can be made about the range
\begin{equation*} -\frac{R_0}{2} \lt R(t) \lt \frac{R_0}{2} \end{equation*}
containing \(0\text{.}\) No matter how far out on the timeline we go, we always see this same behaviour of not staying within a small range around any one particular output level.
In reality, a pulsar loses energy over tens of millions of years, slowing down and eventually “turning off.” So the fact that our mathematical analysis above is at odds with our expectations from knowledge of physics is a good indication that our simple model is only appropriate for short-term analysis, and a more sophisticated model should be used for long-term analysis.
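To make the flip-flopping concrete, here is a toy version of such a model in Python (the period, duty cycle, and value of \(R_0\) are arbitrary choices of ours, not part of the text); sampling it very far out on the timeline still produces both output levels:

R0 = 1.0           # assumed emission level during an "on" period
PERIOD = 1.0       # assumed rotation period
ON_FRACTION = 0.1  # assumed fraction of each rotation spent pointing at the observer

def R(t):
    # Toy on/off model: R0 during the brief "on" window, 0 otherwise.
    return R0 if (t % PERIOD) < ON_FRACTION * PERIOD else 0.0

# No matter how far out we sample, both output levels keep occurring,
# so R(t) never moves and stays within a small range around any single level.
far_out = [1_000_000 + 0.05 * k for k in range(40)]
print(sorted(set(R(t) for t in far_out)))      # expect [0.0, 1.0]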

Subsection 3.1.2 Fundamental long-term behaviour patterns

Here are some simple but fundamental patterns that we will use to analyze more complex examples, by recognizing these patterns as “pieces” of the more complex ones.
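Stated informally here for reference (the numbered patterns give the precise statements and hypotheses), the behaviours these patterns record, as \(t \to \infty\text{,}\) are the following.
  • A constant expression approaches itself: \(c \to c\text{.}\)
  • A linear expression with nonzero rate grows or decays without bound: \(m t + q_0 \to \pm \infty\text{,}\) where the plus-or-minus matches the sign of the rate \(m\) (Pattern 3.1.13).
  • A power with positive exponent grows without bound: \(t^m \to \infty\) (Pattern 3.1.14).
  • A reciprocal power with positive exponent decays to zero: \(1/t^m \to 0\) (Pattern 3.1.15).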
We can turn Pattern 3.1.15 into a more general pattern: if a function satisfies \(f(t) \to \infty\) or \(f(t) \to -\infty\) as \(t \to \infty\text{,}\) then its reciprocal satisfies \(1/f(t) \to 0\) as \(t \to \infty\) (Pattern 3.1.16).
Warning 3.1.17. Converse of Pattern 3.1.16.
We have to be careful with attempting to reverse Pattern 3.1.16. If we have a function \(f(t)\) for which we know that
\begin{equation*} f(t) \to 0 \qquad \text{as} \qquad t \to \infty \text{,} \end{equation*}
it may be the case that the reciprocal function
\begin{equation*} g(t) = \frac{1}{f(t)} \end{equation*}
satisfies one of
\begin{equation*} g(t) \to \pm \infty \qquad \text{as} \qquad t \to \infty \text{,} \end{equation*}
but it may not. For example, it could be that \(f(t) \to 0\) occurs by oscillating between positive and negative values but with ever-decreasing “peaks,” making the reciprocal function \(g(t)\) oscillate between large positive and large negative output values. Even worse, the function \(f(t)\) may cross the \(0\) level an infinite number of times as it oscillates, making the output value of the reciprocal function \(g(t)\) undefined at all of those input values.
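As a concrete illustration of this warning, consider \(f(t) = \sin(t)/t\) (our choice of example, not one from the text): it satisfies \(f(t) \to 0\) as \(t \to \infty\text{,}\) yet its reciprocal keeps taking large values of both signs and is undefined wherever \(\sin(t) = 0\text{.}\) A minimal sketch:

import math

def f(t):
    return math.sin(t) / t          # f(t) -> 0 as t -> infinity, oscillating in sign

# The reciprocal 1/f(t) = t/sin(t) keeps producing large positive and large
# negative outputs, and is undefined wherever sin(t) = 0.
for k in range(13):
    t = 1000 + 0.5 * k              # arbitrary sample points spanning roughly one oscillation
    print(t, f(t), 1 / f(t))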

Subsection 3.1.3 Analyzing more complex examples

In general, we analyze more complex examples by breaking them into pieces.
Example 3.1.18. Combination of different behaviours.
Consider function
\begin{equation*} f(t) = 2 + \frac{1}{t^2} - t^2 \text{.} \end{equation*}
Individually, we know that
\begin{align*} 2 \amp \to 2 \amp \frac{1}{t^2} \amp \to 0 \amp t^2 \amp \to \infty \end{align*}
as \(t \to \infty\text{.}\) So overall, the outputs of \(f(t)\) will become large in magnitude in the long term. Noticing the minus sign in front of the dominant \(t^2\) term in the formula for \(f(t)\text{,}\) we conclude
\begin{equation*} f(t) \to -\infty \qquad \text{as} \qquad t \to \infty \text{.} \end{equation*}
Example 3.1.19. A rational expression of “infinities”.
Consider rational function
\begin{equation*} f(t) = \frac{2 t^4 - 3 t^3 + t + 4}{3 t^4 + 4 t^3 + 6} \text{.} \end{equation*}
Based on Pattern 3.1.14, every nonconstant term in both numerator and denominator approaches one of \(\pm \infty\text{.}\) Will the two “infinities” in the numerator “cancel” each other because of the minus sign between them? Will the “infinities” in the numerator and denominator “cancel” each other, resulting in a ratio approaching \(1\text{?}\)
Obtaining the answers to both questions requires considering the relative sizes involved. We are not literally adding, subtracting, and dividing infinities in this expression; we are looking for a pattern in the actual output values for large but finite values of \(t\text{.}\) And for large input values of \(t\text{,}\) the larger the exponent on a power of \(t\text{,}\) the larger the result. For example, when considering the
\begin{equation*} 2 t^4 - 3 t^3 \end{equation*}
in the numerator, \(t^4\) is much, much larger than \(t^3\) when a large \(t\) is input. So much larger, in fact, that even though \(3 t^3\) is large itself, subtracting it from \(2 t^4\) does not have much of an effect on the \(2 t^4\text{.}\) And if subtracting \(3 t^3\) has a negligible effect, we may as well also ignore the extra \(t + 4\) in the numerator, which is much smaller still. So we can actually write
\begin{equation*} 2 t^4 - 3 t^3 + t + 4 \approx 2 t^4 \end{equation*}
for large values of \(t\text{.}\) Similarly, in the denominator we can write
\begin{equation*} 3 t^4 + 4 t^3 + 6 \approx 3 t^4 \end{equation*}
for large values of \(t\text{.}\)
In this way, for large values of \(t\) we can focus on the dominant terms in numerator and denominator:
\begin{equation*} f(t) \approx \frac{2 t^4}{3 t^4} = \frac{2}{3} \text{.} \end{equation*}
The larger \(t\) gets, the more negligible the non-dominant terms become, and the more accurate the approximation above becomes, so we conclude that
\begin{equation*} f(t) \to \frac{2}{3} \qquad \text{as} \qquad t \to \infty \text{.} \end{equation*}
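This prediction is easy to spot-check by evaluating the original formula at a few large inputs; here is a minimal Python sketch (ours; the sample points are arbitrary):

def f(t):
    return (2 * t**4 - 3 * t**3 + t + 4) / (3 * t**4 + 4 * t**3 + 6)

# The outputs settle toward 2/3 = 0.666... as t grows.
for t in [10, 100, 10_000, 1_000_000]:
    print(t, f(t))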
Warning 3.1.20. Infinity is not a number.
Do not perform arithmetic with infinities. In particular,
\begin{align*} \infty - \infty \amp \neq 0 \amp \frac{\infty}{\infty} \amp \neq 1 \text{.} \end{align*}
We can make the concept of dominant terms more precise by using Pattern 3.1.15 (or, when appropriate, Pattern 3.1.16), forming ratios inside the overall ratio to compare the “large” terms directly and see which is dominant.
Example 3.1.22. Determining dominant terms.
Let’s return to the function from Example 3.1.19. We identify the \(t^4\) as the largest expression in the denominator, so we will compare every other term in the ratio to that by dividing.
\begin{align*} f(t) \amp = \frac{2 t^4 - 3 t^3 + t + 4}{3 t^4 + 4 t^3 + 6} \\ \amp = \frac{2 t^4 - 3 t^3 + t + 4}{3 t^4 + 4 t^3 + 6} \cdot \frac{ 1/t^4 }{ 1/t^4 } \\ \amp = \frac{(2 t^4 / t^4) - (3 t^3 / t^4) + (t / t^4) + (4 / t^4)}{(3 t^4 / t^4) + (4 t^3 / t^4) + (6 / t^4)} \\ \amp = \frac{2 - (3 / t) + (1 / t^3) + (4 / t^4)}{3 + (4 / t) + (6 / t^4)} \text{.} \end{align*}
Note that the second line above is justified by the fact that for \(t \neq 0\) we have
\begin{equation*} \frac{ 1/t^4 }{ 1/t^4 } = 1 \text{,} \end{equation*}
and multiplying by \(1\) has no effect. In the last line, every term that still has a power of \(t\) in the denominator is eventually arbitrarily close to \(0\) by Pattern 3.1.15. With this knowledge, we can say
\begin{align*} f(t) \amp = \frac{2 - (3 / t) + (1 / t^3) + (4 / t^4)}{3 + (4 / t) + (6 / t^4)} \\ \amp \approx \frac{2 - 0 + 0 + 0}{3 + 0 + 0} \\ \amp = \frac{2}{3} \text{,} \end{align*}
and we come to the same conclusion
\begin{equation*} f(t) \to \frac{2}{3} \qquad \text{as} \qquad t \to \infty \text{.} \end{equation*}
Checkpoint 3.1.23. Comparing dominant terms.
Carry out a similar analysis as in Example 3.1.22 for
\begin{align*} g(t) \amp = \frac{2 t^5 - 3 t^3 + t + 4}{3 t^4 + 4 t^3 + 6} \amp h(t) \amp = \frac{2 t^4 - 3 t^3 + t + 4}{3 t^5 + 4 t^3 + 6}\text{.} \end{align*}
How do the relative sizes of the dominant terms affect the result?
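If you would like to check your conclusions numerically, a quick sketch like the following (ours, with arbitrary sample points) shows how differently the outputs of \(g(t)\) and \(h(t)\) behave for large \(t\text{:}\)

def g(t):
    return (2 * t**5 - 3 * t**3 + t + 4) / (3 * t**4 + 4 * t**3 + 6)

def h(t):
    return (2 * t**4 - 3 * t**3 + t + 4) / (3 * t**5 + 4 * t**3 + 6)

# g has the higher-degree numerator, h the higher-degree denominator;
# compare how their output values behave as t grows.
for t in [10, 1000, 100_000]:
    print(t, g(t), h(t))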

Subsection 3.1.4 Ancient behaviour

The question “What will happen a long time from now?” has a flipped version: if we have a model for how a system has behaved in recent experience, we might want to extrapolate backwards and ask “What happened a long time ago?” Similar to the patterns in Definition 3.1.1, we write
\begin{equation*} f(t) \to \pm \infty \qquad \text{as} \qquad t \to -\infty \end{equation*}
or
\begin{equation*} f(t) \to L \qquad \text{as} \qquad t \to -\infty \end{equation*}
when the function \(f(t)\) exhibits the appropriate behaviour for large but negative values of \(t\text{.}\) When verifying these behaviours in examples from the definitions for \(t \to -\infty\text{,}\) the process is essentially the same as that used in Example 3.1.4 and Example 3.1.6, except that we now take eventually to mean that the behaviour (above/below a certain proposed boundary level or within a certain proposed range around an approximate long-term value) must hold on a subdomain \(t \lt t_1\) for some large, negative \(t_1\text{.}\)
Most of the same patterns described in Subsection 3.1.2 still hold, with the following alterations.
  1. The plus-or-minus in Pattern 3.1.13 is flipped, so that a linear expression satisfies
    \begin{equation*} m t + q_0 \to \mp \infty \qquad \text{as} \qquad t \to -\infty \text{,} \end{equation*}
    where the minus-or-plus depends on whether the constant rate factor \(m\) is positive or negative.
  2. For (positive) integer exponents in Pattern 3.1.14, the pattern becomes
    \begin{equation*} t^m \to \pm \infty \qquad \text{as} \qquad t \to - \infty \text{,} \end{equation*}
    where the plus-or-minus depends on whether the exponent is even or odd.
    When the exponent is not an integer, then some care must be taken in even considering \(t \to -\infty\text{.}\) For example, if \(m = 1/2\) then there are no negative numbers in the domain of \(f(t) = t^{1/2} = \sqrt{t}\text{,}\) and so looking for a pattern in the output values for this function as \(t \to -\infty\) makes no sense.
  3. Since \(- 0 = 0\text{,}\) we don’t have the even/odd split in Pattern 3.1.15 that we have in Pattern 3.1.14, but the same warning about taking care when the exponent is not an integer applies.
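The even/odd split for powers, and the absence of any split for reciprocal powers, are easy to see numerically; here is a minimal sketch (ours, with arbitrary large negative inputs):

# As t -> -infinity: even powers grow large and positive, odd powers grow
# large and negative, and reciprocal powers of either parity shrink toward 0.
for t in [-10.0, -1000.0, -100_000.0]:
    print(t, t**2, t**3, 1 / t**2, 1 / t**3)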
Example 3.1.24. A rational expression of “infinities” for \(t \to -\infty\).
Let’s revisit
\begin{equation*} f(t) = \frac{2 t^4 - 3 t^3 + t + 4}{3 t^4 + 4 t^3 + 6} \end{equation*}
from Example 3.1.22. The algebraic analysis in that example still holds to allow us to say
\begin{equation*} f(t) \approx \frac{2 t^4}{3 t^4} = \frac{2}{3} \end{equation*}
for all large and negative values of \(t\text{.}\) This works because whether
\begin{equation*} t^m \to \infty \qquad \text{or} \qquad t^m \to -\infty \end{equation*}
as \(t \to -\infty\) (depending on whether the exponent \(m\) is even or odd), in both cases it will be true that
\begin{equation*} \frac{1}{t^m} \to 0 \qquad \text{as} \qquad t \to -\infty \text{.} \end{equation*}
So we can still use dominant terms to analyze the ancient behaviour of a rational function like this, and in this case we see that the ancient behaviour and the long-term behaviour are identical.
Figure 3.1.25. A function with a symmetric horizontal asymptote.
Warning 3.1.26. Ancient and long-term behaviour can be different.
We should not take from Example 3.1.24 that the behaviour as \(t \to -\infty\) will always be the same as the behaviour as \(t \to \infty\text{.}\)
Example 3.1.27. Different analyses may be required for \(t \to \pm \infty\).
For function
\begin{equation*} f(t) = \frac{\sqrt{2 t^2 + 1}}{3 t - 5} \text{,} \end{equation*}
first consider \(t \to \infty\text{.}\) Analyzing dominant terms, we may consider both the “plus one” in the numerator and the “minus five” in the denominator as negligible for large values of \(t\text{,}\) so that
\begin{align*} f(t) \amp \approx \frac{\sqrt{2 t^2}}{3 t} \\ \amp = \frac{t \sqrt{2}}{3 t} \\ \amp = \frac{\sqrt{2}}{3} \text{.} \end{align*}
So we conclude that
\begin{equation*} f(t) \to \frac{\sqrt{2}}{3} \qquad \text{as} \qquad t \to \infty \text{.} \end{equation*}
For \(t \to -\infty\) our analysis of dominant terms above remains the same, but there is one algebraic manipulation that is only valid for positive values of \(t\text{.}\) The simplification
\begin{equation*} \sqrt{2 t^2} = t \sqrt{2} \end{equation*}
is not valid for negative values of \(t\text{,}\) which is what we are considering when we analyze \(t \to -\infty\text{.}\) For example, for \(t = -1\) we have
\begin{align*} \sqrt{2 t^2} \amp = \sqrt{2 {(-1)}^2} \amp t \sqrt{2} \amp = (-1) \sqrt{2} \\ \amp = \sqrt{2} \amp \amp = -\sqrt{2} \end{align*}
and the two expressions are not equal. Ignoring the \(2\) for the moment, the correct simplification in all cases is
\begin{equation*} \sqrt{t^2} = \abs{t} \text{,} \end{equation*}
where the absolute value brackets can be ignored when \(t\) is positive since then \(\abs{t} = t\text{.}\)
Reworking our algebraic manipulations then, for large and negative values of \(t\) we have
\begin{align*} f(t) \amp \approx \frac{\sqrt{2 t^2}}{3 t} \\ \amp = \frac{\sqrt{2} \abs{t}}{3 t} \text{.} \end{align*}
Now,
\begin{equation*} \frac{\abs{t}}{t} = - 1 \text{,} \end{equation*}
when \(t\) is negative since the numerator and denominator have the same magnitude but one is positive and the other is negative. So we have
\begin{equation*} f(t) \approx - \frac{\sqrt{2}}{3} \end{equation*}
for large, negative values of \(t\) and we conclude that
\begin{equation*} f(t) \to - \frac{\sqrt{2}}{3} \qquad \text{as} \qquad t \to -\infty \text{.} \end{equation*}
Figure 3.1.28. A graph with two different horizontal asymptotes.
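A numerical spot check (ours, with arbitrary sample points) confirms the two different long-term values \(\pm \sqrt{2}/3 \approx \pm 0.471\text{:}\)

import math

def f(t):
    return math.sqrt(2 * t**2 + 1) / (3 * t - 5)

# Large positive inputs approach +sqrt(2)/3, large negative inputs approach -sqrt(2)/3.
for t in [100.0, 10_000.0, -100.0, -10_000.0]:
    print(t, f(t))
print("sqrt(2)/3 =", math.sqrt(2) / 3)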

Subsection 3.1.5 Slant asymptotes

Sometimes we can say more about how the patterns \(f(t) \to \pm \infty\) occur.
Example 3.1.29. An “eventually linear” function.
In your analysis of
\begin{equation*} g(t) = \frac{2 t^5 - 3 t^3 + t + 4}{3 t^4 + 4 t^3 + 6} \end{equation*}
in Checkpoint 3.1.23, you should have found that
\begin{equation*} g(t) \approx \frac{2 t}{3} \end{equation*}
for large values of \(t\text{,}\) whether positive or negative. From this we can conclude that
\begin{equation*} g(t) \to \infty \qquad \text{as} \qquad t \to \infty \end{equation*}
and
\begin{equation*} g(t) \to -\infty \qquad \text{as} \qquad t \to -\infty \text{.} \end{equation*}
However, the expression
\begin{equation*} g(t) \approx \frac{2}{3} t \end{equation*}
is actually more informative about exactly how \(g(t) \to \pm \infty\text{;}\) in particular, it says that \(g(t)\) is essentially linear for large values of \(t\text{,}\) growing at approximately constant rate \(2/3\text{.}\)
There may, however, be an “offset” to this linear growth. To see this, let’s investigate more closely how the numerator and denominator polynomials relate. In the numerator, the only term that matters is the \(2 t^5\) since the other terms are “small” compared to the \(3 t^4\) in the denominator:
\begin{align*} g(t) \amp = \frac{2 t^5 - 3 t^3 + t + 4}{3 t^4 + 4 t^3 + 6} \\ \amp = \frac{2 t^5}{3 t^4 + 4 t^3 + 6} + \frac{- 3 t^3 + t + 4}{3 t^4 + 4 t^3 + 6} \\ \amp \approx \frac{2 t^5}{3 t^4 + 4 t^3 + 6} \text{,} \end{align*}
where the approximately equal is true for large \(t\text{.}\) Now let’s make the numerator look more like the denominator.
\begin{align*} g(t) \amp \approx \frac{2 t^5}{3 t^4 + 4 t^3 + 6} \\ \amp = 2 t \cdot \frac{t^4}{3 t^4 + 4 t^3 + 6} \\ \amp = \frac{2}{3} \, t \cdot \frac{3 t^4}{3 t^4 + 4 t^3 + 6} \\ \amp = \frac{2}{3} \, t \cdot \frac{3 t^4 + 4 t^3 + 6 - (4 t^3 + 6)}{3 t^4 + 4 t^3 + 6} \text{.} \end{align*}
In this last step, we have added and then immediately subtracted some terms to make the numerator look even more like the denominator, while maintaining equality with the previous expression. Continuing,
\begin{align*} g(t) \amp \approx \frac{2}{3} \, t \cdot \frac{3 t^4 + 4 t^3 + 6 - (4 t^3 + 6)}{3 t^4 + 4 t^3 + 6} \\ \amp = \frac{2}{3} \, t \cdot \frac{3 t^4 + 4 t^3 + 6}{3 t^4 + 4 t^3 + 6} - \frac{2}{3} \, t \cdot \frac{4 t^3 + 6}{3 t^4 + 4 t^3 + 6}\\ \amp = \frac{2}{3} \, t \cdot 1 - \frac{2}{3} \cdot \frac{4 t^4 + 6 t}{3 t^4 + 4 t^3 + 6}\text{.} \end{align*}
Looking at the dominant terms in the remaining rational expression, we have
\begin{align*} g(t) \amp \approx \frac{2}{3} \, t - \frac{2}{3} \cdot \frac{4}{3}\\ \amp = \frac{2}{3} \, t - \frac{8}{9} \text{.} \end{align*}
We conclude that for large \(t\text{,}\) the graph of \(g(t)\) will look like the line
\begin{equation*} y = \frac{2}{3} \, t - \frac{8}{9} \text{.} \end{equation*}
Figure 3.1.30. A graph with a slant asymptote. The graph will slowly approach the dashed line both to the upper right and the lower left.
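We can spot-check this numerically by measuring how far \(g(t)\) is from the proposed line; here is a minimal sketch (ours, with arbitrary sample points):

def g(t):
    return (2 * t**5 - 3 * t**3 + t + 4) / (3 * t**4 + 4 * t**3 + 6)

def line(t):
    return (2 / 3) * t - 8 / 9      # the proposed slant asymptote

# The gap between g(t) and the line shrinks toward 0 in both directions.
for t in [10.0, 1000.0, 100_000.0, -10.0, -1000.0, -100_000.0]:
    print(t, g(t) - line(t))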
Definition 3.1.31. Slant asymptote.
A line with non-zero slope that a function’s graph approaches, becoming closer and closer to it as one goes further out on the graph.

Section 3.2 Comparing growth of two functions

There are two simple algebraic ways we can compare two expressions \(A\) and \(B\text{:}\) using a difference \(A - B\) or using a ratio \(A / B\text{.}\) However, the result of a difference depends on both the magnitudes and the combination of signs of \(A\) and \(B\text{,}\) whereas a ratio tends to keep the matters of sign and relative magnitude separate. Furthermore, we can more easily perform algebraic manipulations in a ratio without changing its actual value.
In the case of two functions \(f(t)\) and \(g(t)\) which both satisfy
\begin{align*} f(t) \amp \to \infty \amp g(t) \amp \to \infty \end{align*}
as \(t \to \infty\text{,}\) we can compare how fast each grows to \(\infty\) by forming a ratio of their output values and re-analyzing as \(t \to \infty\text{.}\) Effectively, this pits them against each other in a race to \(\infty\text{.}\)

Definition 3.2.1. Relative rates of growth.

Assume functions \(f(t)\) and \(g(t)\) both satisfy
\begin{align*} f(t) \amp \to \infty \amp g(t) \amp \to \infty \end{align*}
as \(t \to \infty\text{.}\)
  1. Faster growth.
    If
    \begin{equation*} \frac{f(t)}{g(t)} \to \infty \text{ as } t \to \infty \end{equation*}
    then we say that \(f(t)\) grows faster than \(g(t)\text{.}\)
  2. Slower growth.
    If
    \begin{equation*} \frac{f(t)}{g(t)} \to 0 \text{ as } t \to \infty \end{equation*}
    then we say that \(f(t)\) grows more slowly than \(g(t)\text{.}\)
  3. Comparable growth.
    If
    \begin{equation*} \frac{f(t)}{g(t)} \to L \text{ as } t \to \infty \end{equation*}
    where \(L\) is a positive, finite number, then we say that \(f(t)\) and \(g(t)\) grow at comparable rates.

Remark 3.2.2.

In the third case in Definition 3.2.1, it is not necessary for \(L\) to be \(1\) to be able to say that \(f(t)\) and \(g(t)\) grow at comparable rates — any other value for \(L\) represents an approximate scale factor between \(g(t)\) and \(f(t)\) as they grow in tandem.
We will revisit this concept of relative rate of growth several times in later chapters as we grow our stable of important functions. For now, here are some simple examples based on computations we have already performed.

Example 3.2.3. Growing at a faster rate.

We can re-interpret the calculations of Example 3.1.29 by setting
\begin{align*} f(t) \amp = 2 t^5 - 3 t^3 + t + 4 \amp g(t) \amp = 3 t^4 + 4 t^3 + 6\text{.} \end{align*}
These two functions satisfy
\begin{align*} f(t) \amp \to \infty \amp g(t) \amp \to \infty \end{align*}
as \(t \to \infty\text{.}\) In that previous example, we analyzed the long-term behaviour of the ratio of these two functions and found that
\begin{equation*} \frac{f(t)}{g(t)} \to \infty \qquad \text{as} \qquad t \to \infty \text{,} \end{equation*}
which tells us that \(f(t)\) grows faster than \(g(t)\text{.}\)

Example 3.2.4. Growing at a comparable rate.

We can re-interpret the calculations of Example 3.1.27 by setting
\begin{align*} f(t) \amp = \sqrt{2 t^2 + 1} \amp g(t) \amp = 3 t - 5\text{.} \end{align*}
These two functions satisfy
\begin{align*} f(t) \amp \to \infty \amp g(t) \amp \to \infty \end{align*}
as \(t \to \infty\text{.}\) In that previous example, we analyzed the long-term behaviour of the ratio of these two functions and found that
\begin{equation*} \frac{f(t)}{g(t)} \to \frac{\sqrt{2}}{3} \qquad \text{as} \qquad t \to \infty \text{,} \end{equation*}
which tells us that \(f(t)\) and \(g(t)\) grow at comparable rates, though eventually
\begin{equation*} f(t) \approx \frac{\sqrt{2}}{3} \, g(t) \text{.} \end{equation*}
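Both comparisons are easy to observe numerically; here is a minimal sketch (ours, with arbitrary sample points) that evaluates the two ratios at increasingly large inputs:

import math

def ratio_faster(t):
    # Example 3.2.3: this ratio grows without bound.
    return (2 * t**5 - 3 * t**3 + t + 4) / (3 * t**4 + 4 * t**3 + 6)

def ratio_comparable(t):
    # Example 3.2.4: this ratio settles near sqrt(2)/3 = 0.4714...
    return math.sqrt(2 * t**2 + 1) / (3 * t - 5)

for t in [100.0, 10_000.0, 1_000_000.0]:
    print(t, ratio_faster(t), ratio_comparable(t))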

Section 3.3 Singular behaviour

Subsection 3.3.1 Concept, definition, and basic examples

With a mathematical model for a system in hand, another natural question to ask is “Are there any input values near which the output values become unbounded?” (For example, if you have a mathematical model of an energy supply, you might want to know if there are any particular operating states that cause it to create an enormous surge of energy.)
Definition 3.3.1. Singular behaviour.
  1. Growth singularity.
    We write
    \begin{equation*} f(t) \to \infty \qquad \text{as} \qquad t \to c \end{equation*}
    to mean that output values of \(f(t)\) become arbitrarily large and positive for all inputs \(t\) sufficiently close to (but not equal to) \(c\text{.}\)
  2. Decay singularity.
    We write
    \begin{equation*} f(t) \to -\infty \qquad \text{as} \qquad t \to c \end{equation*}
    to mean that output values of \(f(t)\) become arbitrarily large and negative for all inputs \(t\) sufficiently close to (but not equal to) \(c\text{.}\)
In either case, we say that \(f(t)\) has a singularity at \(t = c\text{.}\)
In Definition 3.3.1, arbitrarily large means just the same as it did when we defined long-term behaviour. And sufficiently close means essentially the same as arbitrarily close, except
  1. the change from arbitrarily to sufficiently reflects the fact that we are in control of how close to \(c\) we need to restrict \(t\) to be in order to achieve the arbitrarily large outputs
  2. we don’t necessarily require a symmetric subdomain around \(t = c\text{.}\)
Example 3.3.2. The prototypical growth singularity.
Consider values of the function
\begin{equation*} f(t) = \frac{1}{t^2} \end{equation*}
near \(t = 0\text{.}\) We can’t actually evaluate \(f(0)\) because that involves division by zero. But if \(t \approx 0\) then \(f(t)\) is very large and positive.
  • Do values of \(f(t)\) grow at least as large as \(100\) for \(t \approx 0\text{?}\) Yes, they will do so for all \(t\) in the subdomain \(0 \pm 0.1\) (excluding \(t = 0\)).
  • Do values of \(f(t)\) grow at least as large as \(100\) trillion for \(t \approx 0\text{?}\) Yes, they will do so for all \(t\) in the subdomain \(0 \pm {10}^{-7}\) (excluding \(t = 0\)).
In fact, you can answer the above question for every proposed upper limit value \(M\) by restricting \(t\) to be within the subdomain
\begin{equation*} 0 \pm \frac{1}{\sqrt{M}} \end{equation*}
(excluding \(t = 0\text{,}\) as always). From this we conclude
\begin{equation*} f(t) \to \infty \qquad \text{as} \qquad t \to 0 \text{,} \end{equation*}
and so \(f(t)\) has a singularity at \(t = 0\text{.}\)
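The thresholds used above can be spot-checked numerically; here is a minimal sketch (ours, with arbitrary sample points taken inside the stated subdomains):

def f(t):
    return 1 / t**2

# Inputs within 0 +/- 0.1 give outputs of at least 100; inputs within
# 0 +/- 10^(-7) give outputs of at least 100 trillion; and so on.
for t in [0.1, -0.1, 0.05, 1e-7, -1e-7, 1e-15]:
    print(t, f(t))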
Warning 3.3.3. Infinity is not a number.
As we’ve already warned before, you should not attempt to perform arithmetic with \(\infty\text{.}\)
Example 3.3.4.
Consider
\begin{equation*} f(t) = \frac{1}{t^4} - \frac{1}{t^2} \text{.} \end{equation*}
In Example 3.3.2, we have already demonstrated that
\begin{equation*} \frac{1}{t^2} \to \infty \qquad \text{as} \qquad t \to 0 \text{.} \end{equation*}
A similar analysis would show that
\begin{equation*} \frac{1}{t^4} \to \infty \qquad \text{as} \qquad t \to 0 \text{.} \end{equation*}
So it would seem that, in the difference that defines \(f(t)\text{,}\) the two infinities would cancel, and we should have \(f(t) \approx 0\) for \(t \approx 0\text{.}\) But this is not the case: if we use a common denominator, we have
\begin{align*} f(t) \amp = \frac{1}{t^4} - \frac{1}{t^2} \\ \amp = \frac{1 - t^2}{t^4} \text{.} \end{align*}
For \(t \approx 0\text{,}\) the numerator of this new formula for \(f(t)\) is approximately \(1\text{,}\) but the denominator is very small and positive, so the ratio is very large. That is, \(f(t)\) has a singularity at \(t = 0\) with
\begin{equation*} f(t) \to \infty \qquad \text{as} \qquad t \to 0 \text{.} \end{equation*}
Looking back at the original difference formula for \(f(t)\text{,}\) we can interpret this as saying that for \(t \approx 0\text{,}\) the size of \(1/t^4\) is so large as to make the subtraction of \(1/t^2\) negligible.

Subsection 3.3.2 One-sided singularities

For some functions, it is the case that the values become large and positive on one side of a singularity and large and negative on the other side. To handle this sort of situation, we make the following definition.
Definition 3.3.5. One-sided singular behaviour.
  1. Right-hand singularity.
    We write
    \begin{equation*} f(t) \to \infty \qquad \text{as} \qquad t \to c^+ \end{equation*}
    to mean that output values of \(f(t)\) become arbitrarily large and positive for all inputs \(t\) sufficiently close to but strictly greater than \(c\text{.}\)
  2. Left-hand singularity.
    We write
    \begin{equation*} f(t) \to \infty \qquad \text{as} \qquad t \to c^- \end{equation*}
    to mean that output values of \(f(t)\) become arbitrarily large and positive for all inputs \(t\) sufficiently close to but strictly less than \(c\text{.}\)
And we can make similar definitions for the meanings of \(f(t) \to -\infty\) as \(t \to c^+\) or as \(t \to c^-\text{.}\)
Example 3.3.6. A function with different one-sided singular behaviours.
Consider function
\begin{equation*} f(t) = \frac{t - 2}{t - 1} \text{.} \end{equation*}
For \(t \approx 1\) but greater than \(1\text{,}\) the numerator is approximately \(-1\) but the denominator is small and positive, making the ratio very large and negative. Hence
\begin{equation*} f(t) \to -\infty \qquad \text{as} \qquad t \to 1^+ \text{.} \end{equation*}
For \(t \approx 1\) but less than \(1\text{,}\) the numerator is still approximately \(-1\) but the denominator is now small and negative, making the ratio very large and positive. Hence
\begin{equation*} f(t) \to \infty \qquad \text{as} \qquad t \to 1^- \text{.} \end{equation*}
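Evaluating just to the right and just to the left of \(t = 1\) makes the two different behaviours visible; here is a minimal sketch (ours, with arbitrary sample points):

def f(t):
    return (t - 2) / (t - 1)

# Just to the right of t = 1 the outputs are large and negative;
# just to the left they are large and positive.
for t in [1.1, 1.01, 1.0001, 0.9, 0.99, 0.9999]:
    print(t, f(t))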
Subsection 3.3.3 Vertical asymptotes
Recall that, graphically, long-term behaviour
\begin{equation*} f(t) \to \mathrm{constant} \qquad \text{as} \qquad t \to \infty \end{equation*}
means the function’s graph is approaching a Horizontal asymptote at the height of the constant that the output levels are approaching. For singular behaviour
\begin{equation*} f(t) \to \infty \qquad \text{as} \qquad t \to \mathrm{constant} \end{equation*}
this is reversed, so that the function’s graph is approaching a vertical line at a specific value of \(t\text{.}\)
Definition 3.3.7. Vertical asymptote.
A vertical line that a function’s graph approaches, becoming closer and closer to it as the inputs are made closer to the location of the vertical line.
(a) A function that displays the same behaviour on either side of a vertical asymptote. (From Example 3.3.2.)
(b) A function that displays different behaviours on either side of a vertical asymptote. (From Example 3.3.6.)
Figure 3.3.8. Two kinds of vertical asymptotes.