11. Numerical Solutions of ODEs I

Stability Considerations


In this lesson, you will learn why some numerical methods for ordinary differential equations behave nicely while others can suddenly produce wildly wrong answers 😮. The main focus is stability: how a method reacts when small errors are present. These errors may come from rounding, imperfect starting values, or the approximations made by the numerical method itself.

What you will learn

  • What stability means in numerical methods for ordinary differential equations.
  • Why a method can give accurate results for one step but still fail over many steps.
  • How Euler’s method connects to stability considerations.
  • How to recognize when a step size is too large.
  • Why stability is important in the broader topic of numerical solutions of ODEs.

Why stability matters

When solving an ordinary differential equation numerically, we do not get the exact solution curve. Instead, we compute a sequence of values like $y_0, y_1, y_2, \dots$ that should approximate the true solution at points $x_0, x_1, x_2, \dots$.

But every computation introduces tiny errors. These can come from:

  • rounding numbers on a computer,
  • approximating derivatives with formulas,
  • using a step size $h$ that is not small enough,
  • starting with approximate initial data.

A method is called stable if these small errors do not grow too fast as the computation moves forward. If errors grow dramatically, the numerical solution can become unreliable even if the formula used is correct.

A useful real-world comparison is balancing a pencil on your finger. A tiny push may either settle back down or tip over completely. Stability asks: does the method behave like a pencil that settles, or one that falls over? 🖊️

Stability in the simplest Euler method

Euler’s method is one of the first numerical methods studied for ODEs. For an initial value problem

$$
\frac{dy}{dx}=f(x,y), \qquad y(x_0)=y_0,
$$

Euler’s method updates the solution using

$$
y_{n+1}=y_n+h f(x_n,y_n).
$$

Here $h$ is the step size. The method moves from one point to the next using the slope at the current point.
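This update rule can be written as a short Python function. This is a minimal sketch, not a library API; the helper name `euler` and its signature are illustrative choices:

```python
def euler(f, x0, y0, h, n_steps):
    """Apply n_steps Euler steps to the IVP y' = f(x, y), y(x0) = y0."""
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(n_steps):
        y = y + h * f(x, y)  # move along the slope at the current point
        x = x + h
        xs.append(x)
        ys.append(y)
    return xs, ys

# One step for y' = -2y, y(0) = 1 with h = 0.4:
# y_1 = 1 + 0.4 * (-2 * 1) = 0.2.
xs, ys = euler(lambda x, y: -2.0 * y, 0.0, 1.0, 0.4, 1)
```

Each call to `f` uses only the current point, which is why Euler's method is called an explicit method.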

To understand stability, it helps to study the test equation

$$
\frac{dy}{dx}=\lambda y,
$$

where $\lambda$ is a constant. This equation is important because it models growth or decay. Its exact solution is

$$
y(x)=Ce^{\lambda x}.
$$

If $\lambda<0$, the true solution decays toward $0$. If a numerical method is stable, it should also reflect that decay rather than turning it into growth.

For Euler’s method, applying the update to $\frac{dy}{dx}=\lambda y$ gives

$$
y_{n+1}=(1+h\lambda)y_n.
$$

This shows that each step multiplies the current value by $1+h\lambda$.

If the exact solution is decaying, we want the numerical values to decay too. That happens when repeated multiplication does not blow up. The key condition is

$$
|1+h\lambda|<1.
$$

This is the basic stability condition for Euler’s method on the test equation.

What the stability condition means

Suppose $\lambda=-2$. Then Euler’s method gives

$$
y_{n+1}=(1-2h)y_n.
$$

If $h=0.4$, then

$$
1-2h=0.2,
$$

so the magnitude is less than $1$, and the values shrink toward $0$. That is stable.

If $h=1$, then

$$
1-2h=-1,
$$

so the values do not decay. They alternate sign and keep the same size. That is not a good approximation of the true decaying solution.

If $h=1.2$, then

$$
1-2h=-1.4,
$$

so the magnitude is greater than $1$, and the numerical values grow instead of decay. That is unstable.

This shows an important fact: a method can be mathematically correct and still fail if the step size is too large. Stability depends on both the method and the step size.
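The three step sizes above are easy to check numerically. The sketch below iterates the multiplication factor $1+h\lambda$ directly, which is exactly what Euler's method does on the test equation:

```python
# For y' = lam * y, each Euler step multiplies y_n by (1 + h * lam).
lam = -2.0
for h in (0.4, 1.0, 1.2):
    factor = 1 + h * lam
    y = 1.0
    for _ in range(10):
        y *= factor
    # |factor| < 1: decay (stable); |factor| = 1: no decay; |factor| > 1: growth.
    print(f"h = {h}: factor = {factor:+.1f}, |y_10| = {abs(y):.4g}")
```

Running this shows $|y_{10}|$ shrinking toward zero for $h=0.4$, stuck at $1$ for $h=1$, and growing for $h=1.2$, matching the three cases worked out above.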

Local error, global error, and error growth

Stability is closely related to error, but it is not the same thing as local accuracy.

  • Local error is the error made in one step.
  • Global error is the accumulated error after many steps.

A method might have a small local error, but if each small error gets amplified, the global error can become large.

For example, imagine trying to walk in a straight line in fog. A tiny mistake in each step may seem harmless, but if you keep drifting in the same direction, you can end up far away from where you should be. That is what unstable error growth can do.

In numerical ODEs, stability helps control how errors are passed from one step to the next. A stable method limits the growth of these errors, while an unstable method magnifies them.

For Euler’s method on $\frac{dy}{dx}=\lambda y$, the factor $1+h\lambda$ determines whether errors shrink or grow. If

$$
|1+h\lambda|<1,
$$

then errors tend to shrink. If

$$
|1+h\lambda|>1,
$$

then errors tend to grow.
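The same factor governs a perturbation: if $y_0$ is off by a small amount $\varepsilon$, that error is also multiplied by $1+h\lambda$ at every step. A quick sketch with $\lambda=-2$ (the perturbation size and step count are arbitrary illustration values):

```python
# Track how a small initial error eps propagates under Euler's method
# for y' = lam * y: the error is multiplied by (1 + h * lam) each step.
lam, eps = -2.0, 1e-6
for h in (0.4, 1.2):
    err = eps
    for _ in range(20):
        err *= 1 + h * lam
    print(f"h = {h}: |error after 20 steps| = {abs(err):.2e}")
```

With $h=0.4$ the error shrinks far below its initial size; with $h=1.2$ it grows by several orders of magnitude, even though both runs started from the same tiny perturbation.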

A deeper look at step size and stiffness

Some differential equations are especially sensitive to step size. These are often called stiff equations. In stiff problems, some components of the solution decay very quickly while others change slowly.

A simple example is still the test equation $\frac{dy}{dx}=\lambda y$ with a very negative $\lambda$. For instance, if $\lambda=-100$, the true solution decays very rapidly. Euler’s method then requires a very small $h$ to satisfy

$$
|1+h\lambda|<1.
$$

Since $\lambda$ is large in magnitude, the method may need a tiny step size to remain stable. If the step size is not small enough, the numerical solution can oscillate or blow up even though the exact solution is smooth and decaying.
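For $\lambda=-100$, the condition $|1-100h|<1$ pins the step size to $0<h<0.02$. A quick check (the two step sizes are chosen for illustration, one on each side of the bound):

```python
lam = -100.0
# Stability needs |1 + h * lam| < 1, i.e. 0 < h < 2/|lam| = 0.02 for lam = -100.
for h in (0.015, 0.03):
    y = 1.0
    for _ in range(50):
        y *= 1 + h * lam  # Euler step for y' = -100y
    print(f"h = {h}: |y_50| = {abs(y):.3e}")
```

The run with $h=0.015$ decays toward zero like the exact solution, while $h=0.03$ doubles in magnitude every step and blows up, despite the exact solution being a rapidly decaying exponential.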

This is one reason numerical analysts care about stability. A method is not just judged by how accurate it can be, but also by how practical it is for long simulations or difficult equations. In real applications like population models, chemical reactions, or electrical circuits, stability can decide whether a computation is useful or impossible.

How to reason about stability in practice

When you use a numerical method, these questions help you check stability:

  1. Does the exact solution grow, decay, or oscillate?
  2. Is the numerical method preserving that general behavior?
  3. Is the step size $h$ small enough to prevent error growth?
  4. Does changing $h$ slightly make the solution behave very differently?

A method is often called conditionally stable if it is stable only when $h$ satisfies certain restrictions. Euler’s method is a good example. It can work well for small enough $h$, but for some problems it becomes unstable if $h$ is too large.

This is why numerical methods are often tested on simple model problems before being used on difficult ones. The model problem helps reveal the method’s stability behavior.

Example: comparing stable and unstable behavior

Consider the differential equation

$$
\frac{dy}{dx}=-5y, \qquad y(0)=1.
$$

The exact solution is

$$
y(x)=e^{-5x}.
$$

This solution decays smoothly toward $0$.

Using Euler’s method,

$$
y_{n+1}=(1-5h)y_n.
$$

If $h=0.1$, then

$$
1-5h=0.5,
$$

so the values shrink by half each step. This is stable and matches the decaying trend.

If $h=0.5$, then

$$
1-5h=-1.5,
$$

so the numerical values alternate signs and grow in magnitude. That is unstable, and it does not resemble the exact solution at all.

This example shows that a too-large step size can completely hide the true behavior of the differential equation. Stability is therefore a protection against misleading results.
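Both runs can be reproduced in a few lines (a sketch; the choice of 8 steps is arbitrary):

```python
lam, y0 = -5.0, 1.0
for h in (0.1, 0.5):
    y = y0
    for _ in range(8):
        y *= 1 + h * lam  # Euler step for y' = -5y
    # h = 0.1 gives y_8 = 0.5**8 = 0.00390625;
    # h = 0.5 gives y_8 = (-1.5)**8 = 25.62890625.
    print(f"h = {h}: y_8 = {y:.6f}")
```

The stable run has already decayed to well under one percent of its starting value, while the unstable run has grown past 25 and keeps flipping sign.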

Conclusion

Stability is one of the most important ideas in numerical solutions of ODEs because it explains whether small errors stay controlled or get amplified 📘. In Euler’s method, stability can be studied using the model equation $\frac{dy}{dx}=\lambda y$, leading to the condition

$$
|1+h\lambda|<1.
$$

If this condition fails, the numerical solution may oscillate or explode, even when the exact solution is smooth and well-behaved. That is why step size, error growth, and the nature of the differential equation all matter together. Understanding stability helps you choose methods wisely and interpret numerical results with confidence.

Study Notes

  • Stability asks whether small errors remain controlled as the computation advances.
  • Euler’s method updates with $y_{n+1}=y_n+h f(x_n,y_n)$.
  • The test equation $\frac{dy}{dx}=\lambda y$ is used to study stability.
  • For Euler’s method on $\frac{dy}{dx}=\lambda y$, stability requires $|1+h\lambda|<1$.
  • A method may have small local error but still have large global error if it is unstable.
  • Step size $h$ strongly affects stability, especially for equations with rapid decay.
  • Stiff problems often require very small $h$ for explicit methods like Euler’s method.
  • Stability is about long-term behavior, not just one-step accuracy.
  • Stable methods help preserve the true trend of the differential equation.
  • In practice, always check whether the chosen step size makes the numerical solution reliable.
