11. Numerical Solutions of ODEs I

Local and Global Error in Numerical Solutions of ODEs I

Students, imagine trying to predict where a moving car will be after a few seconds using only a few snapshots of its position 🚗. If the snapshots are not taken often enough, your prediction can drift away from the real path. That idea is at the heart of local and global error in numerical methods for ordinary differential equations, or ODEs.

In this lesson, you will learn how numerical methods like Euler’s method approximate solutions, why mistakes happen at each step, and how those mistakes add up over time. By the end, you should be able to explain the difference between local and global error, use the terminology correctly, and connect error ideas to the broader study of numerical solutions of ODEs.

What are we trying to approximate?

A differential equation describes how a quantity changes. A typical first-order ODE looks like this:

$$y' = f(t,y), \quad y(t_0)=y_0$$

Here, $y(t)$ is the exact solution we want, but in many real problems we cannot solve for $y(t)$ in a simple formula. Instead, we build a numerical approximation at discrete points:

$$t_0, t_1, t_2, \dots, t_n$$

with step size

$$h = t_{n+1} - t_n.$$

A method such as Euler’s method uses the slope $f(t_n,y_n)$ to step forward:

$$y_{n+1} = y_n + h f(t_n,y_n).$$

This gives an approximate value $y_n$ for the exact value $y(t_n)$. The difference between these two is where error enters the picture.

The main idea is simple: numerical methods do not give the exact solution; they give an approximation. The quality of that approximation depends on the method, the step size, and how errors behave over many steps.
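The update rule above can be written out as a short program. The following is a minimal Python sketch (the function name `euler` and the test problem $y'=y$, $y(0)=1$ are chosen here purely for illustration):

```python
def euler(f, t0, y0, h, n_steps):
    """Advance y' = f(t, y) from (t0, y0) using n_steps Euler steps of size h."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)  # follow the slope at the left endpoint of the step
        t = t + h
    return t, y

# Test problem: y' = y, y(0) = 1, whose exact solution is y(t) = e^t.
# Ten steps of size 0.1 approximate y(1) = e ~ 2.71828.
t_end, y_end = euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```

Comparing `y_end` with the exact value $e$ already shows a visible gap; the rest of this lesson explains where that gap comes from.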

Local error: the mistake made in one step

Local error measures the error introduced in a single step of the method, assuming the previous value was exact. This is an important idea because it isolates how well the method itself performs on one tiny move forward.

For a one-step method, the local truncation error is the error made when going from $t_n$ to $t_{n+1}$ using the exact value $y(t_n)$ instead of the approximate value $y_n$.

For Euler’s method, if we start from the exact solution at $t_n$, the method predicts

$$y(t_n) + h f(t_n, y(t_n)).$$

The actual exact value at the next point is $y(t_{n+1})$. So the local truncation error is

$$\tau_{n+1} = y(t_{n+1}) - \bigl(y(t_n) + h f(t_n, y(t_n))\bigr).$$

For Euler’s method, this local error is typically proportional to $h^2$ for each step. That means if the step size is cut in half, the local error becomes about one-fourth as large. This happens because Euler’s method uses a straight-line approximation to follow a curve, and curved behavior is not captured perfectly over one step.
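This $h^2$ behavior can be checked directly. For the illustrative problem $y'=y$ with $y(0)=1$, one Euler step from the exact value predicts $1+h$, while the exact value after the step is $e^h$, so $\tau(h) = e^h - (1+h)$. A small Python sketch (assuming this test problem):

```python
import math

def local_error(h):
    """One-step truncation error of Euler's method for y' = y starting at y(0) = 1."""
    # Euler predicts 1 + h; the exact solution reaches e^h.
    return math.exp(h) - (1.0 + h)

# If the local error is O(h^2), halving h should divide it by roughly 4.
ratio = local_error(0.1) / local_error(0.05)
```

The computed ratio comes out close to $4$, matching the claim that halving the step size makes the local error about one-fourth as large.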

Example of local error in action

Suppose an object moves according to an ODE where the true solution bends downward. Euler’s method uses the slope at the beginning of the interval and extends that line across the whole step. If the curve bends away from that line, the estimate at the next step will miss the exact value a little. That miss is the local error. 📉

Think of walking on a curving path while only looking at the direction of the path at your feet. A short step is usually safe, but a longer step can take you far from the path.

Global error: the total accumulated difference

Global error is the difference between the exact solution and the numerical approximation at a given point after many steps:

$$e_n = y(t_n) - y_n.$$

Unlike local error, global error includes everything that happened before: all the small errors made at earlier steps, plus how those errors influenced later computations.

This makes global error more important for understanding whether a numerical method gives a reliable answer over a long interval. Even if each step is fairly accurate, many small mistakes can accumulate.

For Euler’s method, the global error is typically proportional to $h$ over a fixed interval. That means Euler’s method is a first-order method globally. If the step size is halved, the overall error is expected to shrink by about a factor of $2$, not $4$.
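The factor-of-$2$ claim can also be tested numerically. The sketch below (again using the illustrative problem $y'=y$, $y(0)=1$ on $[0,1]$, whose exact answer at $t=1$ is $e$) compares the global error for two step sizes:

```python
import math

def euler_final(h):
    """Euler approximation of y(1) for y' = y, y(0) = 1, using steps of size h."""
    y = 1.0
    for _ in range(round(1.0 / h)):
        y += h * y
    return y

err_h  = math.e - euler_final(0.1)   # global error with step size h
err_h2 = math.e - euler_final(0.05)  # global error with step size h/2
ratio = err_h / err_h2               # expected to be close to 2, not 4
```

The ratio lands near $2$, confirming first-order global behavior even though each individual step has an $O(h^2)$ error.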

Why local and global error are different

This difference is easy to miss, students, but it is very important.

  • Local error is the error from one step, assuming no previous mistakes.
  • Global error is the error in the final computed solution after all steps.

A method can have a small local error and still build up a noticeable global error if many steps are taken. This is why numerical analysis always studies both.

How errors accumulate in Euler’s method

Let us connect the two ideas directly. Euler’s method updates values using

$$y_{n+1} = y_n + h f(t_n,y_n).$$

Suppose the current approximation $y_n$ already contains some global error. Then the next slope $f(t_n,y_n)$ is computed from a slightly wrong value, which can affect the next step too. So one error can influence the next one, and the next one after that.

This accumulation is one reason the global error is not simply the sum of the local errors. Earlier errors can be amplified, damped, or shifted by the behavior of the differential equation itself.

A simple intuition

Imagine checking your location on a map while driving, but each time your GPS is off by a tiny amount. If each new turn is based on the GPS position, then the next decision can be affected by the previous mistake. After several turns, your route may drift away from the correct road. That is the same basic idea as global error in a numerical method.

Step size, accuracy, and error control

The step size $h$ has a major effect on both local and global error.

  • Smaller $h$ usually gives better accuracy.
  • Smaller $h$ means more steps and more computation.
  • Larger $h$ is faster, but it may produce larger errors.

For Euler’s method, reducing $h$ improves the result, but only linearly in terms of global error. This is one reason higher-order methods are often preferred in practice.

A useful numerical analysis idea is that error behavior helps you choose an appropriate step size. If a problem requires high accuracy, a tiny step size may be needed. If only a rough prediction is necessary, a larger step size may be enough.

Real-world example

Suppose engineers are modeling the temperature of a cooling object using an ODE. If the temperature is checked every hour instead of every minute, the method may miss important changes. The local error in each hour is larger, and over a full day the global error may become too large for reliable prediction. Choosing $h$ is therefore part of balancing accuracy and efficiency.

Error, convergence, and reliability

A method is said to converge if its numerical solution approaches the exact solution as $h \to 0$.

For a convergent method, both local and global error should shrink as the step size gets smaller. But they shrink at different rates.

For Euler’s method:

  • local error per step is typically $O(h^2)$,
  • global error over a fixed interval is typically $O(h)$.

These rates matter because they tell us how much improvement to expect when we refine the step size.
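One practical use of these rates is estimating the order of a method from experiments. If the global error behaves like $C h^p$, then comparing errors at $h$ and $h/2$ gives $p \approx \log_2\bigl(e(h)/e(h/2)\bigr)$. A short sketch (using the illustrative problem $y'=y$, $y(0)=1$ on $[0,1]$):

```python
import math

def euler_err(h):
    """Global error at t = 1 of Euler's method applied to y' = y, y(0) = 1."""
    y = 1.0
    for _ in range(round(1.0 / h)):
        y += h * y
    return abs(math.e - y)

# If error(h) ~ C * h^p, then p ~ log2(error(h) / error(h/2)).
p = math.log2(euler_err(0.02) / euler_err(0.01))
```

The estimated $p$ comes out close to $1$, the expected global order of Euler's method.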

If a method is stable and consistent, then reducing the step size should make the numerical approximation more trustworthy. That is why local and global error are not just abstract terms; they are tools for judging whether a method will actually work well on a real problem.

Local and global error in the bigger picture

Local and global error are part of the broader study of numerical solutions of ODEs. They connect directly to other important ideas such as stability, consistency, and convergence.

  • Consistency means the numerical method matches the differential equation more closely as $h \to 0$.
  • Stability means small disturbances do not grow uncontrollably.
  • Convergence means the approximate solution gets closer to the exact solution as the step size decreases.

These ideas work together. A method can have a small local error, but if it is unstable, the global error may still become large. So local error tells us how good one step is, while stability helps explain whether those step-by-step errors remain under control over time.

Conclusion

Students, local and global error are core ideas in numerical analysis because they explain how approximation works and why accuracy changes over time. Local error measures the mistake made in one step of a method, assuming the previous value is exact. Global error measures the total difference between the exact and computed solutions after many steps. In Euler’s method, local error is typically $O(h^2)$ while global error is typically $O(h)$, which shows that many small stepwise errors can build up into a larger overall difference.

Understanding these errors helps you choose step sizes, compare methods, and judge whether a numerical solution is reliable. This is a key part of solving ODEs numerically and connects directly to stability and convergence in the rest of Numerical Analysis. ✅

Study Notes

  • A differential equation gives a rule for how a quantity changes, and a numerical method approximates its solution at discrete points.
  • Euler’s method uses

$$y_{n+1} = y_n + h f(t_n,y_n).$$

  • Local error is the error made in one step, assuming the starting value for that step is exact.
  • Global error is the difference

$$e_n = y(t_n) - y_n$$

between the exact solution and the computed approximation after many steps.

  • For Euler’s method, local error is typically $O(h^2)$ and global error is typically $O(h)$.
  • Smaller step size $h$ usually reduces error, but increases the number of computations.
  • Local error and global error are different because global error includes the accumulation of earlier mistakes.
  • Error analysis helps determine whether a numerical method is accurate enough for a real problem.
  • Local and global error are closely connected to consistency, stability, and convergence.
  • A numerically stable method helps prevent small errors from growing too quickly.
  • The main takeaway: one small error may seem harmless, but many steps can make it matter a lot.
