7. Numerical Integration II

Error Bounds

When students need to estimate the area under a curve, numerical integration gives a fast and practical answer 📈. But one big question always comes up: how close is the approximation to the true value? That is where error bounds matter. In this lesson, students will learn how to measure and control the difference between an exact integral and a numerical estimate, how these ideas connect to common rules like the trapezoidal rule and Simpson’s rule, and why error bounds are a central part of Numerical Integration II.

Why error bounds matter

A numerical integration method gives an approximation to an integral such as $\int_a^b f(x)\,dx$. The exact value may be hard or impossible to compute directly, especially when $f(x)$ comes from data, engineering models, or complicated formulas. An error bound tells us how large the approximation error could be.

The error is usually written as

$$
E = \text{true value} - \text{approximation}.
$$

In practice, we often care about the size of the error, written as $|E|$. If a method says the error is at most $0.01$, then the approximation is guaranteed to be within that distance of the true answer.

This matters in real life. For example, if a bridge design calculation uses numerical integration to estimate stress over a beam, the engineer must know whether the approximation is accurate enough. If a science lab estimates total charge from a graph of current versus time, an error bound helps show how reliable the result is. In short, error bounds turn a numerical answer into a trustworthy one ✅.

Main ideas and key terminology

To understand error bounds, students should know a few important terms:

  • Integral: the exact total area or accumulation represented by $\int_a^b f(x)\,dx$.
  • Approximation: a numerical estimate of that integral.
  • Error: the difference between the true value and the approximation.
  • Error bound: a guaranteed upper limit for the size of the error.
  • Step size: the width of each subinterval, often written as $h$.
  • Subintervals: smaller pieces of the interval $[a,b]$ used in a composite method.
  • Smoothness: how many derivatives the function has and how well behaved they are.

The key idea is that many error formulas depend on derivatives of the function. This is because curvature and rapid change make approximations less accurate. A function that is nearly straight is easier to integrate numerically than one with sharp bends.

For example, if a function is smooth on $[a,b]$ and its second derivative is bounded, then the trapezoidal rule has a predictable error pattern. If a function is smoother still and its fourth derivative is bounded, Simpson’s rule can be even more accurate.

Error bounds for the trapezoidal rule

The trapezoidal rule replaces the curve by straight line segments. On one subinterval $[x_i,x_{i+1}]$, it approximates the area using a trapezoid. For a single interval $[a,b]$, the rule is

$$
T = \frac{b-a}{2}\bigl(f(a)+f(b)\bigr).
$$

For the composite trapezoidal rule, students split $[a,b]$ into $n$ equal parts, each of width

$$
h = \frac{b-a}{n}.
$$

Then the approximation becomes

$$
T_n = \frac{h}{2}\left[f(x_0)+2f(x_1)+2f(x_2)+\cdots+2f(x_{n-1})+f(x_n)\right].
$$
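This formula translates directly into a short function. The following is a minimal sketch (the function name `trapezoid` is my own choice, not part of the lesson):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on [a, b] with n equal subintervals."""
    h = (b - a) / n
    # Endpoints get weight 1, interior points get weight 2.
    total = f(a) + f(b)
    for i in range(1, n):
        total += 2 * f(a + i * h)
    return (h / 2) * total

# Example: f(x) = x^2 on [0, 1] with n = 4 gives 0.34375,
# compared with the exact value 1/3.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 4)
```

Notice that the code sums the weighted function values first and multiplies by $h/2$ once at the end, exactly as the formula groups its terms.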

A standard error bound for the composite trapezoidal rule is

$$
|E_T| \le \frac{(b-a)}{12}h^2\max_{a\le x\le b}|f''(x)|.
$$

This formula tells several important stories at once:

  1. The error gets smaller when $h$ gets smaller.
  2. The error depends on the size of the second derivative.
  3. The method is usually more accurate when the function bends less.

A helpful way to read the formula is this: if students halve the step size $h$, the error bound is reduced by about a factor of $4$, because of the $h^2$ term. That is a big improvement 👍.

Example: trapezoidal error idea

Suppose $f(x)=x^2$ on $[0,1]$. Since $f''(x)=2$ everywhere, the maximum of $|f''(x)|$ is $2$. If students use the composite trapezoidal rule with $n=4$, then $h=\frac{1}{4}$. The bound becomes

$$
|E_T| \le \frac{(1)}{12}\left(\frac{1}{4}\right)^2(2)=\frac{1}{96}.
$$

So the error is guaranteed to be at most $\frac{1}{96}$. This does not mean the actual error is exactly that much; it only gives a safe upper limit.
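The example can be checked numerically. A small sketch (the `trapezoid` helper is my own, not from the lesson):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on [a, b] with n equal subintervals."""
    h = (b - a) / n
    return (h / 2) * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n)))

approx = trapezoid(lambda x: x * x, 0.0, 1.0, 4)  # 0.34375
error = abs(approx - 1.0 / 3.0)                   # actual error
bound = (1.0 / 12.0) * (1.0 / 4.0) ** 2 * 2.0     # 1/96, the guaranteed limit
# For this particular f, the actual error equals the bound, because
# f''(x) = 2 is constant; for most functions it is strictly smaller.
```

Running this confirms that the actual error never exceeds $\frac{1}{96}$.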

Error bounds for Simpson’s rule

Simpson’s rule is based on fitting parabolas to the function instead of straight lines. It often gives much better accuracy than the trapezoidal rule for smooth functions.

For one panel, Simpson’s rule uses three points. The composite Simpson’s rule requires an even number of subintervals, so let $n$ be even and $h=\frac{b-a}{n}$. The approximation is

$$
S_n = \frac{h}{3}\left[f(x_0)+4f(x_1)+2f(x_2)+4f(x_3)+\cdots+4f(x_{n-1})+f(x_n)\right].
$$
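In code, the alternating $4,2,4,\ldots$ weights come from whether the node index is odd or even. A minimal sketch (the name `simpson` is my own choice):

```python
def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b]; n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Odd-index interior nodes get weight 4, even-index nodes weight 2.
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return (h / 3) * total

# Simpson's rule is exact for cubics: f(x) = x^3 on [0, 1] gives 1/4
# already with n = 2.
approx = simpson(lambda x: x ** 3, 0.0, 1.0, 2)
```

The cubic example hints at why the error bound below involves the fourth derivative: polynomials up to degree three are integrated with no error at all.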

The error bound is

$$
|E_S| \le \frac{(b-a)}{180}h^4\max_{a\le x\le b}|f^{(4)}(x)|.
$$

This is a very important result because the power $h^4$ shows that Simpson’s rule can improve very quickly as the partition gets finer. If $h$ is cut in half, the error bound drops by about a factor of $16$.

Example: Simpson’s error idea

Let $f(x)=\sin x$ on $[0,\pi]$. Since the fourth derivative of $\sin x$ is still $\sin x$, we have

$$
\max_{0\le x\le \pi}|f^{(4)}(x)|=1.
$$

If students use $n=4$ subintervals, then $h=\frac{\pi}{4}$. The error bound is

$$
|E_S| \le \frac{\pi}{180}\left(\frac{\pi}{4}\right)^4.
$$

This expression gives a guaranteed limit on the error. In many cases, the actual error will be much smaller than the bound, but the bound is still valuable because it provides a mathematical guarantee.
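This can be verified directly, since $\int_0^\pi \sin x\,dx = 2$ exactly. A sketch (the `simpson` helper is my own, not from the lesson):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b) + sum((4 if i % 2 == 1 else 2) * f(a + i * h)
                              for i in range(1, n))
    return (h / 3) * total

approx = simpson(math.sin, 0.0, math.pi, 4)         # about 2.0046
actual = abs(approx - 2.0)                           # about 0.0046
bound = (math.pi / 180.0) * (math.pi / 4.0) ** 4     # about 0.0066
# The actual error sits comfortably below the guaranteed bound.
```

Here the actual error is smaller than the bound but of the same order, which is typical for a smooth integrand.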

How to apply error bounds in practice

Using an error bound is a procedure, not just a formula. Students can think of it as a checklist:

  1. Choose the numerical rule: trapezoidal rule, Simpson’s rule, or another method.
  2. Identify the needed derivative: second derivative for trapezoidal, fourth derivative for Simpson’s.
  3. Find or estimate the maximum derivative size on $[a,b]$.
  4. Plug values into the error bound formula.
  5. Interpret the result as a guaranteed maximum error.

Sometimes the derivative is easy to compute exactly. Other times, students may only have a rough estimate or a graph. In that case, the error bound may be conservative, meaning it is larger than the actual error but still safe.

A useful numerical analysis habit is to compare methods. If the trapezoidal rule and Simpson’s rule give very similar values, that is a sign the approximation may already be accurate. But similarity alone is not a proof. The error bound is the formal guarantee.

Error bounds also help students decide whether the step size is good enough. If the target tolerance is $10^{-6}$ and the current bound is $10^{-4}$, then the mesh must be refined. If the bound is already below $10^{-6}$, then the approximation meets the accuracy requirement.
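The tolerance check can be turned around: solving the trapezoidal bound $|E_T| \le \frac{(b-a)^3 M}{12 n^2}$ for $n$ tells us in advance how fine the mesh must be. A sketch under that assumption (the function name and the derivative bound `max_f2` are my own labels):

```python
import math

def trapezoid_n_for_tolerance(a, b, max_f2, tol):
    """Smallest n whose trapezoidal error bound (b-a)^3 * M / (12 n^2)
    is at most tol, where max_f2 bounds |f''| on [a, b]."""
    n = math.sqrt((b - a) ** 3 * max_f2 / (12.0 * tol))
    return max(1, math.ceil(n))

# Example: f(x) = x^2 on [0, 1] with |f''| <= 2 and tolerance 1e-6
# requires n = 409 subintervals.
n = trapezoid_n_for_tolerance(0.0, 1.0, 2.0, 1e-6)
```

Because $n$ is rounded up, the resulting bound is guaranteed to be at or below the target tolerance.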

Connection to composite methods and adaptive quadrature

Error bounds are closely tied to the broader topic of Numerical Integration II because composite methods and adaptive quadrature are both built around controlling error.

In a composite method, the interval $[a,b]$ is divided into many smaller pieces. Smaller $h$ usually means smaller error. Error formulas show exactly how the error should decrease as the subdivision gets finer.

In adaptive quadrature, the interval is split more in places where the function is difficult to approximate. For example, if $f(x)$ changes rapidly near one endpoint but is smooth elsewhere, an adaptive method concentrates more subintervals in the difficult region. The purpose is to keep the error below a chosen tolerance while avoiding unnecessary work.

This is a major reason error bounds are important. They guide the algorithm. Instead of computing with a fixed number of subintervals and hoping for the best, students can use error estimates to decide where more refinement is needed.

Conclusion

Error bounds are the bridge between approximation and reliability in numerical integration. They tell students not only what a numerical method computes, but also how trustworthy that computation is. For the trapezoidal rule, the error is controlled by $h^2$ and the second derivative. For Simpson’s rule, the error is controlled by $h^4$ and the fourth derivative. These bounds explain why finer partitions and smoother functions usually lead to better results.

In Numerical Integration II, error bounds connect composite methods to adaptive quadrature and show how numerical analysis turns approximations into dependable tools. By understanding error bounds, students can choose methods wisely, judge accuracy, and make informed decisions in science, engineering, and mathematics 🌟.

Study Notes

  • Error bounds give a guaranteed upper limit for the difference between the true integral and a numerical approximation.
  • The error is $E=\text{true value}-\text{approximation}$, and we usually study $|E|$.
  • The composite trapezoidal rule has error bound $|E_T| \le \frac{(b-a)}{12}h^2\max_{a\le x\le b}|f''(x)|$.
  • The composite Simpson’s rule has error bound $|E_S| \le \frac{(b-a)}{180}h^4\max_{a\le x\le b}|f^{(4)}(x)|$.
  • Smaller step size $h$ usually means smaller error; for trapezoidal rule the bound behaves like $h^2$, and for Simpson’s rule like $h^4$.
  • Derivative information matters because curvature and higher-order shape affect accuracy.
  • Error bounds are essential in composite methods because they show how accuracy changes when the interval is split into smaller pieces.
  • Adaptive quadrature uses error estimates to decide where to refine the partition.
  • A bound is a guarantee, not necessarily the actual error.
  • In real applications, error bounds help determine whether a numerical answer is accurate enough for use.
