7. Numerical Integration II

Adaptive Quadrature Overview

In numerical integration, one of the central goals is to approximate a definite integral like $\int_a^b f(x)\,dx$ accurately without using too many function evaluations. Adaptive quadrature is a smart strategy for doing this efficiently. Instead of using the same step size everywhere, it automatically adjusts the size of the subintervals based on how difficult the function is to approximate in each region. This is especially useful when the graph of $f(x)$ changes rapidly in some places and slowly in others 📈.

Introduction: Why Adaptive Quadrature Matters

The main idea behind adaptive quadrature is simple: use more effort where the function is hard to approximate and less effort where it is smooth. For example, imagine estimating the area under a road that is mostly flat but has a steep hill in one small section. A fixed-step method might waste many measurements on the flat parts and still miss the steep hill. Adaptive quadrature focuses resources where they matter most.

By the end of this lesson, you should be able to:

  • explain the key ideas and vocabulary of adaptive quadrature,
  • describe how error estimates guide interval splitting,
  • connect adaptive quadrature to composite numerical integration methods,
  • recognize why adaptive methods are often more efficient than uniform-step methods,
  • use examples to understand how adaptive quadrature works in practice.

Adaptive quadrature belongs to the larger topic of Numerical Integration II because it builds on the composite rules you already know, such as the trapezoidal rule and Simpson’s rule. The difference is that adaptive methods do not treat every subinterval equally. They react to the behavior of the function.

Core Idea: Let the Error Decide Where to Refine

A numerical integration method produces an approximation $A$ to the exact integral $I = \int_a^b f(x)\,dx$. The error is the difference $E = I - A$. Since the exact value $I$ is usually unknown, adaptive quadrature uses an estimate of the error instead of the true error.

The most common workflow is:

  1. Approximate the integral on $[a,b]$ using a rule such as Simpson’s rule.
  2. Split the interval into two halves, $[a,m]$ and $[m,b]$, where $m = \frac{a+b}{2}$.
  3. Approximate the integral on each half.
  4. Compare the one-interval estimate with the two-half estimate.
  5. If the difference is too large, subdivide again.

This process continues until the estimated error is below a chosen tolerance. The tolerance is a target accuracy set by the user, often written as $\text{tol}$. If the estimated error is less than $\text{tol}$, the algorithm accepts that part of the interval.
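
The five steps above can be written as a short recursive routine. The sketch below is illustrative, not the only way to organize the algorithm: the names (`simpson`, `adaptive_simpson`) are choices of this sketch, and the factor 15 in the stopping test comes from Simpson's $O(h^5)$ local error, discussed later in this lesson.

```python
def simpson(f, a, b):
    """One Simpson's-rule estimate on [a, b] using f(a), f(m), f(b)."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def adaptive_simpson(f, a, b, tol):
    """Recursively subdivide [a, b] until the local error estimate is below tol."""
    m = (a + b) / 2
    whole = simpson(f, a, b)                      # step 1: one-interval estimate
    halves = simpson(f, a, m) + simpson(f, m, b)  # steps 2-3: two-half estimate
    # Step 4: |halves - whole| serves as the local error indicator.
    if abs(halves - whole) < 15 * tol:
        return halves
    # Step 5: split the tolerance between the halves and recurse.
    return (adaptive_simpson(f, a, m, tol / 2) +
            adaptive_simpson(f, m, b, tol / 2))

# Example: the integral of 1/x on [1, 2] is ln 2 ≈ 0.693147.
import math
approx = adaptive_simpson(lambda x: 1 / x, 1.0, 2.0, 1e-8)
```

Halving the tolerance at each split keeps the accepted local errors summing to roughly the overall target. Production codes also reuse function values between recursion levels to avoid the repeated evaluations this sketch performs.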

A key advantage is efficiency ✨. If $f(x)$ is smooth on one part of the domain, adaptive quadrature may accept a large interval there. If $f(x)$ bends sharply, oscillates, or has a peak, the method divides that region into smaller pieces.

Error Estimation and the Simpson’s Rule Idea

Adaptive quadrature is often built from Simpson’s rule because Simpson’s rule is accurate for smooth functions and gives a useful built-in error estimate. On an interval $[a,b]$, Simpson’s rule uses the values at $a$, $m$, and $b$, where $m = \frac{a+b}{2}$.

A standard Simpson approximation is

$$
S(a,b) = \frac{b-a}{6}\Big(f(a) + 4f\big(\tfrac{a+b}{2}\big) + f(b)\Big).
$$

Now suppose we compute $S(a,b)$ and also compute Simpson’s rule on the two halves, $S(a,m)$ and $S(m,b)$. Then the difference between $S(a,b)$ and $S(a,m) + S(m,b)$ gives information about the local error. If the difference is large, the interval is probably not smooth enough for a single Simpson estimate.

A common practical estimate is based on the fact that Simpson’s rule has error proportional to $h^5$ on a subinterval of width $h$, so halving the interval changes the error in a predictable way. In practice, the algorithm compares the coarse estimate and the refined estimate and uses that difference to decide whether to stop or subdivide again.
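
In symbols, write $S_1 = S(a,b)$ for the coarse estimate and $S_2 = S(a,m) + S(m,b)$ for the refined one. Assuming $f^{(4)}$ varies slowly on $[a,b]$, so the error constant is roughly the same on both halves,

$$
I - S_1 \approx C h^5, \qquad I - S_2 \approx \frac{C h^5}{16},
$$

since each half has width $h/2$ and contributes an error of about $C (h/2)^5 = C h^5/32$. Subtracting the two relations gives $S_2 - S_1 \approx 15\,(I - S_2)$, so

$$
I - S_2 \approx \frac{S_2 - S_1}{15}.
$$

This is why many implementations accept an interval when $|S_2 - S_1| < 15\,\text{tol}$.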

For example, if $S(a,b)$ and $S(a,m)+S(m,b)$ are very close, then the function is likely smooth enough on $[a,b]$ and the method can accept the answer. If they are far apart, the function may be changing too quickly and the interval should be split further.
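
The "far apart" case is easy to see numerically. The test function below, $f(x) = x^4$ on $[0,2]$, is a hypothetical example (exact integral $32/5 = 6.4$); its coarse and refined Simpson estimates disagree noticeably, signaling that the interval should be split:

```python
def simpson(f, a, b):
    """Simpson's rule on [a, b]: (b-a)/6 * (f(a) + 4 f(m) + f(b))."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

f = lambda x: x ** 4        # exact integral on [0, 2] is 32/5 = 6.4

s1 = simpson(f, 0, 2)                       # coarse: 20/3 ≈ 6.6667
s2 = simpson(f, 0, 1) + simpson(f, 1, 2)    # refined: 77/12 ≈ 6.4167
diff = abs(s2 - s1)                         # 0.25: large, so split [0, 2]

# Richardson-style correction using the h^5 error behavior; it happens
# to be exact here because f'''' is constant for a degree-4 polynomial.
corrected = s2 + (s2 - s1) / 15             # 6.4
```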

How Adaptive Quadrature Works Step by Step

Let’s describe the process in a clear way using the interval $[a,b]$.

Step 1: Start with a whole interval

The algorithm begins with the entire domain $[a,b]$. It calculates a numerical approximation using a base rule, often Simpson’s rule.

Step 2: Estimate the local error

The algorithm also computes the approximation on the left half and right half. Then it compares the results. The difference acts as an error indicator.

Step 3: Compare with tolerance

If the estimated error on $[a,b]$ is less than the tolerance $\text{tol}$, the interval is accepted.

Step 4: Split difficult intervals

If the error estimate is too large, the interval is divided into $[a,m]$ and $[m,b]$. Each new interval is checked separately.

Step 5: Repeat recursively

This process continues recursively until every accepted subinterval meets the accuracy goal.

This recursive strategy is what makes adaptive quadrature powerful. The method keeps “zooming in” only where the function demands more attention 🔍.

A real-world analogy is editing a photo. You do not need to sharpen every pixel equally. You focus on the parts where the image is blurry. Adaptive quadrature does something similar with interval size.

Example: Smooth Region and Sharp Change

Suppose you want to approximate the area under a function that is mostly smooth but has a narrow spike near one point. A fixed composite rule with equal subintervals might need many intervals everywhere just to capture the spike. That wastes work on the smooth parts.

Adaptive quadrature would likely behave differently:

  • on smooth regions, it would accept larger subintervals,
  • near the spike, it would split the interval many times,
  • the final partition would be nonuniform, with smaller intervals near the difficult region.

This behavior is especially useful for functions with local features such as:

  • sharp peaks,
  • rapid oscillations,
  • corners or kinks,
  • boundary layers in applied problems,
  • regions where the function changes curvature quickly.

For a smooth polynomial-like function, adaptive quadrature may finish quickly because the error estimate becomes small almost immediately. For a function with a logarithmic singularity or a highly oscillatory pattern, the method may need many subdivisions near the problematic region.
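
To watch the nonuniform partition emerge, one can record every accepted subinterval. The peaked integrand below (a constant background plus a narrow bump at $x = 0.5$) is an illustrative choice, not taken from the lesson:

```python
def simpson(f, a, b):
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def adaptive(f, a, b, tol, accepted):
    """Adaptive Simpson that also records each accepted subinterval."""
    m = (a + b) / 2
    whole = simpson(f, a, b)
    halves = simpson(f, a, m) + simpson(f, m, b)
    if abs(halves - whole) < 15 * tol:
        accepted.append((a, b))
        return halves
    return (adaptive(f, a, m, tol / 2, accepted) +
            adaptive(f, m, b, tol / 2, accepted))

# Smooth background plus a narrow spike near x = 0.5.
f = lambda x: 1.0 + 1.0 / ((x - 0.5) ** 2 + 1e-3)
parts = []
adaptive(f, 0.0, 1.0, 1e-6, parts)

# The smallest accepted intervals cluster near the spike; the largest
# sit in the smooth regions, so the final mesh is nonuniform.
widths = sorted(b - a for a, b in parts)
```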

Relationship to Composite Numerical Integration

Adaptive quadrature is closely related to composite numerical integration, which means applying a rule on many subintervals and adding the results. In composite trapezoidal or composite Simpson’s rule, the subintervals are usually equally spaced. The number of intervals is chosen in advance.

Adaptive quadrature differs in two major ways:

  1. The subintervals are not all the same length.
  2. The number of subintervals is not fixed in advance.

Instead of deciding beforehand to use, for example, $n = 100$ subintervals, the adaptive method starts small and grows only where necessary. That is why it often achieves a given accuracy with fewer function evaluations than a uniform grid would need.

This makes adaptive quadrature a practical example of algorithmic efficiency in Numerical Analysis. It uses information from the function itself to guide computation.

Important Terms and Concepts

Here are the key ideas you should know:

  • Quadrature: a numerical method for approximating an integral.
  • Adaptive: changing automatically based on the behavior of the function.
  • Subinterval: a smaller interval within $[a,b]$.
  • Tolerance: the allowed error level, often written as $\text{tol}$.
  • Local error estimate: an estimate of the error on one subinterval.
  • Recursion: repeating the same process on smaller and smaller intervals.
  • Refinement: splitting an interval to increase accuracy.
  • Nonuniform mesh: a partition of $[a,b]$ with unequal subinterval lengths.

Understanding these terms helps you interpret adaptive algorithms and compare them with fixed-step methods.

Limitations and Practical Considerations

Adaptive quadrature is powerful, but it is not magical. It still depends on good error estimates, and some functions are challenging for any method. For example, if a function is highly oscillatory, has discontinuities, or has singular behavior, the algorithm may need many subdivisions and many evaluations.

There is also a tradeoff between accuracy and cost. A smaller (stricter) tolerance usually means more subdivisions and more computation. A larger tolerance gives faster results but less accuracy. Choosing a tolerance is therefore part of the modeling decision.

Another practical issue is stopping criteria. The algorithm must decide when the estimated error is small enough. In software, there are often limits on the maximum recursion depth or the minimum allowed subinterval width to prevent endless subdivision.
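
A depth limit can be sketched as below; the cap of 30 and the step-function example are illustrative choices. At a jump discontinuity, halving the interval roughly halves the local error estimate, but the per-interval tolerance is also halved at each split, so the stopping test may never succeed without such a cap:

```python
def simpson(f, a, b):
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def adaptive_capped(f, a, b, tol, depth=0, max_depth=30):
    """Adaptive Simpson with a depth limit to prevent endless subdivision."""
    m = (a + b) / 2
    whole = simpson(f, a, b)
    halves = simpson(f, a, m) + simpson(f, m, b)
    # Accept if the estimate is good enough OR we have subdivided too far.
    if abs(halves - whole) < 15 * tol or depth >= max_depth:
        return halves
    return (adaptive_capped(f, a, m, tol / 2, depth + 1, max_depth) +
            adaptive_capped(f, m, b, tol / 2, depth + 1, max_depth))

# A step function: near the jump at x = 1/3 the error estimate shrinks
# at the same rate as the tolerance, so the depth cap stops the recursion.
step = lambda x: 0.0 if x < 1 / 3 else 1.0
result = adaptive_capped(step, 0.0, 1.0, 1e-10)   # exact integral is 2/3
```

By the time the cap triggers, the offending interval is so narrow (width $2^{-30}$ here) that its contribution to the total error is negligible.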

Conclusion

Adaptive quadrature is a flexible and efficient way to approximate definite integrals. It improves on standard composite methods by changing subinterval sizes according to the behavior of the function. When the function is smooth, it uses large intervals. When the function is difficult, it refines the partition until the estimated error is acceptable.

In Numerical Integration II, this topic connects error bounds, composite rules, and algorithm design. The big idea is that numerical methods should not waste work where it is unnecessary. Adaptive quadrature uses error estimates to concentrate effort where the integral needs it most. That makes it one of the most practical tools in numerical integration ✅.

Study Notes

  • Adaptive quadrature approximates $\int_a^b f(x)\,dx$ by adjusting subinterval sizes automatically.
  • It uses an error estimate to decide whether to accept an interval or split it.
  • A common base rule is Simpson’s rule: $S(a,b) = \frac{b-a}{6}\Big(f(a) + 4f\big(\tfrac{a+b}{2}\big) + f(b)\Big)$.
  • The method is recursive: difficult intervals are subdivided again and again until the estimated error is below $\text{tol}$.
  • Adaptive quadrature is efficient because it spends more work where $f(x)$ is hard to approximate and less where it is smooth.
  • It is closely related to composite numerical methods, but it uses a nonuniform partition instead of equal subintervals.
  • Typical difficult features include sharp peaks, oscillations, kinks, and singular behavior.
  • In Numerical Analysis, adaptive quadrature shows how error estimation guides computation and improves efficiency.
