Approximation Error in Numerical Methods I
Students, in engineering computation we often want answers to problems that are too hard to solve exactly. That is where numerical methods come in. Instead of finding a perfect symbolic answer, we build an approximation that is close enough for practical use. But how close is close enough? That question leads to approximation error.
In this lesson, you will learn how approximation error helps engineers judge the quality of a numerical answer, compare methods, and decide whether a result is reliable. By the end, you should be able to explain what approximation error means, calculate common forms of error, and connect it to root-finding, interpolation, and curve fitting.
What is approximation error?
Approximation error is the difference between the true value and an approximate value. If the true value is $x_{true}$ and the approximation is $x_{approx}$, then the error is often written as
$$e = x_{true} - x_{approx}$$
The sign tells you whether the approximation is too large or too small. If you only care about size and not direction, you use the absolute error:
$$E_a = \lvert x_{true} - x_{approx} \rvert$$
A related idea is relative error, which compares the error to the size of the true value:
$$E_r = \frac{\lvert x_{true} - x_{approx} \rvert}{\lvert x_{true} \rvert}$$
Relative error is useful because an error of $1$ may be huge if the true value is $2$, but tiny if the true value is $1000$.
Real-world example
Suppose a sensor measures a temperature as $98.6^\circ\text{C}$, but the true value is $99.1^\circ\text{C}$. Then the absolute error is
$$E_a = \lvert 99.1 - 98.6 \rvert = 0.5^\circ\text{C}$$
The relative error is
$$E_r = \frac{0.5}{99.1} \approx 0.00505$$
That is about $0.505\%$. In engineering, that kind of percentage is often more helpful than the raw difference.
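The error formulas above take only a few lines of code. This sketch reproduces the sensor example; the variable names are illustrative, not part of any standard library:

```python
# Absolute and relative error for the temperature-sensor example.
x_true = 99.1    # true temperature in degrees C
x_approx = 98.6  # sensor reading in degrees C

E_a = abs(x_true - x_approx)   # absolute error
E_r = E_a / abs(x_true)        # relative error
percent_error = E_r * 100      # percent error

print(round(E_a, 4))   # 0.5
print(round(E_r, 5))   # 0.00505
```

Note that because of floating-point representation, the raw difference may carry a tiny rounding residue, which is why the printout rounds the results.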
Why approximation error matters in engineering
Engineering problems often involve complicated equations, large data sets, or functions that cannot be solved exactly by hand. Numerical methods give workable answers, but every method introduces some error.
For example:
- In root-finding, we may estimate where a function crosses zero.
- In interpolation, we estimate values between known data points.
- In curve fitting, we choose a model that best matches observed data.
Approximation error helps us answer important questions such as:
- Is the estimate accurate enough for design?
- Does a newer method improve the result?
- Is the error caused by the method itself or by limited data?
An engineer designing a bridge, aircraft part, or control system needs reliable numbers. A small error in one place can become a larger problem later, especially when results are used in other calculations.
Types of error used in numerical methods
There are several common ways to describe error in Numerical Methods I.
1. Absolute error
This is the direct difference in size between the true and approximate values:
$$E_a = \lvert x_{true} - x_{approx} \rvert$$
It is easy to understand and useful when the units matter.
2. Relative error
This compares the absolute error to the true value:
$$E_r = \frac{\lvert x_{true} - x_{approx} \rvert}{\lvert x_{true} \rvert}$$
It is usually reported as a decimal or percentage:
$$\text{Percent error} = E_r \times 100\%$$
3. Approximate percent relative error
In iterative methods, the true value is often unknown. Then we compare one estimate to the next one. If the current estimate is $x_{i}$ and the previous estimate is $x_{i-1}$, the approximate percent relative error is
$$\varepsilon_a = \left\lvert \frac{x_i - x_{i-1}}{x_i} \right\rvert \times 100\%$$
This is especially important in methods like the bisection method or Newton's method, where a solution is improved step by step.
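The formula for $\varepsilon_a$ can be wrapped in a small helper. This is a minimal sketch (the function name is our own, not a library routine), checked here on two successive estimates $x_{i-1} = 1.5$ and $x_i = 1.25$:

```python
def approx_percent_relative_error(x_current, x_previous):
    """Approximate percent relative error between successive iterates."""
    return abs((x_current - x_previous) / x_current) * 100

# Two successive estimates of a root: 1.5, then 1.25.
eps_a = approx_percent_relative_error(1.25, 1.5)
print(round(eps_a, 2))   # 20.0
```

In an iterative method this value is recomputed after every step and compared against a tolerance to decide whether to continue.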
Error in root-finding methods
Root-finding methods estimate solutions to equations of the form
$$f(x) = 0$$
Since the exact root may not be easy to find, we use a sequence of approximations.
Example: bisection method
The bisection method repeatedly halves an interval that contains a root. Each step gives a better approximation, but the exact root is usually not reached in a finite number of steps.
Suppose the root is known to lie between $1$ and $2$. After one bisection step, the midpoint is
$$x_1 = \frac{1 + 2}{2} = 1.5$$
If the next midpoint is $x_2 = 1.25$, then the approximate percent relative error is
$$\varepsilon_a = \left\lvert \frac{1.25 - 1.5}{1.25} \right\rvert \times 100\% = 20\%$$
That looks large, but it is expected early in the process. As iterations continue, the error usually gets smaller.
Stopping criteria
In practice, root-finding methods stop when one of these happens:
- the approximate error is small enough,
- the function value $\lvert f(x_i) \rvert$ is small enough,
- or a maximum number of iterations is reached.
This shows how approximation error is not just a formula. It is also a decision tool that tells us when to stop computing.
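Putting the bisection steps and the stopping criteria together gives a short implementation. This is a sketch under simplifying assumptions: it assumes $f$ changes sign on the starting interval, and it uses only the approximate-error and iteration-count stopping tests:

```python
def bisect(f, lo, hi, tol_percent=0.01, max_iter=100):
    """Bisection with an approximate-percent-relative-error stopping test.

    Assumes f(lo) and f(hi) have opposite signs.
    """
    x_prev = lo
    mid = (lo + hi) / 2
    for _ in range(max_iter):
        mid = (lo + hi) / 2
        # approximate percent relative error between successive midpoints
        eps_a = abs((mid - x_prev) / mid) * 100 if mid != 0 else float("inf")
        if eps_a < tol_percent:
            break
        if f(lo) * f(mid) < 0:
            hi = mid   # root lies in the left half
        else:
            lo = mid   # root lies in the right half
        x_prev = mid
    return mid

# Example: root of f(x) = x^2 - 2 on [1, 2]; the true root is sqrt(2).
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
print(round(root, 4))
```

With the default tolerance of $0.01\%$, the returned value agrees with $\sqrt{2} \approx 1.4142$ to roughly four digits.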
Error in interpolation and curve fitting
Interpolation and curve fitting use data to estimate unknown values, but they do it in different ways.
Interpolation error
Interpolation constructs a function that passes through known data points. If the true function is $f(x)$ and the interpolating approximation is $P(x)$, then the interpolation error at a point $x$ is
$$E(x) = f(x) - P(x)$$
If you use a polynomial to estimate values between sample points, the estimate may be very good near the data points but less accurate farther away.
For example, if a manufacturer records the voltage at specific temperatures, interpolation can estimate the voltage at a temperature between the recorded values. The error depends on how smooth the original function is and how far the point is from the sample points.
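To make the interpolation error $E(x) = f(x) - P(x)$ concrete, here is a small sketch with a function we know exactly, $f(x) = x^2$, and a straight-line interpolant through two sample points (the names `f` and `P` mirror the notation above):

```python
def f(x):
    """The 'true' function, known exactly for this demonstration."""
    return x * x

x0, x1 = 1.0, 2.0   # known sample points

def P(x):
    """Straight line through (x0, f(x0)) and (x1, f(x1))."""
    return f(x0) + (f(x1) - f(x0)) * (x - x0) / (x1 - x0)

x = 1.5
E = f(x) - P(x)   # interpolation error E(x) = f(x) - P(x)
print(E)          # -0.25: the chord overshoots this convex function between the nodes
```

At the sample points themselves the error is exactly zero, which matches the idea that interpolation passes through the known data.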
Curve fitting error
Curve fitting is different because the model does not have to pass through every data point. Instead, it finds the curve that best matches the data. A common measure is the residual:
$$r_i = y_i - \hat{y}_i$$
where $y_i$ is the observed value and $\hat{y}_i$ is the fitted value.
A smaller residual means the model fits that point better. The total fitting error is often measured by the sum of squared residuals:
$$S = \sum_{i=1}^{n} r_i^2$$
Squaring keeps positive and negative errors from canceling out and gives more weight to larger mistakes.
Example
If observed data values are $y_1 = 10$, $y_2 = 12$, and $y_3 = 13$, while the fitted values are $\hat{y}_1 = 9.5$, $\hat{y}_2 = 12.4$, and $\hat{y}_3 = 12.8$, then the residuals are
$$r_1 = 0.5, \quad r_2 = -0.4, \quad r_3 = 0.2$$
The sum of squared residuals is
$$S = 0.5^2 + (-0.4)^2 + 0.2^2 = 0.25 + 0.16 + 0.04 = 0.45$$
This value helps compare different fitted models.
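The residual calculation above can be written directly from the data in the example:

```python
# Observed data and fitted values from the worked example.
y_obs = [10.0, 12.0, 13.0]
y_fit = [9.5, 12.4, 12.8]

# Residuals r_i = y_i - y_hat_i
residuals = [y - yhat for y, yhat in zip(y_obs, y_fit)]

# Sum of squared residuals
S = sum(r * r for r in residuals)
print(round(S, 2))   # 0.45
```

The same pattern scales to any number of data points, which is why the sum of squared residuals is the standard objective in least-squares curve fitting.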
Sources of approximation error
Approximation error can come from several places.
1. Method error
This happens because the numerical method is an approximation. For example, a low-degree polynomial may not match a very curved function well.
2. Truncation error
Many methods use an infinite mathematical process but stop after a finite number of steps. The missing part creates truncation error. For instance, if a method uses only the first few terms of a series, the omitted terms cause error.
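A classic illustration of truncation error is cutting off the Taylor series for $e^x$ after a few terms and comparing against the exact value. This sketch is our own demonstration, not a prescribed course algorithm:

```python
import math

def exp_taylor(x, n_terms):
    """Partial sum of the Taylor series for e^x: sum of x^k / k! for k < n_terms."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)   # next term: x^(k+1) / (k+1)!
    return total

x = 1.0
for n in (2, 4, 8):
    truncation_error = abs(math.exp(x) - exp_taylor(x, n))
    print(n, truncation_error)   # the error shrinks as more terms are kept
```

Each extra term shrinks the omitted tail, so the truncation error falls rapidly as `n_terms` grows.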
3. Round-off error
Computers store numbers with limited precision. Very small errors in rounding can build up during long calculations. Even if each rounding is tiny, the total effect may matter in sensitive computations.
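Round-off is easy to see in any language with binary floating point. The decimal value $0.1$ has no exact binary representation, so adding it ten times does not give exactly $1.0$:

```python
# Round-off error accumulating over repeated additions of 0.1,
# which is not exactly representable in binary floating point.
s = 0.0
for _ in range(10):
    s += 0.1

print(s == 1.0)      # False: the accumulated sum is not exactly 1.0
print(abs(s - 1.0))  # a tiny but nonzero round-off residue
```

Each individual addition is off by less than one part in $10^{15}$, yet the discrepancy is measurable, which is exactly the accumulation effect described above.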
These three ideas are closely related, but they are not the same. Approximation error often refers to the difference between the exact mathematical quantity and the approximate numerical result, while truncation and round-off error are common causes of that difference.
How to reduce approximation error
Engineers use several strategies to control error:
- Use more accurate methods when needed.
- Increase the number of iterations in iterative methods.
- Use more data points for interpolation or fitting when appropriate.
- Choose a model that matches the behavior of the data.
- Check whether round-off error is affecting the result.
However, more computation is not always better. A more complex model may fit the data too closely and perform poorly on new data. The goal is not zero error, but acceptable error for the task.
Connection to the rest of Numerical Methods I
Approximation error is a central idea across Numerical Methods I because it explains the quality of every computed result.
- In root-finding, error tells us how close we are to a solution of $f(x)=0$.
- In interpolation, error shows how well the estimated curve matches the true behavior.
- In curve fitting, error measures how closely a model matches observed data.
Without error analysis, a numerical answer is just a number. With error analysis, that number becomes meaningful and trustworthy. This is why approximation error is not a side topic; it is part of the logic of engineering computation itself.
Conclusion
Students, approximation error is the difference between a true value and an approximate one. It can be measured using absolute error, relative error, or approximate percent relative error. In Numerical Methods I, it helps us judge root-finding algorithms, interpolation formulas, and curve-fitting models. It also shows why numerical answers must be checked carefully before they are used in engineering decisions. When you understand approximation error, you are better prepared to choose methods, stop iterations wisely, and explain how reliable a computed result really is.
Study Notes
- Approximation error compares a true value and an estimated value.
- Absolute error is $E_a = \lvert x_{true} - x_{approx} \rvert$.
- Relative error is $E_r = \frac{\lvert x_{true} - x_{approx} \rvert}{\lvert x_{true} \rvert}$.
- Approximate percent relative error is $\varepsilon_a = \left\lvert \frac{x_i - x_{i-1}}{x_i} \right\rvert \times 100\%$.
- Root-finding methods often stop when the approximate error is small enough.
- Interpolation error is the difference between the true function $f(x)$ and the interpolant $P(x)$.
- Curve fitting uses residuals $r_i = y_i - \hat{y}_i$ to measure mismatch.
- The sum of squared residuals is $S = \sum_{i=1}^{n} r_i^2$.
- Main sources of error include method error, truncation error, and round-off error.
- Approximation error is a key idea across root-finding, interpolation, and curve fitting.
