1. Numerical Error and Computation

Propagation of Error

Imagine measuring the length of a desk with a ruler that is only accurate to the nearest millimeter. Even if you read the ruler carefully, your answer still has a little uncertainty. Now imagine using that measurement to find the area of the desk or the speed of a moving object. The original small uncertainty can grow, shrink, or change shape as you do the calculations. That process is called propagation of error 📏

In this lesson, you will learn how errors move through calculations, why some formulas are more sensitive than others, and how to estimate the effect of small input mistakes on the final result. By the end, you should be able to explain the main ideas, apply simple error-propagation rules, and connect this topic to the broader study of numerical error and computation.

What propagation of error means

Propagation of error describes how uncertainty in input values affects the uncertainty in a result. In numerical analysis, the inputs may come from measurement, rounding, or approximations. If the inputs are not exact, then the computed output is usually not exact either.

Suppose a quantity is computed from several measured values. If each measured value has a small error, the final answer can be influenced by all of them. The key question is not only how big the input errors are, but also how the formula combines them.

For example, if a rectangle has length $L$ and width $W$, then its area is $A = LW$. If $L$ and $W$ each have small errors, then the area error depends on both measurements. A small change in either variable changes the product. This is a simple example of error propagation.

A helpful idea is that the output error depends on sensitivity. A formula is sensitive if a tiny change in input causes a noticeable change in output. Sensitivity is one reason numerical analysis cares so much about error propagation 🔍

Absolute error, relative error, and why they matter

To study propagation of error, it helps to know two basic error measures.

The absolute error in an approximation $\hat{x}$ of a true value $x$ is

$$|\hat{x} - x|$$

This tells you how far the approximation is from the true value in the original units.

The relative error is

$$\frac{|\hat{x} - x|}{|x|}$$

when $x \neq 0$. This compares the size of the error to the size of the true value.

Why does this matter for propagation? Because some formulas are better understood in absolute terms, while others are better understood in relative terms. For multiplication and division, relative error is often especially useful. For addition and subtraction, absolute error is often more natural.

For example, if a measurement of $100.0$ meters has an absolute error of $0.1$ meters, the relative error is

$$\frac{0.1}{100.0} = 0.001 = 0.1\%$$

But if a measurement of $2.0$ meters has the same absolute error of $0.1$ meters, the relative error is

$$\frac{0.1}{2.0} = 0.05 = 5\%$$

So the same absolute error can matter very differently depending on the size of the quantity. This idea is central to understanding how errors propagate.
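The two measurements above can be checked in a short Python sketch; the helper names `absolute_error` and `relative_error` are illustrative, not from any standard library:

```python
# Absolute and relative error of an approximation x_hat to a true value x.
def absolute_error(x_hat, x):
    return abs(x_hat - x)

def relative_error(x_hat, x):
    return abs(x_hat - x) / abs(x)  # assumes x != 0

# Same absolute error of 0.1 m, very different relative errors.
rel_large = relative_error(100.1, 100.0)  # about 0.001, i.e. 0.1%
rel_small = relative_error(2.1, 2.0)      # about 0.05, i.e. 5%
print(rel_large, rel_small)
```

Note that printed values may differ from the exact fractions in the last few digits, because the inputs themselves are stored with floating-point rounding error.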

How errors spread in common operations

Let’s look at the main patterns of propagation in basic calculations.

1. Addition and subtraction

If

$$z = x \pm y,$$

then the absolute error in $z$ is roughly related to the absolute errors in $x$ and $y$.

If $x$ has error at most $\Delta x$ and $y$ has error at most $\Delta y$, then a simple estimate is

$$\Delta z \leq \Delta x + \Delta y$$

This means errors can add up. Subtraction has a special risk called cancellation. If two nearly equal numbers are subtracted, the result may be small, and the relative error can become large.

Example: suppose

$$x = 1000.1, \quad y = 1000.0$$

Then

$$x - y = 0.1$$

If each value has a small measurement error, the difference may be much less reliable than either original value. This is important in numerical computation because subtracting nearly equal numbers can lose meaningful digits.
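A small sketch makes the cancellation risk concrete; the uncertainty of 0.05 assigned to each input is an assumed value chosen for illustration:

```python
# Subtracting nearly equal numbers: the absolute error bound stays the
# same, but the relative error of the difference can explode.
x, y = 1000.1, 1000.0
dx = dy = 0.05                  # assumed uncertainty in each input

diff = x - y                    # about 0.1
abs_err = dx + dy               # worst-case absolute error: 0.1
rel_err = abs_err / abs(diff)   # about 1.0, i.e. roughly 100%

rel_in = dx / abs(x)            # each input was accurate to about 0.005%
print(rel_in, rel_err)
```

Each input had a relative error of about 0.005%, yet the difference may be wrong by roughly 100% of its own size.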

2. Multiplication and division

If

$$z = xy,$$

then relative errors often combine approximately by addition for small errors.

A rough estimate is

$$\frac{\Delta z}{|z|} \approx \frac{\Delta x}{|x|} + \frac{\Delta y}{|y|}$$

Similarly, if

$$z = \frac{x}{y},$$

then

$$\frac{\Delta z}{|z|} \approx \frac{\Delta x}{|x|} + \frac{\Delta y}{|y|}$$

This says that multiplication and division usually preserve relative accuracy better than subtraction does.

Example: if a length is measured as $5.0$ cm with relative error $1\%$, and a width is measured as $3.0$ cm with relative error $2\%$, then the area $A = LW$ has relative error of about

$$1\% + 2\% = 3\%$$

So the product is still fairly accurate, but the uncertainty increases slightly.
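The 3% estimate can be verified numerically by perturbing both inputs in the worst-case direction, as in this sketch:

```python
# Worst-case check of the rule Δz/|z| ≈ Δx/|x| + Δy/|y| for A = L * W.
L, W = 5.0, 3.0
rel_L, rel_W = 0.01, 0.02       # 1% and 2% relative errors

A = L * W
A_worst = (L * (1 + rel_L)) * (W * (1 + rel_W))
rel_A = (A_worst - A) / A       # about 0.0302, close to the predicted 3%
print(rel_A)
```

The exact worst case is $(1.01)(1.02) - 1 = 0.0302$, slightly above the linear estimate of $0.03$; the small extra term is the product of the two relative errors, which the approximation ignores.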

3. Powers and roots

If

$$z = x^n,$$

then small relative errors in $x$ are amplified by about a factor of $|n|$:

$$\frac{\Delta z}{|z|} \approx |n| \frac{\Delta x}{|x|}$$

This means powers can magnify uncertainty.

Example: if $x$ has a relative error of $1\%$ and $z = x^3$, then the relative error in $z$ is about $3\%$.

For square roots,

$$z = \sqrt{x},$$

the relative error is about half the relative error in $x$:

$$\frac{\Delta z}{|z|} \approx \frac{1}{2} \frac{\Delta x}{|x|}$$

So roots can reduce relative error.
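Both amplification factors can be tested with a small perturbation experiment; the value $x = 10$ is chosen arbitrarily for illustration:

```python
# Actual relative change in f(x) when x is perturbed by rel_x (relative),
# compared against the rule Δz/|z| ≈ |n| Δx/|x| for z = x**n.
x, rel_x = 10.0, 0.01           # 1% input error

def rel_change(f, x, rel_x):
    """Relative change in f(x) when x grows by the fraction rel_x."""
    return abs(f(x * (1 + rel_x)) - f(x)) / abs(f(x))

rel_cube = rel_change(lambda v: v**3, x, rel_x)    # about 0.0303 ≈ 3 × 1%
rel_root = rel_change(lambda v: v**0.5, x, rel_x)  # about 0.005 ≈ ½ × 1%
print(rel_cube, rel_root)
```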

Using derivatives to estimate error propagation

A powerful numerical analysis tool is the derivative. If a quantity $y$ depends on $x$ through a function

$$y = f(x),$$

then a small change $\Delta x$ causes a change in $y$ of approximately

$$\Delta y \approx f'(x)\Delta x$$

This comes from linear approximation. The derivative tells us how sensitive the function is near a point.
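The linear approximation can be tried on a concrete function; $f(x) = \sin x$ below is just an illustrative choice, not tied to any example in the text:

```python
import math

# Derivative-based estimate Δy ≈ f'(x) Δx for f(x) = sin(x),
# compared with the actual change in the function value.
x, dx = 1.0, 0.01

dy_est = abs(math.cos(x)) * dx                # estimate from f'(x) = cos(x)
dy_act = abs(math.sin(x + dx) - math.sin(x))  # actual change
print(dy_est, dy_act)
```

The two values agree to several digits, and the agreement improves as $\Delta x$ shrinks, which is exactly what a linear approximation promises.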

If the input has error, then the output error can be estimated using the derivative. For several variables,

$$y = f(x_1, x_2, \dots, x_n),$$

a small error estimate is

$$\Delta y \approx \left|\frac{\partial f}{\partial x_1}\right|\Delta x_1 + \left|\frac{\partial f}{\partial x_2}\right|\Delta x_2 + \cdots + \left|\frac{\partial f}{\partial x_n}\right|\Delta x_n$$

This formula shows that each input contributes according to how sensitive the function is to that input.

Example: let

$$f(x,y) = x^2y.$$

Then the partial derivatives are

$$\frac{\partial f}{\partial x} = 2xy, \quad \frac{\partial f}{\partial y} = x^2$$

If $x$ and $y$ both have small errors, then the final error depends on both of these sensitivity factors. If $x$ is large, the term $2xy$ may be large, so small uncertainty in $x$ can have a stronger effect.
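This estimate is easy to check numerically; the inputs $x = 3$, $y = 2$ and the uncertainties of $0.01$ are assumed values for illustration:

```python
# Multivariable estimate Δf ≈ |∂f/∂x| Δx + |∂f/∂y| Δy for f(x, y) = x**2 * y.
x, y = 3.0, 2.0
dx = dy = 0.01                  # assumed input uncertainties

df_dx = 2 * x * y               # partial derivative in x: 12
df_dy = x**2                    # partial derivative in y: 9
df_est = abs(df_dx) * dx + abs(df_dy) * dy   # 0.21

# Compare with the actual change when both inputs shift upward.
df_act = abs((x + dx)**2 * (y + dy) - x**2 * y)
print(df_est, df_act)
```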

This derivative-based method is one of the main tools used in propagation of error because it works for many formulas, not just simple arithmetic.

A real-world example: measuring a circular garden 🌿

Suppose you measure the radius of a circular garden as

$$r = 4.0 \text{ m}$$

with an uncertainty of

$$\Delta r = 0.1 \text{ m}$$

The area is

$$A = \pi r^2$$

To estimate how error propagates, use the derivative:

$$\frac{dA}{dr} = 2\pi r$$

At $r = 4.0$ m,

$$\frac{dA}{dr} = 8\pi$$

So the area error is approximately

$$\Delta A \approx 8\pi(0.1) = 0.8\pi \approx 2.51 \text{ m}^2$$

The estimated area is

$$A = \pi(4.0)^2 = 16\pi \approx 50.27 \text{ m}^2$$

So the result is roughly

$$50.27 \pm 2.51 \text{ m}^2$$

This example shows that a small uncertainty in radius becomes a larger uncertainty in area because the area depends on $r^2$. That is propagation of error in action.
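The whole garden calculation fits in a few lines of Python:

```python
import math

# Garden example: A = pi * r**2, so ΔA ≈ (dA/dr) Δr = 2 * pi * r * Δr.
r, dr = 4.0, 0.1

A = math.pi * r**2           # about 50.27 m^2
dA = 2 * math.pi * r * dr    # about 2.51 m^2
print(f"A = {A:.2f} +/- {dA:.2f} m^2")
```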

Why propagation of error matters in computation

In numerical computation, every calculation can introduce error. Sources include rounding, truncation, and measured data. When many operations are combined, the final output may differ from the true value more than expected.

Propagation of error helps answer important questions such as:

  • Which part of the formula causes the biggest uncertainty?
  • Does the method amplify small mistakes?
  • Is the algorithm stable enough for reliable computation?

This topic is closely connected to floating-point arithmetic. Computers store many numbers using limited precision, so even simple operations can introduce tiny rounding errors. Over many steps, these errors can propagate through the computation. That is why numerical analysis studies not just the answer, but how the answer is produced.

Propagation of error also helps explain why two algorithms for the same problem can have different accuracy. One algorithm may keep errors small, while another may magnify them. Good numerical methods are designed to control this growth.

Conclusion

Propagation of error is the study of how uncertainty in input values affects the uncertainty in a computed result. The main idea is that errors do not stay isolated; they move through formulas according to the operations involved and the sensitivity of the function. Absolute error is useful for addition and subtraction, relative error is useful for multiplication and division, and derivatives help estimate the effect of small changes in general functions.

This topic fits directly into numerical error and computation because computers and measurements rarely give exact values. Understanding propagation of error helps you judge whether a result is reliable, identify risky steps in a calculation, and choose better methods when accuracy matters. It is a core part of numerical analysis and a practical tool in science, engineering, and everyday problem-solving ✅

Study Notes

  • Propagation of error describes how input uncertainty affects the final result.
  • Absolute error is $|\hat{x} - x|$.
  • Relative error is $\frac{|\hat{x} - x|}{|x|}$ for $x \neq 0$.
  • In addition and subtraction, absolute errors tend to combine.
  • In multiplication and division, relative errors tend to combine.
  • Subtracting nearly equal numbers can cause cancellation and loss of accuracy.
  • For $y = f(x)$, a small change gives $\Delta y \approx f'(x)\Delta x$.
  • For several variables, partial derivatives estimate how each input error contributes.
  • Powers can amplify relative error; roots can reduce it.
  • Propagation of error is important in floating-point computation because rounding errors can spread through many steps.
  • The topic helps evaluate the reliability and stability of numerical methods.
