Bisection Method: A Reliable Way to Find Roots
Welcome, students! In this lesson, you will learn one of the most important root-finding methods in Numerical Analysis: the Bisection method 🔍. The main idea is simple but powerful: if a continuous function changes sign on an interval, then there is at least one root inside that interval. The Bisection method repeatedly cuts the interval in half until the root is located as closely as needed.
Learning goals
By the end of this lesson, you should be able to:
- explain the key ideas and vocabulary behind the Bisection method,
- use the method to approximate a root step by step,
- understand why the method works,
- describe how the Bisection method fits into Root-Finding I and Numerical Analysis,
- compare its reliability with other root-finding ideas such as fixed-point iteration.
Think of it like searching for a lost item in a hallway 🧭. If you know it is somewhere between two doors, you can keep narrowing the search by checking the middle. That is exactly what the Bisection method does with roots of functions.
What is a root and why do we want one?
A root of a function is a value $x$ where $f(x)=0$. Roots matter in science, engineering, economics, and computer graphics. For example, solving $f(x)=0$ can help find when a rocket reaches a certain height, when a business breaks even, or where two curves meet.
In many real problems, equations are too complicated to solve exactly. That is where Numerical Analysis comes in. Instead of finding a perfect formula, we build a numerical approximation. The Bisection method is one of the safest ways to do this because it uses only function values and sign changes.
The key idea behind sign changes is this: if a function is continuous on an interval $[a,b]$ and $f(a)$ and $f(b)$ have opposite signs, then by the Intermediate Value Theorem there is at least one root in $[a,b]$. In symbols, if
$$f(a)f(b)<0,$$
then there exists at least one $c\in(a,b)$ such that
$$f(c)=0.$$
This theorem is the mathematical reason the Bisection method works.
The main idea of the Bisection method
The word bisection means “cut into two equal parts.” Suppose you have an interval $[a,b]$ and know that a root lies inside it. The method works like this:
- Find the midpoint
$$m=\frac{a+b}{2}.$$
- Evaluate the function at the midpoint, $f(m)$.
- Decide which half contains the root:
  - if $f(a)f(m)<0$, the root is in $[a,m]$,
  - if $f(m)f(b)<0$, the root is in $[m,b]$,
  - if $f(m)=0$, then $m$ is exactly the root.
- Replace the old interval with the half that contains the root.
- Repeat.
Each step halves the interval length. That means the root is being trapped more and more tightly between two endpoints. This makes the method very dependable, even though it may not be the fastest method available.
A big advantage is that the method does not need derivatives or advanced formulas. It only needs continuity and a sign change. That makes it especially useful when other methods fail or are difficult to apply.
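The steps above can be sketched in a few lines of Python. This is a minimal illustration, not a production routine: the function name `bisect` and the `tol` parameter are choices made for this sketch.

```python
def bisect(f, a, b, tol=1e-6):
    """Approximate a root of f in [a, b] by repeated halving.

    Assumes f is continuous on [a, b] and f(a), f(b) have opposite signs.
    """
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        m = (a + b) / 2          # midpoint of the current interval
        if f(m) == 0:            # lucky hit: m is exactly the root
            return m
        if f(a) * f(m) < 0:      # sign change in the left half
            b = m
        else:                    # sign change in the right half
            a = m
    return (a + b) / 2           # midpoint of the final interval
```

Calling `bisect(lambda x: x**3 - x - 2, 1, 2)` reproduces the worked example below, returning a value near $1.5214$.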
A worked example
Let us approximate a root of
$$f(x)=x^3-x-2$$
on the interval $[1,2]$.
First, check the endpoints:
$$f(1)=1-1-2=-2,$$
$$f(2)=8-2-2=4.$$
Since
$$f(1)f(2)<0,$$
there is at least one root in $[1,2]$.
Step 1
Find the midpoint:
$$m_1=\frac{1+2}{2}=1.5.$$
Evaluate:
$$f(1.5)=1.5^3-1.5-2=3.375-3.5=-0.125.$$
Because
$$f(1.5)f(2)<0,$$
the root lies in $[1.5,2]$.
Step 2
New midpoint:
$$m_2=\frac{1.5+2}{2}=1.75.$$
Evaluate:
$$f(1.75)=1.75^3-1.75-2=5.359375-3.75=1.609375.$$
Now
$$f(1.5)f(1.75)<0,$$
so the root lies in $[1.5,1.75]$.
Step 3
New midpoint:
$$m_3=\frac{1.5+1.75}{2}=1.625.$$
Evaluate:
$$f(1.625)=1.625^3-1.625-2\approx 4.291016-3.625=0.666016.$$
Because
$$f(1.5)f(1.625)<0,$$
the root lies in $[1.5,1.625]$.
If we continue, the interval keeps shrinking. After enough steps, we get an approximation close to the true root, which is about $1.521$.
This example shows the basic pattern: check the sign, cut the interval, keep the half with the sign change. The process is simple but very effective ✅.
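The three steps above can be checked directly with a short loop. This is a standalone sketch of the worked example; the variable names are my own.

```python
def f(x):
    return x**3 - x - 2

a, b = 1.0, 2.0
for step in range(1, 4):
    m = (a + b) / 2
    print(f"step {step}: m = {m}, f(m) = {f(m)}")
    if f(a) * f(m) < 0:   # sign change in [a, m]: keep the left half
        b = m
    else:                 # otherwise keep the right half [m, b]
        a = m
print(f"interval after 3 steps: [{a}, {b}]")
# prints: interval after 3 steps: [1.5, 1.625]
```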
Why the method is reliable
The Bisection method is known for guaranteed convergence under the right conditions. That means if $f$ is continuous on $[a,b]$ and $f(a)f(b)<0$, then the sequence of intervals produced by the method always shrinks around at least one root.
Why is this important? In many numerical methods, success depends on a good initial guess. But the Bisection method starts with an interval, not a guess. As long as the interval is valid, the root stays trapped inside.
The length of the interval after $n$ steps is
$$\frac{b-a}{2^n}.$$
So each step cuts the uncertainty in half. This means the method has a predictable error control. If $m_n$ is the midpoint after $n$ steps, then the root must be within half the interval length from that midpoint, so the error is at most
$$\frac{b-a}{2^{n+1}}.$$
This makes the method easy to use when accuracy requirements are known in advance.
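A quick numeric check of this bound on the worked example $[1,2]$ confirms that the actual midpoint error never exceeds $(b-a)/2^{n+1}$. The reference value $1.5213797$ for the true root is an approximation used only for comparison.

```python
def f(x):
    return x**3 - x - 2

root = 1.5213797            # approximate true root, for comparison only
a, b = 1.0, 2.0
length0 = b - a             # initial interval length (b - a)
for n in range(1, 6):
    m = (a + b) / 2
    if f(a) * f(m) < 0:     # keep the half containing the sign change
        b = m
    else:
        a = m
    midpoint = (a + b) / 2                 # midpoint after n steps
    bound = length0 / 2**(n + 1)           # guaranteed error bound
    print(f"n={n}: |midpoint - root| = {abs(midpoint - root):.6f}"
          f" <= bound = {bound:.6f}")
```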
Stopping rules and accuracy
In practice, you do not repeat forever. You stop when the approximation is good enough. Common stopping rules include:
- the interval length is smaller than a chosen tolerance,
- the function value at the midpoint, $|f(m_n)|$, is small enough,
- a fixed number of iterations has been completed.
A tolerance is the allowed error. For example, if you want the root accurate to within $0.001$, you may keep bisecting until the interval is short enough to guarantee that level of accuracy.
Suppose the initial interval is $[a,b]$ and you want the error below $\varepsilon$. Since the error after $n$ steps is at most
$$\frac{b-a}{2^{n+1}},$$
you can choose $n$ so that
$$\frac{b-a}{2^{n+1}}<\varepsilon.$$
This formula tells you how many bisections are needed before you begin.
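For example, with $[a,b]=[1,2]$ and $\varepsilon = 0.001$, solving the inequality for $n$ can be done with Python's `math` module (a small sketch of the calculation, assuming exact halving):

```python
import math

a, b = 1.0, 2.0
eps = 0.001
# (b - a) / 2**(n + 1) < eps  <=>  n + 1 > log2((b - a) / eps)
n = math.ceil(math.log2((b - a) / eps)) - 1
# Guard against the edge case where log2 lands exactly on an integer
while (b - a) / 2**(n + 1) >= eps:
    n += 1
print(n)  # number of bisection steps guaranteeing error below eps
```

Here $n = 9$, since $(2-1)/2^{10} = 1/1024 \approx 0.00098 < 0.001$.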
Strengths, limits, and comparison with other methods
The Bisection method has several important strengths:
- it is simple to understand and implement,
- it is very reliable when the starting interval is valid,
- it gives a clear error bound,
- it does not need derivatives.
But it also has limits:
- it can be slower than other methods,
- it only works when there is a sign change on a continuous interval,
- it finds one root in an interval, not all roots at once.
This connects to the broader topic of Root-Finding I. Other methods, such as fixed-point iteration, may converge faster in some cases, but they can also fail if the function is not set up well. The Bisection method is often used first because it is trustworthy. In some problems, analysts use Bisection to find a good starting point, then switch to a faster method later.
That is a useful strategy in Numerical Analysis: use a safe method first, then a faster method if needed 🚀.
Real-world interpretation
Imagine a student is helping calibrate a thermometer. A model might say the true temperature is the root of a function because the measured value minus the target value should equal zero. If the model gives a positive error at one point and a negative error at another, then a root is trapped between those points. The Bisection method can then narrow down the temperature value step by step.
Another example is engineering design. If a beam must satisfy a load equation, the correct setting may be found by solving $f(x)=0$. Bisection helps when engineers want a dependable numerical answer without making complicated assumptions.
Conclusion
The Bisection method is a foundational tool in Numerical Analysis and Root-Finding I. It uses a simple but rigorous idea: if a continuous function changes sign over an interval, then a root lies inside. By repeatedly halving the interval, the method homes in on that root with guaranteed convergence. Its main strength is reliability, and its main weakness is speed. Even so, it remains one of the most important starting points for understanding numerical root-finding.
Study Notes
- A root is a value $x$ such that $f(x)=0$.
- The Bisection method requires a continuous function on $[a,b]$ with
$$f(a)f(b)<0.$$
- The midpoint is
$$m=\frac{a+b}{2}.$$
- After each step, choose the half-interval where the sign change remains.
- The interval length after $n$ steps is
$$\frac{b-a}{2^n}.$$
- The error after $n$ steps is at most
$$\frac{b-a}{2^{n+1}}.$$
- The method is reliable because it is based on the Intermediate Value Theorem.
- It is slower than some other methods, but it is easy to use and has predictable accuracy.
- In Root-Finding I, Bisection is a fundamental starting method and often a backup for more advanced approaches.
