2. Root-Finding I

Fixed-point Iteration

Students, welcome to one of the most useful ideas in root-finding: fixed-point iteration. In numerical analysis, many important equations cannot be solved exactly with simple algebra, so we need methods that give better and better approximations. Fixed-point iteration turns a hard root-finding problem into a repeated substitution process that can be carried out by hand or by computer 💻.

Learning Goals

By the end of this lesson, students, you should be able to:

  • explain what a fixed point is and how fixed-point iteration works,
  • rewrite an equation into the form $x = g(x)$,
  • use the iteration formula $x_{n+1} = g(x_n)$,
  • recognize when the method is likely to converge,
  • connect fixed-point iteration to other root-finding methods in Numerical Analysis.

The big idea is simple: instead of trying to solve $f(x) = 0$ directly, we rewrite the same problem in the form $x = g(x)$ and then repeatedly apply $g$. If the process works, the numbers move closer and closer to a value that does not change when $g$ is applied. That special value is called a fixed point.

What Is a Fixed Point?

A number $p$ is called a fixed point of a function $g$ if it satisfies

$$g(p) = p.$$

This means that when $p$ is put into the function, the output is the same as the input. It stays fixed. 🎯

Fixed points are closely related to roots. If we want to solve

$$f(x) = 0,$$

we may rearrange the equation into

$$x = g(x),$$

which is equivalent to finding a fixed point of $g$.

For example, the equation

$$x^2 - 2 = 0$$

has solutions $x = \pm\sqrt{2}$. One way to rewrite it is

$$x = g(x) = \frac{1}{2}\left(x + \frac{2}{x}\right),$$

which is a form connected to the Babylonian method for square roots. A fixed point of this $g$ gives a solution of the original equation.
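
Indeed, substituting $\sqrt{2}$ confirms that it is a fixed point:

$$g(\sqrt{2}) = \frac{1}{2}\left(\sqrt{2} + \frac{2}{\sqrt{2}}\right) = \frac{1}{2}\left(\sqrt{2} + \sqrt{2}\right) = \sqrt{2}.$$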

This is why fixed-point iteration belongs in Root-Finding I: it is a method for locating the values of $x$ where a function becomes zero, but it does so indirectly.

The Fixed-Point Iteration Procedure

The method begins with an initial guess $x_0$. Then we generate a sequence by repeated substitution:

$$x_{n+1} = g(x_n), \quad n = 0,1,2,\dots$$

Each new value is produced from the previous one. If the sequence converges and $g$ is continuous, its limit is automatically a fixed point $p$, because taking limits on both sides of $x_{n+1} = g(x_n)$ gives $p = g(p)$.

Here is the basic procedure (a short code sketch follows the list):

  1. Choose a function $g$ such that solving $x = g(x)$ is equivalent to solving the original equation.
  2. Pick an initial guess $x_0$.
  3. Compute $x_1 = g(x_0)$, then $x_2 = g(x_1)$, and keep going.
  4. Stop when two consecutive values are very close, such as when

$$|x_{n+1} - x_n| < \text{tolerance}.$$
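
In code, the whole procedure is only a few lines. Here is a minimal Python sketch; the name `fixed_point` and the default tolerance and iteration cap are illustrative choices, not from any particular library:

```python
def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until consecutive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        # Step 4: stop when two consecutive iterates are within tolerance.
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")
```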

Suppose we want to solve

$$x = \cos x.$$

Then we can use

$$g(x) = \cos x,$$

so the iteration becomes

$$x_{n+1} = \cos(x_n).$$

Starting with $x_0 = 1$, the first few values are approximately

$$x_1 = \cos(1) \approx 0.5403,$$

$$x_2 = \cos(0.5403) \approx 0.8576,$$

$$x_3 = \cos(0.8576) \approx 0.6543.$$

The values oscillate above and below the limit but draw closer to a number near $0.7391$. This number is a fixed point because it satisfies $x = \cos x$.
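
A plain loop reproduces these numbers; this sketch should print the values above, up to rounding:

```python
import math

x = 1.0                            # initial guess x0
for n in range(1, 31):
    x = math.cos(x)                # x_{n+1} = cos(x_n)
    if n <= 3:
        print(f"x_{n} = {x:.4f}")  # 0.5403, 0.8576, 0.6543
print(f"x_30 = {x:.4f}")           # close to 0.7391
```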

Why the Choice of $g(x)$ Matters

A very important fact in fixed-point iteration is that the same equation can often be rewritten in more than one way, and some versions work much better than others. Students, this is where Numerical Analysis becomes practical: not only do we need the correct math, we also need a stable algorithm.

For instance, from

$$x^2 - 2 = 0,$$

we could rewrite the equation as

$$x = \frac{2}{x},$$

or as

$$x = \frac{1}{2}\left(x + \frac{2}{x}\right).$$

Both are equivalent to the original equation in the sense that their fixed points solve it, but they do not behave the same under iteration.

A good $g$ pulls the sequence toward the fixed point instead of letting it wander away. If $g$ is poorly chosen, the iteration may diverge, oscillate, or fail outright because of division by zero or other unstable behavior.

A useful example is the function

$$g(x) = \frac{2}{x}$$

for solving $x^2 = 2$. If we start at any $x_0 \neq \sqrt{2}$, the sequence alternates between $x_0$ and $2/x_0$ forever: a two-cycle that never converges. In contrast,

$$g(x) = \frac{1}{2}\left(x + \frac{2}{x}\right)$$

converges quickly from any positive starting guess.
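
A small experiment makes the contrast visible; the helper name `iterate` is just an illustrative choice:

```python
def iterate(g, x0, n):
    """Return x0 followed by the first n iterates of x_{k+1} = g(x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

# g(x) = 2/x cycles forever: 1.5, 1.333..., 1.5, 1.333..., ...
print(iterate(lambda x: 2 / x, 1.5, 4))

# g(x) = (x + 2/x)/2 homes in on sqrt(2) = 1.41421356... in a few steps
print(iterate(lambda x: 0.5 * (x + 2 / x), 1.5, 4))
```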

Convergence and the Derivative Test

The speed and success of fixed-point iteration depend on how steep or flat $g$ is near the fixed point. A key result says that if $p$ is a fixed point and

$$|g'(p)| < 1,$$

then the iteration is locally convergent near $p$ (assuming $g'$ is continuous there): every starting point sufficiently close to $p$ produces a sequence that converges to $p$.

If instead

$$|g'(p)| > 1,$$

then the iteration tends to move away from the fixed point. If

$$|g'(p)| = 1,$$

the test is inconclusive, and more analysis is needed.

Why does this make sense? Imagine zooming in near the fixed point. If $g$ changes the input only a little, then repeated application can settle down. But if $g$ stretches distances, the errors get larger instead of smaller.
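
This intuition can be made precise in one line. Writing the error as $e_n = x_n - p$ and expanding $g$ about $p$,

$$e_{n+1} = x_{n+1} - p = g(p + e_n) - g(p) \approx g'(p)\,e_n,$$

so each step multiplies the error by roughly $g'(p)$, and the error shrinks exactly when $|g'(p)| < 1$.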

There is also a practical interval version: if $g$ maps an interval $[a,b]$ into itself and there is a constant $k < 1$ such that

$$|g(x) - g(y)| \le k|x-y|$$

for all $x$ and $y$ in $[a,b]$, then $g$ has a unique fixed point in $[a,b]$ and fixed-point iteration converges to it from any starting point in the interval. This is the one-dimensional contraction mapping (Banach fixed-point) theorem.
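
For a concrete instance of these hypotheses, take $g(x) = \cos x$ on $[0,1]$. Since $\cos$ is decreasing there,

$$g([0,1]) = [\cos 1,\ \cos 0] \approx [0.5403,\ 1] \subset [0,1],$$

and the mean value theorem gives $|\cos x - \cos y| = |\sin \xi|\,|x - y| \le (\sin 1)\,|x - y|$ with $k = \sin 1 \approx 0.8415 < 1$. Both conditions hold, so the iteration converges from every starting point in $[0,1]$.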

Example with Error Behavior

Consider the iteration

$$x_{n+1} = \cos(x_n).$$

The fixed point is about $p \approx 0.7391$. Because

$$g'(x) = -\sin x,$$

we get

$$|g'(p)| = |\sin(p)| \approx 0.6736 < 1.$$

So the method converges near the fixed point. The errors get smaller over time, although not perfectly smoothly. This is an example of linear convergence, meaning the error decreases by roughly a constant factor each step.

In many textbooks, the error is written as

$$e_n = x_n - p.$$

When convergence is linear, the sequence of errors shrinks approximately like

$$|e_{n+1}| \approx C|e_n|$$

for some constant $C$ with $0 < C < 1$.
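
The constant $C$ can be observed numerically. The sketch below first computes a high-accuracy reference for $p$ by iterating many times, then prints the error ratios $|e_{n+1}|/|e_n|$, which should settle near $|g'(p)| \approx 0.6736$:

```python
import math

# High-accuracy reference value for the fixed point of cos.
p = 1.0
for _ in range(200):
    p = math.cos(p)

x = 1.0
e_prev = abs(x - p)
for n in range(1, 9):
    x = math.cos(x)
    e = abs(x - p)
    print(f"n={n}: |e_n| = {e:.2e}, ratio = {e / e_prev:.4f}")
    e_prev = e
```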

This is slower than methods such as Newton's method, which can converge quadratically under good conditions, but fixed-point iteration is simpler and often easier to implement.
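
In fact, Newton's method is itself a fixed-point iteration with a cleverly chosen $g$. For a simple root $p$ (where $f(p) = 0$ and $f'(p) \neq 0$),

$$g(x) = x - \frac{f(x)}{f'(x)}, \qquad g'(x) = \frac{f(x)\,f''(x)}{f'(x)^2}, \qquad g'(p) = 0,$$

and $g'(p) = 0$ means the linear error term vanishes entirely, which is exactly why Newton's method can converge quadratically.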

Relation to Bisection and Root-Finding I

Students, fixed-point iteration is part of the same root-finding family as the bisection method. The bisection method works by trapping a root inside an interval where the function changes sign, then repeatedly halving the interval. It is reliable, but it uses only the sign of $f(x)$.

Fixed-point iteration is different: it transforms the problem and then generates a sequence of approximations directly. It can be faster than bisection if the function is chosen well, but it comes with fewer guarantees.

So in the broader topic of Root-Finding I:

  • Bisection gives dependable bracketing.
  • Fixed-point iteration gives a flexible iterative approach.
  • Both aim to approximate roots of equations numerically.

In real applications, engineers and scientists may choose fixed-point iteration when the equation naturally suggests a rearrangement. For example, equations from physics, finance, or equilibrium problems are often written in iterative form because that matches how the system behaves step by step.

Practical Tips and Common Mistakes

Here are some important points to remember:

  • Not every rearrangement $x = g(x)$ will work well.
  • The starting guess $x_0$ matters a lot.
  • The method may converge to different fixed points depending on where you start.
  • If the sequence begins to oscillate wildly or move farther away, the chosen $g$ is probably not suitable.
  • Always check stopping criteria such as

$$|x_{n+1} - x_n| < \text{tolerance}$$

or

$$|f(x_{n+1})| < \text{tolerance}.$$

A common mistake is to assume that because $x = g(x)$ is mathematically equivalent to $f(x) = 0$, every version of $g$ is equally good. That is not true. Numerical methods depend on both correctness and stability.
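
To make both points concrete, here is a sketch contrasting a bad rearrangement of $x^2 - 2 = 0$ with a good one; the tolerance and iteration cap are arbitrary illustrative choices:

```python
# Bad: x = x**2 + x - 2 is algebraically equivalent to x**2 - 2 = 0,
# but g'(x) = 2x + 1 gives |g'(sqrt(2))| ≈ 3.83 > 1, so iterates
# near sqrt(2) are pushed away (the error roughly quadruples per step).
x = 1.414
for _ in range(5):
    x = x**2 + x - 2
print(x)  # already drifting visibly away from 1.41421...

# Good: x = (x + 2/x)/2, checked with both stopping rules from above.
f = lambda x: x**2 - 2
g = lambda x: 0.5 * (x + 2 / x)
tol = 1e-10
x = 1.414
for _ in range(100):
    x_new = g(x)
    if abs(x_new - x) < tol and abs(f(x_new)) < tol:
        break
    x = x_new
print(x_new)  # 1.4142135623..., essentially full precision
```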

Conclusion

Students, fixed-point iteration is a core idea in Numerical Analysis for solving equations numerically. It works by rewriting a root-finding problem as a fixed-point problem and then generating a sequence with

$$x_{n+1} = g(x_n).$$

If the function $g$ is chosen well and the starting guess is close enough to the fixed point, the sequence can converge to a solution. The method is powerful because it is simple and flexible, but success depends strongly on convergence behavior, especially the condition

$$|g'(p)| < 1.$$

Within Root-Finding I, it complements methods like bisection by offering another way to approximate roots with repeated computation.

Study Notes

  • A fixed point $p$ satisfies $g(p) = p$.
  • Root-finding can often be rewritten as solving $x = g(x)$.
  • Fixed-point iteration uses $x_{n+1} = g(x_n)$.
  • The initial guess $x_0$ can strongly affect the outcome.
  • Good choices of $g$ lead to convergence; poor choices may diverge.
  • A useful local test is $|g'(p)| < 1$ for convergence near the fixed point.
  • If $|g'(p)| > 1$, the iteration usually moves away from the fixed point.
  • Fixed-point iteration often has linear convergence.
  • It is simpler than Newton's method but not as fast in many cases.
  • In Root-Finding I, it is a key alternative to the bisection method.
  • Always monitor stopping rules like $|x_{n+1} - x_n|$ or $|f(x_n)|$.
  • Numerical analysis is not just about finding an answer, but about finding it reliably and efficiently.
