Welcome to Numerical Methods!

Hello! You've reached one of the most practical and fascinating chapters in Pure Mathematics: Numerical Methods. Don't worry if this topic feels a bit abstract at first; it's essentially the mathematics that computers and calculators use behind the scenes!

What will we learn? We will learn techniques to find approximate solutions (roots) to equations that are impossible or extremely difficult to solve using standard algebraic methods. For example, try solving \(e^x + 2x^2 = 5\). You can't isolate \(x\) algebraically, so we need numerical methods!

Key takeaway: When algebra fails, Numerical Methods provide accurate, systematic approximations.


Section 1: Locating Roots and the Change of Sign Principle

What is a Root?

A root of an equation \(f(x) = 0\) is a value of \(x\) where the graph of \(y = f(x)\) crosses the \(x\)-axis. At such a point, \(y=0\).

1.1 The Change of Sign Principle

This is the most fundamental concept in numerical root finding. It’s simple and intuitive:

If we have a continuous function \(f(x)\) (meaning the graph has no jumps or breaks), and we find two values, \(a\) and \(b\), such that:
1. \(f(a)\) is positive.
2. \(f(b)\) is negative.
...then, because the function must pass from positive territory to negative territory (or vice versa), there must be at least one root between \(a\) and \(b\).

Step-by-Step: Using the Change of Sign
  1. Define \(f(x)\): Ensure your equation is set up as \(f(x) = 0\).
  2. Choose the Interval: Pick two values, \(a\) and \(b\), to form the interval \([a, b]\).
  3. Calculate the Signs: Substitute \(a\) and \(b\) into \(f(x)\).
  4. Verify: If \(f(a)\) and \(f(b)\) have opposite signs, a root is guaranteed within that interval.

Example: To show a root of \(f(x) = x^3 - 4x + 1 = 0\) lies between \(x=0\) and \(x=1\):
\(f(0) = (0)^3 - 4(0) + 1 = 1\) (Positive)
\(f(1) = (1)^3 - 4(1) + 1 = 1 - 4 + 1 = -2\) (Negative)
Since the sign changes, a root exists between 0 and 1.
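The two substitutions above can be checked in a few lines of Python (a minimal sketch to accompany the worked example):

```python
# Sign-change check for f(x) = x^3 - 4x + 1 on the interval [0, 1].
def f(x):
    return x**3 - 4*x + 1

a, b = 0, 1
print(f(a), f(b))        # 1 -2
print(f(a) * f(b) < 0)   # True: opposite signs, so a root lies in (0, 1)
```

Multiplying the two values is a convenient test: the product is negative exactly when the signs differ.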

Note: The Change of Sign Principle leads directly to the Interval Bisection Method, which repeatedly halves the interval, keeping whichever half contains the sign change, until the root is trapped to the required accuracy. It's slow but reliable!
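Interval bisection can be sketched as a short function (an illustrative implementation; the tolerance and variable names are my own choices, and the example reuses \(f(x) = x^3 - 4x + 1\) from above):

```python
def bisect(f, a, b, tol=1e-6):
    """Repeatedly halve [a, b] until it is narrower than tol.
    Assumes f is continuous and f(a), f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("no sign change on [a, b]")
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # the sign change (and so the root) is in the left half
            b = m
        else:                  # otherwise the root is in the right half
            a = m
    return (a + b) / 2

root = bisect(lambda x: x**3 - 4*x + 1, 0, 1)
print(round(root, 4))  # 0.2541
```

Each pass halves the width of the interval, so about 20 passes shrink an interval of width 1 below \(10^{-6}\): reliable, but slow compared with the methods in Sections 2 and 3.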

Common Mistake Alert!

The Change of Sign method only works if the function is continuous. If there is a discontinuity (such as a vertical asymptote), the sign can change without the graph ever crossing the \(x\)-axis! Most functions you meet in P3 are continuous, but watch out for functions such as \(\tan x\) or \(\frac{1}{x-a}\), which have asymptotes.

Quick Review: Change of Sign

We use this to locate a root. It requires checking if \(f(x)\) changes from positive to negative (or vice versa) across an interval \([a, b]\).


Section 2: Fixed Point Iteration (The Looping Method)

2.1 Understanding Iteration

Instead of finding a root exactly, we start with an initial guess, call it \(x_0\). We then plug \(x_0\) into a specific formula, which gives us a better guess, \(x_1\). We repeat this process (we iterate) until our guesses stop changing much, meaning we have converged to the root.

The general form is:
$$x_{n+1} = g(x_n)$$

Analogy: Think of a self-correcting thermostat. It measures the current temperature (\(x_n\)), applies a rule (\(g\)), and adjusts the setting to get the new temperature (\(x_{n+1}\)), hoping to eventually settle on the target temperature (the root).

2.2 Setting up the Iteration Formula

The challenge is converting \(f(x) = 0\) into \(x = g(x)\). There are often several ways to do this, but only some will converge!

Example: \(x^3 - 2x - 5 = 0\).

Attempt A (Rearrangement 1):
$$x^3 = 2x + 5 \quad\Rightarrow\quad x = \sqrt[3]{2x + 5}$$
This gives the iteration: \(x_{n+1} = \sqrt[3]{2x_n + 5}\).

Attempt B (Rearrangement 2):
$$2x = x^3 - 5 \quad\Rightarrow\quad x = \frac{x^3 - 5}{2}$$
This gives the iteration: \(x_{n+1} = \frac{x_n^3 - 5}{2}\).

Key Idea: If the iteration converges, \(x_{n+1}\) will eventually become almost equal to \(x_n\). When \(x_{n+1} = x_n\), we get \(x = g(x)\), which solves the original equation!
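Rearrangement A can be run as a short loop (a sketch; the starting guess \(x_0 = 2\) and the stopping tolerance are illustrative choices, not part of the original example):

```python
# Fixed point iteration x_{n+1} = (2*x_n + 5)^(1/3) for x^3 - 2x - 5 = 0
# (Rearrangement A from the text).
x = 2.0                          # initial guess x_0
for _ in range(20):
    x_new = (2*x + 5) ** (1/3)   # apply g to the current guess
    if abs(x_new - x) < 1e-10:   # the guesses have stopped changing
        break
    x = x_new
print(round(x, 6))  # 2.094551
```

The loop stops once successive guesses agree to within the tolerance, which is exactly the "\(x_{n+1}\) becomes almost equal to \(x_n\)" condition described above.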

2.3 Convergence: Staircase and Cobweb Diagrams

When you plot \(y = x\) and \(y = g(x)\), the root is the point of intersection. Iteration describes a path on this graph:

  • Start at \(x_0\) on the \(x\)-axis.
  • Move vertically to the curve \(y = g(x)\). The height reached is \(g(x_0) = x_1\).
  • Move horizontally to the line \(y = x\). (This is the crucial step: it turns the output \(x_1\) into the input for the next iteration.)
  • Repeat (vertically back to \(y=g(x)\), horizontally back to \(y=x\)).

Did You Know?
The path created is called a staircase diagram if the root is approached monotonically (always increasing or always decreasing), and a cobweb diagram if the sequence oscillates around the root.

The iteration will converge if, at the root, the absolute value of the gradient of \(g(x)\) is less than 1:
$$|g'(x)| < 1$$

If \(|g'(x)| > 1\), the iteration will diverge (spiral or step away from the root).

Memory Aid for Convergence:

Imagine the root as the bottom of a bowl. If the function \(g(x)\) is too steep there (\(|g'(x)| > 1\)), your starting value rolls away. If the function is shallow (\(|g'(x)| < 1\)), your value rolls down into the root.
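The gradient condition explains why Rearrangement A converges while Rearrangement B does not. A quick check, differentiating each \(g(x)\) by hand and evaluating at the root (the root value is quoted to 10 d.p. as an assumed input):

```python
root = 2.0945514815  # known root of x^3 - 2x - 5 = 0

# Rearrangement A: g(x) = (2x + 5)^(1/3)  ->  g'(x) = (2/3)(2x + 5)^(-2/3)
gA_prime = (2/3) * (2*root + 5) ** (-2/3)

# Rearrangement B: g(x) = (x^3 - 5)/2  ->  g'(x) = 3x^2 / 2
gB_prime = 3 * root**2 / 2

print(abs(gA_prime) < 1)  # True  -> Rearrangement A converges
print(abs(gB_prime) < 1)  # False -> Rearrangement B diverges
```

Here \(|g'| \approx 0.15\) for A (shallow: converges quickly) and \(|g'| \approx 6.6\) for B (steep: diverges), which is why only some rearrangements work.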


Section 3: The Newton-Raphson Method

The Fixed Point Iteration method (Section 2) can be slow or might fail if we choose a bad rearrangement. The Newton-Raphson method is often much faster and more powerful.

3.1 The Newton-Raphson Formula

This method uses the tangent to the curve \(y = f(x)\) at the current guess, \(x_n\), to predict a better guess, \(x_{n+1}\).

The formula is:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

Prerequisite Check: Notice that this method requires you to find the derivative, \(f'(x)\)!

3.2 How it Works (The Geometry)

The formula is derived from the tangent line approximation. If you take a guess \(x_n\), the tangent line at that point is calculated. The point where this tangent line crosses the \(x\)-axis is our next guess, \(x_{n+1}\). Because the tangent is a good local approximation of the curve, \(x_{n+1}\) is usually very close to the actual root.
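The derivation from the tangent line takes one line. The tangent at \((x_n, f(x_n))\) has gradient \(f'(x_n)\), so its equation is

$$y - f(x_n) = f'(x_n)(x - x_n)$$

Setting \(y = 0\) (the point where the tangent crosses the \(x\)-axis) and solving for \(x\) gives the next guess:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$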

Step-by-Step Calculation

Suppose we want to find a root for \(f(x) = x^2 - 3\). (We know the answer is \(\sqrt{3} \approx 1.732\).) Let's use \(x_0 = 2\).

  1. Find \(f(x)\) and \(f'(x)\):
    \(f(x) = x^2 - 3\)
    \(f'(x) = 2x\)
  2. Set up the iterative formula:
    $$x_{n+1} = x_n - \frac{x_n^2 - 3}{2x_n}$$
  3. Calculate \(x_1\) (starting with \(x_0 = 2\)):
    $$x_1 = 2 - \frac{(2)^2 - 3}{2(2)} = 2 - \frac{1}{4} = 1.75$$
  4. Calculate \(x_2\):
    $$x_2 = 1.75 - \frac{(1.75)^2 - 3}{2(1.75)} \approx 1.73214$$

    Notice how quickly we converged! \(x_2\) is already accurate to 4 decimal places. This speed is the major advantage of Newton-Raphson.
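The four steps above can be sketched as a short loop (an illustrative implementation; the fixed step count is my own choice):

```python
# Newton-Raphson for f(x) = x^2 - 3, starting from x_0 = 2.
def newton(f, fprime, x, steps=5):
    for _ in range(steps):
        x = x - f(x) / fprime(x)   # x_{n+1} = x_n - f(x_n)/f'(x_n)
    return x

root = newton(lambda x: x**2 - 3, lambda x: 2*x, 2.0)
print(round(root, 6))  # 1.732051  (sqrt(3) to 6 d.p.)
```

After only a handful of steps the iterate agrees with \(\sqrt{3}\) to machine precision, matching the rapid convergence seen in the worked calculation.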

3.3 Failures and Limitations

While fast, Newton-Raphson can fail in specific situations:

  • When \(f'(x_n) \approx 0\): If the tangent is nearly horizontal near the guess \(x_n\), the denominator \(f'(x_n)\) becomes very small. This results in the fraction becoming huge, pushing \(x_{n+1}\) far away from the root. Geometrically, a nearly flat tangent line crosses the x-axis far away.
  • Turning Points: If your initial guess \(x_0\) is near a local maximum or minimum (a turning point), the method may try to converge to a distant root, or fail entirely.

Key Takeaway for Newton-Raphson: It's the fastest method, but you need a reasonably close initial guess and must avoid points where the curve is flat (where \(f'(x)=0\)).


Section 4: Summary of Methods and Accuracy

4.1 Proving the Accuracy of a Root

After using an iterative method, you will be asked to prove that the root lies within a given interval, often specified to a certain number of decimal places (d.p.).

If you are asked to show a root \(\alpha\) is \(1.732\) to 3 d.p.:

You must test the lower bound and the upper bound of the interval that rounds to that value:

  • Lower bound: \(1.7315\)
  • Upper bound: \(1.7325\)

You must use the Change of Sign Principle on the original function, \(f(x)\).

1. Calculate \(f(1.7315)\). Show it is one sign (e.g., negative).
2. Calculate \(f(1.7325)\). Show it is the opposite sign (e.g., positive).

Since there is a change of sign over this interval, the actual root \(\alpha\) must lie between 1.7315 and 1.7325. Everything in this interval rounds to 1.732 (3 d.p.). Q.E.D.
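The two calculations can be verified directly (a sketch reusing \(f(x) = x^2 - 3\) from Section 3, whose root is \(\sqrt{3} = 1.732\) to 3 d.p.):

```python
# Verify that the root of f(x) = x^2 - 3 is 1.732 to 3 d.p.
def f(x):
    return x**2 - 3

lower, upper = 1.7315, 1.7325            # bounds that round to 1.732
print(f(lower) < 0 and f(upper) > 0)     # True: sign change, so the root lies between them
```

Because every number in \((1.7315, 1.7325)\) rounds to 1.732, the sign change completes the proof.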

4.2 Choosing the Right Tool

The numerical method you choose depends on the requirements of the problem.

  • Change of Sign / Bisection: locates and proves that a root exists in an interval. Speed: slow but 100% reliable. Requirement: the function must be continuous.
  • Fixed Point Iteration (\(x_{n+1}=g(x_n)\)): calculates a root iteratively. Speed: moderate, and convergence is not guaranteed. Requirements: an algebraic rearrangement; works only if \(|g'(x)| < 1\) near the root.
  • Newton-Raphson: calculates a root quickly. Speed: very fast convergence (usually). Requirements: the derivative \(f'(x)\); fails if \(f'(x) \approx 0\).

Final Encouragement: Numerical Methods is highly procedural. If you master the formulas and practice setting up the iteration processes correctly, you will find this chapter very rewarding. Good luck!