An Introduction to Root-Finding

A root of an equation is a value of the variable that satisfies it. Root-finding is used wherever data is involved – engineering, finance, statistics, biomedical science – any real-world scenario where an approximation of a solution is needed.

Now that you understand how polynomials are reduced, we can move on to the more intricate details of how the roots are found in the first place.

If you are familiar with the basics of data structures and algorithms, you have probably come across a search algorithm known as binary search. The bisection method follows the same core principles:

  1. Having a pre-defined interval consisting of an upper and lower bound.
  2. Splitting the interval into two halves.
  3. Comparing the midpoint against the target value (or root in this case) to decide which half contains it.

One notable difference is that the bisection method works with continuous numerical values, whereas binary search operates on discrete indexed items, so an exact match cannot in general be expected. The algorithm therefore has to be terminated either once a certain error threshold has been reached or after a set number of iterations.
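For comparison, here is a minimal binary-search sketch in Kotlin (the array and target are purely illustrative). The same halve-and-compare structure reappears in the bisection method, except that bisection compares signs of function values and stops on a tolerance rather than on an exact match.

import kotlin.collections.indices

// Minimal binary search over a sorted IntArray, for comparison with bisection.
// Returns the index of target, or -1 if it is not present.
fun binarySearch(items: IntArray, target: Int): Int {
    var low = 0
    var high = items.size - 1
    while (low <= high) {              // discrete interval: stops when it becomes empty
        val mid = (low + high) / 2     // midpoint of the current interval
        when {
            items[mid] == target -> return mid   // an exact match is possible with discrete items
            items[mid] < target  -> low = mid + 1
            else                 -> high = mid - 1
        }
    }
    return -1
}

fun main() {
    println(binarySearch(intArrayOf(1, 3, 5, 7, 9, 11), 7))  // prints 3
}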

Explanation

Figure 3.1: Graph of y=sin(x)+3/(4x)

Above is the graph of f(x)=sin(x)+3/(4x) . Given that there is no known analytical method of finding its roots, a numerical method will have to be used instead.

Part 1 – Determining the existence of a root

There are three conditions to determine the existence of a root numerically:

  1. The function has to be continuous over the domain in question. Analytically, we know that f is a continuous function for the domain 2.5 \leq x \leq 4.5 , so this holds.
  2. The function itself has to be known so that its values at the upper and lower bounds can be calculated. This condition is also fulfilled, as we know that f(x)=sin(x)+3/(4x) .
  3. The function values at the upper and lower bounds need to be of opposite signs. As f(2.5)=0.898 and f(4.5)=-0.811 , the final condition is satisfied.

Therefore, we can conclude that a root of the function f exists in the domain D_f=[2.5,4.5] .
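The sign-change test in condition 3 translates directly into code. Below is a minimal Kotlin sketch; the helper name hasBracketedRoot is illustrative, and it only checks that f(l) and f(u) have opposite signs, while continuity (condition 1) still has to be argued analytically.

import kotlin.math.sin

// f(x) = sin(x) + 3/(4x)
fun f(x: Double): Double = sin(x) + 3.0 / (4.0 * x)

// Sign-change test: true if f(lower) and f(upper) have opposite signs,
// i.e. the interval [lower, upper] brackets at least one root (given continuity).
fun hasBracketedRoot(f: (Double) -> Double, lower: Double, upper: Double): Boolean =
    f(lower) * f(upper) < 0.0

fun main() {
    println(hasBracketedRoot(::f, 2.5, 4.5))  // prints true: f(2.5) > 0 and f(4.5) < 0
}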

Part 2 – Calculating the root

This is the iterative part of the process. Given the lower bound x=2.5 and upper bound x=4.5 , the bisection method can be outlined as:

  1. Select values for the upper and lower bounds, l = 2.5, u = 4.5 .

  2. Calculate the midpoint p=(l+u)/2 .

  3. If f(p) = 0 , then the root is found, x=p . Stop.

  4. Else f(p) \neq 0
    a. If f(p) \cdot f(l) > 0 , l = p , u = u
    b. If f(p) \cdot f(u) > 0 , l = l , u = p

  5. Repeat steps 2 to 4 until a pre-specified tolerance is satisfied or the maximum number of iterations is reached.

Figure 3.2

Based on the existence conditions verified in Part 1, there is a root at some point between x=2.5 and x=4.5 . We can label the lower bound l=2.5 and upper bound u=4.5 as shown in Figure 3.2 above.

Figure 3.3: Finding the midpoint

Next, we calculate the midpoint of l and u , p=(l+u)/2=3.5 .

 

Figure 3.4

As f(p) \cdot f(u) > 0 , the upper bound is updated to u=p=3.5 .

Figure 3.5

The process then repeats with p=(l+u)/2=3 , generating the following set of values with each iteration:

Iteration | l        | u        | p
1         | 2.5      | 4.5      | 3.5
2         | 2.5      | 3.5      | 3
3         | 3        | 3.5      | 3.25
4         | 3.25     | 3.5      | 3.375
5         | 3.25     | 3.375    | 3.3125
6         | 3.3125   | 3.375    | 3.34375
7         | 3.34375  | 3.375    | 3.359375
8         | 3.359375 | 3.375    | 3.367188
9         | 3.359375 | 3.367188 | 3.363281
10        | 3.363281 | 3.367188 | 3.365234
11        | 3.365234 | 3.367188 | 3.366211
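As a cross-check, the table above can be reproduced with a short Kotlin loop. This is only a sketch: the function name f and the fixed count of 11 iterations are chosen here for illustration, and values are printed to six decimal places to match the table.

import kotlin.math.sin

fun f(x: Double): Double = sin(x) + 3.0 / (4.0 * x)

fun main() {
    var l = 2.5
    var u = 4.5
    for (i in 1..11) {
        val p = (l + u) / 2                      // midpoint of the current bracket
        println("$i\t%.6f\t%.6f\t%.6f".format(l, u, p))
        if (f(p) * f(u) > 0) u = p else l = p    // keep the half that still brackets the root
    }
}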

Efficiency

Notice that it took 11 iterations for the method to get an estimate that is correct to 3 decimal places, 3.366.

Although the Bisection Method is very simple and robust, it is not very efficient: the width of the bracketing interval (and hence the error bound) is only halved with each iteration. As such, the method converges linearly, which makes it slower than other methods (like the Newton-Raphson Method).
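This linear convergence is easy to quantify: because the bracket width is halved on each iteration, reducing an initial width of u-l to a tolerance \varepsilon takes roughly n = \lceil \log_2((u-l)/\varepsilon) \rceil iterations. For the example above, u-l=2 and \varepsilon=0.001 , giving n = \lceil \log_2(2000) \rceil = 11 , which agrees with the table.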

Below is the bisection method implemented from first principles in Kotlin:

				
import kotlin.math.sin

// f(x) = sin(x) + 3/(4x), the test function whose root we want to find
fun testFunction(x: Double): Double {
    return sin(x) + 3.0 / (4.0 * x)
}

fun bisectionMethod(myFunction: (x: Double) -> Double, lowerBound: Double, upperBound: Double, maxMargin: Double, maxIterations: Int): Double {
    var lower: Double = lowerBound
    var upper: Double = upperBound
    if (myFunction(lower) * myFunction(upper) > 0.0) {
        return 0.0  // lower and upper bounds have the same sign: no bracketed root
    }
    var mid: Double = (lower + upper) / 2
    var iterations = 0
    while (upper - lower >= maxMargin && iterations < maxIterations) {
        mid = (lower + upper) / 2
        if (myFunction(mid) * myFunction(upper) > 0.0) {
            // f(mid) and f(upper) have the same sign: the root lies in [lower, mid]
            upper = mid
        } else {
            // otherwise the root lies in [mid, upper]
            lower = mid
        }
        iterations += 1
    }
    return mid
}

fun main() {
    println(bisectionMethod(::testFunction, 2.5, 4.5, 0.001, 99))
}
				
			

Output: 3.3662109375

In S2, the bisection method is used as follows (note that this example solves a different equation, x \cdot sin(x) - 3 = 0 , over the interval [12, 14] ):

				
UnivariateRealFunction f = new AbstractUnivariateRealFunction() {
    @Override
    public double evaluate(double x) {
        return x * Math.sin(x) - 3; // x * sin(x) - 3 = 0
    }
};
BisectionRoot solver = new BisectionRoot(1e-8, 30);
double root = solver.solve(f, 12., 14.);
double fx = f.evaluate(root);
System.out.println(String.format("f(%f) = %f", root, fx));