What if, instead of blindly halving the search space with each iteration, we use the available information specific to the function to help speed up the search?

Linear interpolation typically converges faster than the bisection method. By approximating the curve with a secant line, it uses the overall gradient across each interval to choose the next point.

Explanation

Like the bisection method, linear interpolation iterates on an interval $[l, u]$. After confirming that a root exists within the interval (i.e. $f(l)$ and $f(u)$ have opposite signs), the algorithm is as follows:

1. Select values for the lower and upper bounds; here $l = 2.5$, $u = 4.5$.

2. Calculate the x-intercept of the secant, $p=u-f(u)\cdot\frac{l-u}{f(l)-f(u)}$.

3. If $f(p) = 0$, the root has been found: $x = p$. Stop.

4. Else $f(p) \neq 0$:
a. If $f(p) \cdot f(l) > 0$, set $l = p$ (keep $u$ unchanged).
b. If $f(p) \cdot f(u) > 0$, set $u = p$ (keep $l$ unchanged).

5. Repeat steps 2 to 4 until a pre-specified tolerance is satisfied or the maximum number of iterations is reached.
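Steps 2 to 4 can be sketched as a single Kotlin helper; the function name `interpolationStep` and the inline test function here are illustrative, not part of the implementation later in this topic:

```kotlin
import kotlin.math.sin

// One linear-interpolation step: returns the secant x-intercept p
// together with the updated bracket (lower, upper).
fun interpolationStep(f: (Double) -> Double, l: Double, u: Double): Triple<Double, Double, Double> {
    val p = u - f(u) * (l - u) / (f(l) - f(u))  // step 2: x-intercept of the secant
    return if (f(p) * f(l) > 0) {
        Triple(p, p, u)  // step 4a: f(p) and f(l) share a sign, so the root lies in [p, u]
    } else {
        Triple(p, l, p)  // step 4b: the root lies in [l, p]
    }
}

fun main() {
    val f = { x: Double -> sin(x) + 3 / (4 * x) }
    val (p, l, u) = interpolationStep(f, 2.5, 4.5)
    println("p = $p, new interval = [$l, $u]")
}
```

Running one step with the example bounds $l = 2.5$, $u = 4.5$ produces the first iterate $p \approx 3.5513$ and shrinks the bracket to $[2.5, p]$, matching the worked example below.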

Worked Example

Above is the same graph as used in the previous topic, The Bisection Method.

Similarly, we use the same lower bound of $l=2.5$ and upper bound of $u=4.5$.

In Figure 4.3, a secant connecting the points $(l, f(l))$ and $(u, f(u))$ is drawn. To derive the formula for its x-intercept $p$, we first write the equation of the secant in its inverse form, $x = y/m + p$, where $m$ is the gradient and $p$ is the x-intercept.

In general, a straight line can be described by the following two formulae:

1. $m = \frac{y_1-y_2}{x_1-x_2}$ (gradient)
2. $x= \frac{y}{m} + p$ (inverse form)

By substituting $f(x)$ for $y$, and evaluating the secant at the known point $(u, f(u))$ with $(x_1, y_1) = (l, f(l))$ and $(x_2, y_2) = (u, f(u))$, we get the following derivation for $p$:

$$\begin{aligned}p&=x-\frac{y}{m}\\&=x-y\cdot\frac{1}{m}\\&=x-y\cdot\frac{x_1-x_2}{y_1-y_2}\\&=x-f(x)\cdot\frac{x_1-x_2}{f(x_1)-f(x_2)}\\&=u-f(u)\cdot\frac{l-u}{f(l)-f(u)}\end{aligned}$$
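As a quick check, substituting the example bounds into the final line (with $f(2.5) \approx 0.8985$ and $f(4.5) \approx -0.8109$, rounded to four decimal places) reproduces the first iterate:

$$p = 4.5 - (-0.8109)\cdot\frac{2.5-4.5}{0.8985-(-0.8109)} = 4.5 - \frac{1.6218}{1.7094} \approx 3.5513$$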

From Figure 4.3 it is evident that $f(p)<0$; since $f(u)<0$ as well, $f(p)\cdot f(u)>0$, hence $u=p=3.551$ for the next iteration.

Iteration   l          u          p
1           2.5        4.5        3.55125307
2           2.5        3.551253   3.37006416
3           2.5        3.370064   3.36626412

Compared to the Bisection Method (covered in the previous topic), the linear interpolation part of Brent's Method converges much faster: only 3 iterations are required to reach an accuracy of 2 decimal places in this case.
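As a rough check on this comparison, one can count how many iterations each bracketing update needs to land within two decimal places of the root on the same test function. The helper below is a sketch; the function name, the reference root (taken from the iteration table above), and the tolerance are illustrative assumptions:

```kotlin
import kotlin.math.abs
import kotlin.math.sin

// Count bracketing iterations until the iterate is within tol of a known root.
// secant = true uses the linear-interpolation update; false uses plain bisection.
fun iterationsToConverge(f: (Double) -> Double, lo0: Double, hi0: Double,
                         root: Double, tol: Double, secant: Boolean): Int {
    var lo = lo0
    var hi = hi0
    var n = 0
    while (n < 100) {
        val p = if (secant) hi - f(hi) * (lo - hi) / (f(lo) - f(hi)) else (lo + hi) / 2
        n += 1
        if (abs(p - root) < tol) return n
        if (f(p) * f(lo) > 0) lo = p else hi = p  // keep the sub-interval that brackets the root
    }
    return n
}

fun main() {
    val f = { x: Double -> sin(x) + 3 / (4 * x) }
    val root = 3.36628  // approximate root, consistent with the table above
    println("bisection:     ${iterationsToConverge(f, 2.5, 4.5, root, 0.005, false)} iterations")
    println("interpolation: ${iterationsToConverge(f, 2.5, 4.5, root, 0.005, true)} iterations")
}
```

On this interval the interpolation update reaches the tolerance in a small handful of iterations while bisection needs several more, consistent with the comparison above.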

The linear interpolation part of Brent's Method is implemented in Kotlin below:

import kotlin.math.abs
import kotlin.math.sin

fun testFunction(x: Double): Double {
    return sin(x) + 3 / (4 * x)
}

fun linearInterpolation(myFunction: (x: Double) -> Double, lowerBound: Double, upperBound: Double, maxMargin: Double, maxIterations: Int): Double {
    var lower: Double = lowerBound
    var upper: Double = upperBound
    if (myFunction(lower) * myFunction(upper) > 0.0) {
        return 0.0  // lower and upper bounds have the same sign: no root is bracketed within the interval
    }
    // x-intercept of the secant through (lower, f(lower)) and (upper, f(upper))
    var mid: Double = upper - myFunction(upper) * (lower - upper) / (myFunction(lower) - myFunction(upper))
    var iterations: Int = 0
    // stop once |f(mid)| falls below the tolerance or the iteration budget is spent
    while (abs(myFunction(mid)) >= maxMargin && iterations < maxIterations) {
        if (myFunction(mid) * myFunction(upper) > 0) {
            upper = mid  // f(mid) and f(upper) share a sign: the root lies in [lower, mid]
        } else {
            lower = mid  // otherwise the root lies in [mid, upper]
        }
        mid = upper - myFunction(upper) * (lower - upper) / (myFunction(lower) - myFunction(upper))
        iterations += 1
    }
    return mid
}

println(linearInterpolation(::testFunction, 2.5, 4.5, 0.00001, 10))



Output: