The Newton-Raphson method is perhaps the simplest and most efficient root-finding method, with one compromise: it requires a reasonably accurate initial estimate of the root.

Although this method may seem similar to the secant method, since both approximate the function with straight lines, it derives the root very differently: it uses the tangent at the current estimate instead of a secant, so it relies on a single estimate and the function's derivative rather than on an interval of two points.
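To make the contrast concrete, here is a minimal, hand-rolled sketch of one update step of each method (the function names are illustrative, not from any library):

```kotlin
// One Newton-Raphson step: follow the tangent at x down to the x-axis.
fun newtonStep(f: (Double) -> Double, df: (Double) -> Double, x: Double): Double =
    x - f(x) / df(x)

// One secant step: follow the line through (x0, f(x0)) and (x1, f(x1)) instead,
// so no derivative is needed, only the two previous estimates.
fun secantStep(f: (Double) -> Double, x0: Double, x1: Double): Double =
    x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))

fun main() {
    val f = { x: Double -> x * x - 2.0 }   // root at sqrt(2)
    val df = { x: Double -> 2.0 * x }
    println(newtonStep(f, df, 1.5))        // one tangent-based update
    println(secantStep(f, 1.0, 1.5))       // one secant-based update
}
```

Both steps cost one new function evaluation, but the Newton step additionally needs the derivative at the current point.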

**The math**

For an equation 𝑓(𝑥) = 0, we choose 𝑥ᵢ as an initial approximation of the root, or take the estimate from the last iteration. We then draw the tangent line 𝐿 to the curve at the point (𝑥ᵢ, 𝑓(𝑥ᵢ)):

𝐿: 𝑦 = 𝑓(𝑥ᵢ) + 𝑓′(𝑥ᵢ)(𝑥 − 𝑥ᵢ)

Hence, setting 𝑦 = 0 and solving for 𝑥, the next iteration can be found by:

𝑥ᵢ₊₁ = 𝑥ᵢ − 𝑓(𝑥ᵢ)/𝑓′(𝑥ᵢ)

**Example**

Above shows the graph of the function 𝑓(𝑥) = sin(𝑥) + 𝑥/10.

The derivative of the function can be calculated as such:

𝑓′(𝑥) = cos(𝑥) + 1/10

Thus, the tangent at the point (𝑥ᵢ, 𝑓(𝑥ᵢ)) is:

𝑦 = (cos(𝑥ᵢ) + 1/10)(𝑥 − 𝑥ᵢ) + sin(𝑥ᵢ) + 𝑥ᵢ/10

```kotlin
import kotlin.math.sin
import kotlin.math.cos

fun testFunction(x: Double): Double {
    return sin(x) + x / 10
}

fun testFunctionDX(x: Double): Double {
    return cos(x) + 0.1 // note: writing 1/10 would be integer division, which evaluates to 0
}

fun newtonRaphson(
    myFunction: (Double) -> Double,
    myFunctionDX: (Double) -> Double,
    initialGuess: Double,
    maxIterations: Int
): Double {
    var x = initialGuess
    repeat(maxIterations) {
        x -= myFunction(x) / myFunctionDX(x)
        println(x)
    }
    return x
}

newtonRaphson(::testFunction, ::testFunctionDX, 4.0, 10)
```

Output:

3.454132980236982
3.4940007632328425
3.4985196261168254
3.499005684799097
3.499057613598004
3.49906315739022
3.499063749185194
3.499063812358257
3.4990638191018633
3.4990638198217305

Evidently, this method converges very quickly as well: the iterates agree to four decimal places after just four iterations.

**Implementation**

In NM Dev, the class NewtonRoot implements the Newton-Raphson root-finding algorithm, together with overloaded solve functions.

Note that the second solve function allows you to supply the derivative function 𝑓′ of 𝑓. Otherwise, a derivative computed by finite differencing is generated automatically by the code (though this is slower).
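As a rough illustration of what such a finite-differencing fallback looks like (a hand-rolled sketch, not NM Dev's actual internals):

```kotlin
import kotlin.math.cos
import kotlin.math.sin

// Central-difference approximation f'(x) ≈ (f(x + h) - f(x - h)) / (2h).
// Each call costs two extra function evaluations, which is why supplying
// the analytic derivative is faster.
fun numericalDerivative(f: (Double) -> Double, h: Double = 1e-6): (Double) -> Double =
    { x -> (f(x + h) - f(x - h)) / (2 * h) }

fun main() {
    val f = { x: Double -> sin(x) + x / 10 }
    val df = numericalDerivative(f)
    println(df(4.0))            // very close to the analytic value below
    println(cos(4.0) + 0.1)     // analytic derivative at the same point
}
```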

```java
UnivariateRealFunction f = new AbstractUnivariateRealFunction() {
    @Override
    public double evaluate(double x) {
        return x * x + 4 * x - 5; // x^2 + 4x - 5 = 0
    }
};
UnivariateRealFunction df = new AbstractUnivariateRealFunction() {
    @Override
    public double evaluate(double x) {
        return 2 * x + 4; // f'(x) = 2x + 4
    }
};
NewtonRoot solver = new NewtonRoot(1e-8, 5);
double root = solver.solve(f, df, 5.); // initial guess x0 = 5
double fx = f.evaluate(root);
System.out.println(String.format("f(%f) = %f", root, fx));
```

Output:

f(1.000000) = 0.000000

We can see from the iteration results below that the algorithm already reaches a very accurate result by the 4th iteration, demonstrating its efficiency.

| Iteration i | xᵢ | Relative error |
| --- | --- | --- |
| 0 | 5 | |
| 1 | 2.142857143 | 1.33E+00 |
| 2 | 1.157635468 | 8.51E-01 |
| 3 | 1.003934739 | 1.53E-01 |
| 4 | 1.000002577 | 3.93E-03 |
| 5 | 1.000000000001107 | 2.58E-06 |
| 6 | 0.9999999999999999 | 1.11E-12 |

**Implementation**

Similar to the Newton-Raphson method, Halley's method can optionally be supplied with the first and second derivatives of the function.
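The update Halley's method applies uses both derivatives, 𝑥ₙ₊₁ = 𝑥ₙ − 2𝑓𝑓′/(2𝑓′² − 𝑓𝑓″), which gives cubic rather than quadratic convergence. A minimal hand-rolled sketch of one such step (illustrative, not NM Dev code):

```kotlin
// One Halley step: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')
fun halleyStep(
    f: (Double) -> Double,
    df: (Double) -> Double,
    d2f: (Double) -> Double,
    x: Double
): Double {
    val fx = f(x)
    val dfx = df(x)
    return x - 2 * fx * dfx / (2 * dfx * dfx - fx * d2f(x))
}

fun main() {
    val f = { x: Double -> x * x + 4 * x - 5 }  // same test function as above
    val df = { x: Double -> 2 * x + 4 }
    val d2f = { _: Double -> 2.0 }
    var x = 5.0
    repeat(3) {
        x = halleyStep(f, df, d2f, x)
        println(x) // rapidly approaches the root x = 1
    }
}
```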

```java
UnivariateRealFunction f = new AbstractUnivariateRealFunction() {
    @Override
    public double evaluate(double x) {
        return x * x + 4 * x - 5; // x^2 + 4x - 5 = 0
    }
};
UnivariateRealFunction df = new AbstractUnivariateRealFunction() {
    @Override
    public double evaluate(double x) {
        return 2 * x + 4; // f'(x) = 2x + 4
    }
};
UnivariateRealFunction d2f = new AbstractUnivariateRealFunction() {
    @Override
    public double evaluate(double x) {
        return 2; // f''(x) = 2
    }
};
HalleyRoot solver = new HalleyRoot(1e-8, 3);
double root = solver.solve(f, df, d2f, 5.);
double fx = f.evaluate(root);
System.out.println(String.format("f(%f) = %f", root, fx));
```

Output:

The method converges in just 3 iterations.