The partial derivative of a multivariate function is the derivative of the function with respect to one of its variables, with the other variables held constant. For example, the partial derivative of a bivariate function f(x, y) with respect to x is given as

∂f/∂x = lim (h→0) [f(x + h, y) − f(x, y)] / h

The above equation can be computed numerically just like that of a univariate function, with one change: we impose a grid on a plane (or a higher dimensional domain) instead of on an axis.
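As an illustration of the idea (a minimal plain-Java sketch, independent of the library code used later; the class and method names are only illustrative), a central difference that perturbs only the x coordinate approximates ∂f/∂x at a grid point:

```java
import java.util.function.DoubleBinaryOperator;

public class PartialDiff {
    // Central-difference approximation of df/dx at (x, y): only x is perturbed.
    static double dfdx(DoubleBinaryOperator f, double x, double y, double h) {
        return (f.applyAsDouble(x + h, y) - f.applyAsDouble(x - h, y)) / (2 * h);
    }

    public static void main(String[] args) {
        DoubleBinaryOperator f = (x, y) -> x * x + x * y + y * y; // f = x^2 + xy + y^2
        // analytically df/dx = 2x + y, which is 3 at (1, 1)
        System.out.println(dfdx(f, 1, 1, 1e-5));
    }
}
```

The truncation error of the central difference is O(h²), so a moderate h already gives several correct digits.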

A second order partial derivative can be written as

∂²f/∂x² = ∂/∂x (∂f/∂x)

A second order mixed partial derivative is obtained by first differentiating with respect to one variable and then the other. For example, a second order mixed partial derivative with respect to x and then y can be written as

∂²f/∂y∂x = ∂/∂y (∂f/∂x)

Higher order partial and mixed derivatives look like

∂^(i+j)f / ∂xⁱ∂yʲ = ∂ⁱ/∂xⁱ (∂ʲf/∂yʲ)
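The nested derivatives above can likewise be approximated numerically by composing finite differences. A minimal plain-Java sketch (hypothetical helper names, not the library API) computes the mixed derivative ∂²f/∂y∂x with a four-point stencil:

```java
import java.util.function.DoubleBinaryOperator;

public class MixedPartial {
    // d2f/(dy dx) via a second-order central difference on a 2D stencil.
    static double d2fdydx(DoubleBinaryOperator f, double x, double y, double h) {
        return (f.applyAsDouble(x + h, y + h) - f.applyAsDouble(x + h, y - h)
              - f.applyAsDouble(x - h, y + h) + f.applyAsDouble(x - h, y - h)) / (4 * h * h);
    }

    public static void main(String[] args) {
        DoubleBinaryOperator f = (x, y) -> x * x * y + 3 * x * y * y; // f = x^2 y + 3xy^2
        // analytically d2f/(dy dx) = 2x + 6y, which is 14 at (1, 2)
        System.out.println(d2fdydx(f, 1, 2, 1e-4));
    }
}
```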

For example, consider the function z = f(x, y) = x² + xy + y².

The graph of this function defines a surface in Euclidean space. Through every point on this surface there are an infinite number of tangent lines. Partial differentiation is the act of choosing one of these lines and finding its slope. Usually, the lines of most interest are those parallel to the xz-plane, obtained by holding y constant, and those parallel to the yz-plane, obtained by holding x constant.

To find the slope of the line tangent to the function at (1, 1) and parallel to the xz-plane, we treat y as a constant. By differentiating the equation while assuming that y is a constant, we find that the slope of z at the point (x, y) is

∂z/∂x = 2x + y

It evaluates to 3 at the point (1, 1). The following code snippet solves this problem.

```
// f = x^2 + xy + y^2
RealScalarFunction f = new AbstractBivariateRealFunction() {
    @Override
    public double evaluate(double x, double y) {
        return x * x + x * y + y * y;
    }
};
// df/dx = 2x + y
MultivariateFiniteDifference dx
        = new MultivariateFiniteDifference(f, new int[]{1});
System.out.println(String.format("Dxy(1,1) = %f", dx.evaluate(new DenseVector(1, 1))));
```

The output is:

```
Dxy(1,1) = 3.000000
```

## Gradient

The gradient, ∇f, of a multivariate real-valued function, f : ℝⁿ → ℝ, is a vector-valued function, or a vector field, from ℝⁿ to ℝⁿ. It takes a point x in ℝⁿ and outputs a vector in ℝⁿ. The gradient at any point x is a vector whose components are the partial derivatives of f at x. Mathematically, it can be represented as

∇f(x) = (∂f/∂x₁, ∂f/∂x₂, …, ∂f/∂xₙ)

Each component, ∂f/∂xᵢ, is the partial derivative of the function along an axis and is the rate of change in that direction. So, the gradient vector plays the role that the first order derivative, slope, or tangent plays for a univariate function. It can be interpreted as the direction and rate of fastest increase. If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, which is the greatest absolute directional derivative. Furthermore, the gradient is the zero vector at a point if and only if that point is a stationary point, where the derivative vanishes. The gradient thus plays an important and fundamental role in optimization theory.

For example, consider a room where the temperature is given by a scalar field T. The multivariate function, or field, takes a point (x, y, z) and gives a temperature value T(x, y, z). At each point in the room, the gradient of T at that point shows the direction in which the temperature rises most quickly, moving away from (x, y, z). The magnitude of the gradient determines how fast the temperature rises in that direction. More generally, if a multivariate function f is differentiable, then the dot product between ∇f and a unit vector v is the slope or rate of change of the function in the direction of v, called the directional derivative of f. The multivariate version of the Taylor expansion shows that the best linear approximation of a function can be expressed in terms of the gradient:

f(x) ≈ f(x₀) + ∇f(x₀) · (x − x₀)

x and x₀ are points in the space; f maps them to the values f(x) and f(x₀). The dot product in the last term gives a real number.
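A quick plain-Java check of this linear approximation (the function, helper names, and evaluation points below are only illustrative): using f(x, y) = x² + xy + y² and its analytic gradient (2x + y, x + 2y),

```java
public class TaylorCheck {
    static double f(double x, double y) { return x * x + x * y + y * y; }

    // First-order Taylor approximation around (x0, y0) using the analytic gradient.
    static double linearApprox(double x0, double y0, double x, double y) {
        double gx = 2 * x0 + y0, gy = x0 + 2 * y0; // gradient at (x0, y0)
        return f(x0, y0) + gx * (x - x0) + gy * (y - y0);
    }

    public static void main(String[] args) {
        double exact = f(1.01, 1.02);
        double approx = linearApprox(1, 1, 1.01, 1.02);
        // the gap between the two shrinks quadratically with the step size
        System.out.println(exact + " vs " + approx);
    }
}
```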

Consider this function for example:

f(x, y) = x · exp(−(x² + y²))

The following code computes the gradients and the gradient function of this function.

```
// f = x * exp(-(x^2 + y^2))
RealScalarFunction f = new AbstractBivariateRealFunction() {
    @Override
    public double evaluate(double x, double y) {
        return x * Math.exp(-(x * x + y * y));
    }
};
Vector x1 = new DenseVector(0, 0);
Vector g1_0 = new Gradient(f, x1);
System.out.println(String.format("gradient at %s = %s", x1, g1_0));
GradientFunction df = new GradientFunction(f);
Vector g1_1 = df.evaluate(x1);
System.out.println(String.format("gradient at %s = %s", x1, g1_1));
Vector x2 = new DenseVector(-1, 0);
Vector g2 = df.evaluate(x2);
System.out.println(String.format("gradient at %s = %s", x2, g2));
Vector x3 = new DenseVector(1, 0);
Vector g3 = df.evaluate(x3);
System.out.println(String.format("gradient at %s = %s", x3, g3));
```

The output is:

```
gradient at [0.000000, 0.000000] = [1.000000, 0.000000]
gradient at [0.000000, 0.000000] = [1.000000, 0.000000]
gradient at [-1.000000, 0.000000] = [-0.367879, 0.000000]
gradient at [1.000000, 0.000000] = [0.367879, 0.000000]
```

## Jacobian

The Jacobian matrix of a vector-valued function F : ℝⁿ → ℝᵐ is the m×n matrix of all its first order partial derivatives, Jᵢⱼ = ∂Fᵢ/∂xⱼ. It gives the best linear approximation of F near a point x₀, F(x) ≈ F(x₀) + J(x₀)(x − x₀), where the error term approaches zero much faster than the distance between x and x₀ as x approaches x₀. Summarizing, the Jacobian can be regarded as the “first-order derivative” of a vector-valued function of several variables.

The following code computes the Jacobian of the function F(x, y) = (x²y, 5x + sin y).

```
RealVectorFunction F = new RealVectorFunction() {
    @Override
    public Vector evaluate(Vector v) {
        double x = v.get(1);
        double y = v.get(2);
        double f1 = x * x * y;
        double f2 = 5 * x + Math.sin(y);
        return new DenseVector(f1, f2);
    }

    @Override
    public int dimensionOfDomain() {
        return 2;
    }

    @Override
    public int dimensionOfRange() {
        return 2;
    }
};
Vector x0 = new DenseVector(0, 0);
Matrix J00 = new Jacobian(F, x0);
System.out.println(String.format("the Jacobian at %s = %s, the det = %f",
        x0,
        J00,
        MatrixMeasure.det(J00)));
RntoMatrix J = new JacobianFunction(F); // [2xy, x^2], [5, cos y]
Matrix J01 = J.evaluate(x0);
System.out.println(String.format("the Jacobian at %s = %s, the det = %f",
        x0,
        J01,
        MatrixMeasure.det(J01)));
Vector x1 = new DenseVector(1, Math.PI);
Matrix J1 = J.evaluate(x1);
System.out.println(String.format("the Jacobian at %s = %s, the det = %f",
        x1,
        J1,
        MatrixMeasure.det(J1)));
```

The output is:

```
the Jacobian at [0.000000, 0.000000] = 2x2
    [,1] [,2]
[1,] 0.000000, 0.000000
[2,] 5.000000, 1.000000, , the det = 0.000000
the Jacobian at [0.000000, 0.000000] = 2x2
    [,1] [,2]
[1,] 0.000000, 0.000000
[2,] 5.000000, 1.000000, , the det = 0.000000
the Jacobian at [1.000000, 3.141593] = 2x2
    [,1] [,2]
[1,] 6.283185, 1.000000
[2,] 5.000000, -1.000000, , the det = -11.283185
```
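As a sanity check, the same Jacobian and determinant can be reproduced with plain central differences (a standard-Java sketch, not the library API; the class and method names are illustrative). At (1, π) the analytic Jacobian is [[2π, 1], [5, −1]], whose determinant is −2π − 5 ≈ −11.283185, matching the output above:

```java
public class JacobianCheck {
    // F(x, y) = (x^2 y, 5x + sin y), the same example as in the text.
    static double[] F(double x, double y) {
        return new double[] { x * x * y, 5 * x + Math.sin(y) };
    }

    // Numerical Jacobian via central differences: column j holds dF/dx_j.
    static double[][] jacobian(double x, double y, double h) {
        double[] fxp = F(x + h, y), fxm = F(x - h, y);
        double[] fyp = F(x, y + h), fym = F(x, y - h);
        double[][] J = new double[2][2];
        for (int i = 0; i < 2; i++) {
            J[i][0] = (fxp[i] - fxm[i]) / (2 * h);
            J[i][1] = (fyp[i] - fym[i]) / (2 * h);
        }
        return J;
    }

    public static void main(String[] args) {
        double[][] J = jacobian(1, Math.PI, 1e-6);
        double det = J[0][0] * J[1][1] - J[0][1] * J[1][0];
        System.out.println("det at (1, pi) = " + det);
    }
}
```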

## Hessian

Now, suppose f : ℝⁿ → ℝ. If all second order partial derivatives of f exist and are continuous over the domain of the function, then the Hessian matrix of f is an n×n square matrix, usually defined and arranged as

H(f)ᵢⱼ = ∂²f/∂xᵢ∂xⱼ

The following code computes the Hessian of f(x, y) = xy.

```
// f = xy
RealScalarFunction f = new AbstractBivariateRealFunction() {
    @Override
    public double evaluate(double x, double y) {
        return x * y;
    }
};
Vector x1 = new DenseVector(1, 1);
Hessian H1 = new Hessian(f, x1);
System.out.println(String.format("the Hessian at %s = %s, the det = %f",
        x1,
        H1,
        MatrixMeasure.det(H1)));
Vector x2 = new DenseVector(0, 0);
Hessian H2 = new Hessian(f, x2);
System.out.println(String.format("the Hessian at %s = %s, the det = %f",
        x2,
        H2,
        MatrixMeasure.det(H2)));
RntoMatrix H = new HessianFunction(f);
Matrix Hx1 = H.evaluate(x1);
System.out.println(String.format("the Hessian at %s = %s, the det = %f",
        x1,
        Hx1,
        MatrixMeasure.det(Hx1)));
Matrix Hx2 = H.evaluate(x2);
System.out.println(String.format("the Hessian at %s = %s, the det = %f",
        x2,
        Hx2,
        MatrixMeasure.det(Hx2)));
```

The output is:

```
the Hessian at [1.000000, 1.000000] = 2x2
    [,1] [,2]
[1,] 0.000000, 1.000000
[2,] 1.000000, 0.000000, , the det = -0.999999
the Hessian at [0.000000, 0.000000] = 2x2
    [,1] [,2]
[1,] 0.000000, 1.000000
[2,] 1.000000, 0.000000, , the det = -1.000000
the Hessian at [1.000000, 1.000000] = 2x2
    [,1] [,2]
[1,] 0.000000, 1.000000
[2,] 1.000000, 0.000000, , the det = -0.999999
the Hessian at [0.000000, 0.000000] = 2x2
    [,1] [,2]
[1,] 0.000000, 1.000000
[2,] 1.000000, 0.000000, , the det = -1.000000
```
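These Hessian entries can be cross-checked with standard second-order central differences (a plain-Java sketch; the class and helper names are illustrative). For f = xy the analytic Hessian is [[0, 1], [1, 0]] everywhere, with determinant −1, in agreement with the output above up to finite-difference error:

```java
public class HessianCheck {
    static double f(double x, double y) { return x * y; } // same f = xy as above

    // Second-order central differences for the entries of the 2x2 Hessian.
    static double[][] hessian(double x, double y, double h) {
        double fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / (h * h);
        double fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / (h * h);
        double fxy = (f(x + h, y + h) - f(x + h, y - h)
                    - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h);
        return new double[][] { { fxx, fxy }, { fxy, fyy } };
    }

    public static void main(String[] args) {
        double[][] H = hessian(1, 1, 1e-4);
        double det = H[0][0] * H[1][1] - H[0][1] * H[1][0];
        System.out.println("det = " + det);
    }
}
```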