Eigen Pairs

In this section, let us develop our understanding of a linear transformation by breaking it down into simpler linear transformations.

- Let T be a linear transformation from V to V.

Suppose:

- B = {v₁, v₂, …, vₙ} is a basis of V
- V₁ is the span of some of the vectors in B
- V₂ is the span of the remaining vectors in B.

Then:

- Any vector in V can be written as the sum of a vector in V₁ and a vector in V₂.
- Since T is linear, we can see how T behaves on all of V if we know how it behaves on V₁ and on V₂.
- This decomposition is particularly helpful if V₁ and V₂ are chosen so that T behaves in a simple way on V₁ and on V₂.

Given such a decomposition of V into the vector spaces V₁ and V₂, the same idea can be applied to split V₁ and V₂ into lower-dimensional vector spaces, repeating until no more splits are possible. The most optimistic outcome of this procedure would be that we get all the way down to one-dimensional subspaces and that T acts on each of these subspaces by simply scaling each vector in that subspace by some factor. In other words, we would like to find vectors v for which T(v) is a scalar multiple of v. From this, the following definition is obtained.

**Mathematical Definition:**

- An **eigenvector** of an n × n matrix A is a non-zero vector v with the property Av = λv for some λ ∈ ℝ.
- λ is the **eigenvalue** of A associated with v.
- The eigenvector v together with its eigenvalue λ is called an **eigenpair**.
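The defining property Av = λv is easy to sanity-check numerically. Below is a small NumPy sketch (NumPy is an assumption here, separate from the S2 examples that follow):

```python
import numpy as np

# A simple matrix with known eigenpairs
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

v = np.array([1.0, 0.0])   # candidate eigenvector
lam = 2.0                  # candidate eigenvalue

# (lam, v) is an eigenpair exactly when A v equals lam * v
print(np.allclose(A @ v, lam * v))  # True
```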

More generally, an eigenvector is a vector whose direction remains unchanged when a linear transformation is applied to it.

In the figure above, the red vectors are eigenvectors because their directions do not change under the linear transformation (a scaling in this case), while the magenta vector is not an eigenvector, as its direction changes after the transformation.

Q: Find the eigenpairs of a matrix A

```
%use s2
// Create a matrix
val A = DenseMatrix(arrayOf(
    doubleArrayOf(1.5, 0.0, 1.0),
    doubleArrayOf(-0.5, 0.5, -0.5),
    doubleArrayOf(-0.5, 0.0, 0.0)))
// eigen decomposition
val eigen = Eigen(A, Eigen.Method.QR, 0.0)
println("eigenpairs:")
for (i in 0 until eigen.size()) {
    // get the i-th eigenvalue
    val eigenvalue = eigen.getEigenvalue(i)
    // get the property associated with the eigenvalue
    val property = eigen.getProperty(eigenvalue)
    // get the eigenvector from the property
    val eigenVector = property.eigenVector()
    println("eigenvalue: $eigenvalue; eigenvector: $eigenVector")
}
```

```
eigenpairs:
eigenvalue: 1.0; eigenvector: [-2.000000, 1.000000, 1.000000]
eigenvalue: 0.5; eigenvector: [-0.000000, 1.000000, 0.000000]
eigenvalue: 0.49999999999999994; eigenvector: [-0.000000, 1.000000, 0.000000]
```
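As a cross-check of the S2 output above, the same eigenpairs can be computed with NumPy (an assumption outside S2):

```python
import numpy as np

# same matrix as in the S2 example
A = np.array([[1.5, 0.0, 1.0],
              [-0.5, 0.5, -0.5],
              [-0.5, 0.0, 0.0]])

w, V = np.linalg.eig(A)     # eigenvalues w, eigenvectors as columns of V
print(np.sort(w.real))      # approximately [0.5, 0.5, 1.0], matching S2
# each column satisfies the eigenpair property A v = lambda v
for i in range(3):
    print(np.allclose(A @ V[:, i], w[i] * V[:, i]))  # True each time
```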

**Note:** Every non-zero vector is an eigenvector of the identity matrix with eigenvalue 1.

Eigen Decomposition

Let’s say A is an n × n matrix having n linearly independent eigenvectors. Then the one-dimensional subspaces spanned by each of these eigenvectors can be thought of as (not necessarily orthogonal) axes along which A acts by scaling.

Speaking in matrix terms, we can define Q to be the matrix with the eigenvectors of A as columns. Then, from the definition of an eigenpair, we have:

AQ = QD

where D is a diagonal matrix whose diagonal entries are the eigenvalues (in the order corresponding to the columns of Q) and whose other entries are zero.

Thus, we can conclude that A = QDQ⁻¹, where D is a diagonal matrix, and A is **diagonalizable**.
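This factorization can be verified numerically for a small diagonalizable matrix; here is a NumPy sketch (NumPy assumed, not S2):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # eigenvalues 5 and 2, so A is diagonalizable

w, Q = np.linalg.eig(A)      # columns of Q are eigenvectors of A
D = np.diag(w)               # diagonal matrix of eigenvalues
# A = Q D Q^{-1}
print(np.allclose(A, Q @ D @ np.linalg.inv(Q)))  # True
```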

S2 makes it easy to find the diagonal matrix D for a given diagonalizable matrix A.

```
%use s2
// Create a symmetric matrix
val A = DenseMatrix(SymmetricMatrix(
    arrayOf(doubleArrayOf(1.0),
            doubleArrayOf(2.0, 3.0),
            doubleArrayOf(4.0, 5.0, 6.0),
            doubleArrayOf(7.0, 8.0, 9.0, 10.0))))
// eigen decomposition
val precision = 1e-4
val decomp = EigenDecomposition(A, precision)
val D = decomp.D() // the diagonal matrix containing the eigenvalues
println("eigen decomposition result: $D")
// verification
val Q = decomp.Q()
val Qt = decomp.Qt()
val B = Q.multiply(D).multiply(Qt) // Q D Q' = A = B
println("$A =\n$B")
```

```
eigen decomposition result: 4x4
     [,1]       [,2]      [,3]      [,4]
[1,] 24.638176, 0.000000, 0.000000, 0.000000,
[2,] 0.000000, -0.038687, 0.000000, 0.000000,
[3,] 0.000000, 0.000000, -0.513527, 0.000000,
[4,] 0.000000, 0.000000, 0.000000, -4.085962,
4x4
     [,1]      [,2]      [,3]      [,4]
[1,] 1.000000, 2.000000, 4.000000, 7.000000,
[2,] 2.000000, 3.000000, 5.000000, 8.000000,
[3,] 4.000000, 5.000000, 6.000000, 9.000000,
[4,] 7.000000, 8.000000, 9.000000, 10.000000,
=
4x4
     [,1]      [,2]      [,3]      [,4]
[1,] 1.000000, 2.000000, 4.000000, 7.000000,
[2,] 2.000000, 3.000000, 5.000000, 8.000000,
[3,] 4.000000, 5.000000, 6.000000, 9.000000,
[4,] 7.000000, 8.000000, 9.000000, 10.000000,
```

Positive Definite Matrices

A symmetric matrix A is called a positive definite matrix if its eigenvalues are all positive. A symmetric matrix is a positive semidefinite matrix if its eigenvalues are all non-negative.

Equivalently, a symmetric matrix A is positive semidefinite if xᵀAx ≥ 0 for all vectors x.
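The two characterizations agree, as a quick NumPy check illustrates (NumPy is assumed; not part of S2):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # symmetric

print(np.linalg.eigvalsh(A))    # [1. 3.] -> all positive: positive definite
# equivalently x^T A x > 0 for non-zero x, e.g.:
x = np.array([1.0, -5.0])
print(x @ A @ x > 0)            # True
```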

Analogously, a symmetric matrix is called a negative definite matrix if its eigenvalues are all negative. A symmetric matrix is a negative semidefinite matrix if its eigenvalues are all non-positive.

In S2, we have a constructor **“PositiveDefiniteMatrixByPositiveDiagonal”** that converts a matrix into a symmetric, positive definite matrix, if it is not already so, by forcing the non-positive diagonal entries in the eigen decomposition to a small positive number.

```
%use s2
// define a matrix
val A = DenseMatrix(arrayOf(
    doubleArrayOf(1.0, 2.0, 3.0),
    doubleArrayOf(4.0, 5.0, 6.0),
    doubleArrayOf(-1.0, -2.0, -3.0)))
// A_PDM = positive definite matrix derived from A
val epsilon = 1e-15
val A_PDM = PositiveDefiniteMatrixByPositiveDiagonal(A, epsilon, 1.0)
println("POSITIVE DEFINITE MATRIX OF\n\n $A \n\n IS \n\n $A_PDM\n")
```

```
POSITIVE DEFINITE MATRIX OF

 3x3
     [,1]      [,2]      [,3]
[1,] 1.000000, 2.000000, 3.000000,
[2,] 4.000000, 5.000000, 6.000000,
[3,] -1.000000, -2.000000, -3.000000,

 IS

 3x3
     [,1]      [,2]      [,3]
[1,] 2.286186, 2.413588, 0.605269,
[2,] 2.413588, 5.529211, 1.135817,
[3,] 0.605269, 1.135817, 1.284835,
```
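S2's exact algorithm is internal to the library, but the general recipe described above — symmetrize, then floor the eigenvalues — can be sketched in NumPy (the function name and details here are illustrative assumptions, not S2's implementation):

```python
import numpy as np

def make_positive_definite(A, eps=1e-8):
    """Illustrative recipe: symmetrize A, then floor its eigenvalues at eps.
    Not necessarily S2's exact algorithm."""
    S = (A + A.T) / 2.0              # symmetric part of A
    w, Q = np.linalg.eigh(S)         # S = Q diag(w) Q^T
    w = np.maximum(w, eps)           # force all eigenvalues positive
    return Q @ np.diag(w) @ Q.T

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [-1.0, -2.0, -3.0]])
P = make_positive_definite(A)
print(np.all(np.linalg.eigvalsh(P) > 0))  # True: P is positive definite
```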

We also have another constructor in S2 named **“PositiveSemiDefiniteMatrixNonNegativeDiagonal”**, which converts a matrix into a symmetric, positive semidefinite matrix, if it is not already so, by forcing the negative diagonal entries in the eigen decomposition to 0.

```
%use s2
// define a matrix
val A = DenseMatrix(arrayOf(
    doubleArrayOf(1.0, 2.0, 3.0),
    doubleArrayOf(4.0, 5.0, 6.0),
    doubleArrayOf(-1.0, -2.0, -3.0)))
// A_PSDM = positive semidefinite matrix derived from A
val epsilon = 100.0
val A_PSDM = PositiveSemiDefiniteMatrixNonNegativeDiagonal(A, epsilon)
println("POSITIVE SEMI-DEFINITE MATRIX OF\n\n $A \n\n IS \n\n $A_PSDM\n")
```

```
POSITIVE SEMI-DEFINITE MATRIX OF

 3x3
     [,1]      [,2]      [,3]
[1,] 1.000000, 2.000000, 3.000000,
[2,] 4.000000, 5.000000, 6.000000,
[3,] -1.000000, -2.000000, -3.000000,

 IS

 3x3
     [,1]      [,2]      [,3]
[1,] 7.100233, -0.000000, -0.000000,
[2,] -0.000000, 7.100233, -0.000000,
[3,] -0.000000, -0.000000, 7.100233,
```

**Spectral Theorem:**

If A is an n × n symmetric matrix, then A is orthogonally diagonalizable, which means A has n eigenvectors which are pairwise orthogonal.

Conversely, every orthogonally diagonalizable matrix is symmetric.
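A NumPy illustration of the spectral theorem (NumPy assumed, not S2):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])     # symmetric

w, Q = np.linalg.eigh(A)            # eigh is specialized for symmetric matrices
# the eigenvectors are pairwise orthogonal (Q is an orthogonal matrix) ...
print(np.allclose(Q.T @ Q, np.eye(3)))       # True
# ... and they orthogonally diagonalize A
print(np.allclose(A, Q @ np.diag(w) @ Q.T))  # True
```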

Polar Decomposition

Before proceeding with the definition of the polar decomposition, let us make ourselves familiar with the “Gram matrix”:

**Gram Matrix:** If A is an m × n matrix, then AᵀA is its Gram matrix. The Gram matrix of A is always positive semidefinite.

**Q: Determine the Gram matrix of A.**

```
%use s2
// define a matrix
val A = DenseMatrix(arrayOf(
    doubleArrayOf(1.0, 2.0, 3.0),
    doubleArrayOf(4.0, 5.0, 6.0),
    doubleArrayOf(-1.0, -2.0, -3.0)))
// A_T = transpose of A
val A_T = A.t()
// A_T * A = Gram matrix of A
val A_GM = A_T.multiply(A)
println("THE GRAM MATRIX OF \n $A \n\n IS \n\n $A_GM\n")
```

```
THE GRAM MATRIX OF
 3x3
     [,1]      [,2]      [,3]
[1,] 1.000000, 2.000000, 3.000000,
[2,] 4.000000, 5.000000, 6.000000,
[3,] -1.000000, -2.000000, -3.000000,

 IS

 3x3
      [,1]       [,2]       [,3]
[1,] 18.000000, 24.000000, 30.000000,
[2,] 24.000000, 33.000000, 42.000000,
[3,] 30.000000, 42.000000, 54.000000,
```
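The positive semidefiniteness claimed for the Gram matrix can be confirmed numerically (a NumPy sketch; NumPy is assumed, not S2):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [-1.0, -2.0, -3.0]])
G = A.T @ A      # the Gram matrix, same as the S2 result above

# x^T (A^T A) x = |A x|^2 >= 0, so every eigenvalue of G is non-negative
print(np.all(np.linalg.eigvalsh(G) >= -1e-9))  # True
```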

Let’s define the matrix √(AᵀA) to be Q√D Q⁻¹, where QDQ⁻¹ is the diagonalization of AᵀA and √D is the matrix obtained by taking the square roots of the diagonal entries of D. Then √(AᵀA) is symmetric and satisfies:

√(AᵀA) √(AᵀA) = AᵀA

The matrix √(AᵀA) is easier to understand than A because it is symmetric and positive semidefinite. However, it transforms space in nearly the same way as A. If x is any vector, then:

|√(AᵀA)x|² = xᵀ(AᵀA)x = |Ax|²

In other words, for all x, the images of x under A and under √(AᵀA) have equal norm. This means that for each x, there is an orthogonal transformation from the range of √(AᵀA) to the range of A which sends √(AᵀA)x to Ax. Remarkably, this orthogonal transformation is the same for all x.

**Fig:** From the figure above, we can observe that the grid-line images under √(AᵀA) and A have the same shape; they are related by an orthogonal transformation.

This orthogonal transformation can be extended to an orthogonal transformation on all of ℝⁿ even if the range of A is not all of ℝⁿ. Hence, we arrive at the polar decomposition.

**Polar Decomposition:**

For any square matrix A, there exists an orthogonal matrix R such that A = R√(AᵀA).

Let us implement the polar decomposition using S2.

```
%use s2
// define matrix A
val A = DenseMatrix(arrayOf(
    doubleArrayOf(4.0, 5.0),
    doubleArrayOf(6.0, 7.0)))
// AT is the transpose of A
val AT = A.t()
// ATA = AT * A
val ATA = AT.multiply(A)
println(ATA)
```

```
2x2
      [,1]       [,2]
[1,] 52.000000, 62.000000,
[2,] 62.000000, 74.000000,
```

```
%use s2
// define a matrix sqrt(A'A) from the previous results
// (caution: this takes element-wise square roots of A'A, which is not the
// true matrix square root Q*sqrt(D)*Q'; the resulting R is therefore not
// orthogonal)
val RATA = DenseMatrix(arrayOf(
    doubleArrayOf((52.0).pow(0.5), (62.0).pow(0.5)),
    doubleArrayOf((62.0).pow(0.5), (74.0).pow(0.5))))
// matrix A
val A = DenseMatrix(arrayOf(
    doubleArrayOf(4.0, 5.0),
    doubleArrayOf(6.0, 7.0)))
// calculate the inverse of sqrt(A'A)
val INVRATA = Inverse(RATA)
// R = A * Inverse(sqrt(A'A))
val R = A.multiply(INVRATA)
println("R =\n\n $R\n")
// verification
val B = R.multiply(RATA)
println("\nR*sqrt(A'A) = \n\n $B \n\nSAME AS\n\n A = \n\n$A\n")
```

```
R =

 2x2
       [,1]        [,2]
[1,] -153.822883, 141.380679,
[2,] -108.655461, 100.269860,

R*sqrt(A'A) =

 2x2
     [,1]     [,2]
[1,] 4.000000, 5.000000,
[2,] 6.000000, 7.000000,

SAME AS

 A =

2x2
     [,1]     [,2]
[1,] 4.000000, 5.000000,
[2,] 6.000000, 7.000000,
```
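One caveat about the cell above: taking element-wise square roots of AᵀA is not the matrix square root √(AᵀA), which is why the R printed above is not orthogonal. A NumPy sketch (NumPy assumed, not S2) of the true polar decomposition, using the eigendecomposition of AᵀA:

```python
import numpy as np

A = np.array([[4.0, 5.0],
              [6.0, 7.0]])

# true matrix square root of A^T A via its eigendecomposition
w, Q = np.linalg.eigh(A.T @ A)
S = Q @ np.diag(np.sqrt(w)) @ Q.T       # S = sqrt(A^T A)

R = A @ np.linalg.inv(S)                # polar decomposition A = R S
print(np.allclose(R.T @ R, np.eye(2)))  # True: R is orthogonal
print(np.allclose(R @ S, A))            # True: R * sqrt(A^T A) recovers A
```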