In the previous topic, we learned about the polar decomposition and its implementation using S2. In this topic, let us learn about several other decompositions and their respective implementations using S2.
Reducing a matrix to its constituent parts is known as a matrix decomposition.
This approach simplifies more complex matrix operations, since it is usually easier to perform them on the decomposed factors than on the original matrix itself.
Let us take a closer look at various matrix decomposition methods.
LU Decomposition
This method decomposes a square matrix $A$ into two matrices, $L$ and $U$, such that
$$A = LU$$
where:
$L$ is a lower-triangular matrix and
$U$ is an upper-triangular matrix.
Doolittle Algorithm:
It is always possible to decompose a square matrix into a lower triangular matrix and an upper triangular matrix, possibly after reordering its rows, i.e., $A = LU$ (or $PA = LU$ with a permutation matrix $P$).
- In this algorithm, we assume that, for an $n \times n$ matrix $A$, there exists an $LU$ decomposition, and we write the form of $L$ and $U$ explicitly.
- Now, we solve for $L$ and $U$ systematically from the equation $A = LU$; a hand-written sketch of these recurrences follows.
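To make the recurrences concrete, here is a minimal pure-Kotlin sketch of the Doolittle update (the function name doolittle is ours; it assumes no pivoting is required, i.e. every pivot u[i][i] it divides by is nonzero). S2's Doolittle class, used next, is the production version.
/*Doolittle recurrences, written out by hand (illustrative sketch)*/
fun doolittle(a: Array<DoubleArray>): Pair<Array<DoubleArray>, Array<DoubleArray>> {
    val n = a.size
    val l = Array(n) { DoubleArray(n) }
    val u = Array(n) { DoubleArray(n) }
    for (i in 0 until n) {
        // row i of U: u[i][j] = a[i][j] - sum_{k<i} l[i][k] * u[k][j]
        for (j in i until n) {
            var s = 0.0
            for (k in 0 until i) s += l[i][k] * u[k][j]
            u[i][j] = a[i][j] - s
        }
        l[i][i] = 1.0 // Doolittle convention: unit diagonal on L
        // column i of L: l[j][i] = (a[j][i] - sum_{k<i} l[j][k] * u[k][i]) / u[i][i]
        for (j in i + 1 until n) {
            var s = 0.0
            for (k in 0 until i) s += l[j][k] * u[k][i]
            l[j][i] = (a[j][i] - s) / u[i][i]
        }
    }
    return Pair(l, u)
}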
/*LU decomposition by the Doolittle process*/
%use s2
// Create a matrix
val A1 = DenseMatrix(arrayOf(
    doubleArrayOf(1.0, 2.0, 3.0),
    doubleArrayOf(4.0, 5.0, 6.0),
    doubleArrayOf(7.0, 8.0, 9.0)))
// LU decomposition
val LU = Doolittle(A1)
val L = LU.L()
val U = LU.U()
println("doolittle LU decomposition:\nL: $L,\nU: $U\n")
// verification
val A2 = L.multiply(U) // note: with row pivoting, L*U reproduces A1 up to a permutation of its rows
val ZERO = A1.minus(A2)
val det: Double = MatrixMeasure.det(ZERO)
// a zero determinant is a necessary, not sufficient, check that ZERO is the zero matrix
println("A1 - A2 = $det")
doolittle LU decomposition:
L: 3x3
     [,1] [,2] [,3]
[1,] 1.000000, 0.000000, 0.000000,
[2,] 0.142857, 1.000000, 0.000000,
[3,] 0.571429, 0.500000, 1.000000,
U: 3x3
     [,1] [,2] [,3]
[1,] 7.000000, 8.000000, 9.000000,
[2,] 0.000000, 0.857143, 1.714286,
[3,] 0.000000, 0.000000, 0.000000,

A1 - A2 = 0.0
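Once an LU factorization is in hand, solving a linear system $Ax = b$ reduces to two cheap triangular solves: first $Ly = b$ by forward substitution, then $Ux = y$ by back substitution. Below is a minimal pure-Kotlin sketch of the two substitutions (the function names are ours, and a nonsingular $U$ is assumed; note that the matrix above is singular, so this applies to invertible inputs).
/*Solving A*x = b with the triangular factors (illustrative sketch)*/
// forward substitution: solve L*y = b for lower-triangular L
fun forwardSubstitute(l: Array<DoubleArray>, b: DoubleArray): DoubleArray {
    val n = b.size
    val y = DoubleArray(n)
    for (i in 0 until n) {
        var s = b[i]
        for (k in 0 until i) s -= l[i][k] * y[k]
        y[i] = s / l[i][i]
    }
    return y
}
// back substitution: solve U*x = y for upper-triangular, nonsingular U
fun backSubstitute(u: Array<DoubleArray>, y: DoubleArray): DoubleArray {
    val n = y.size
    val x = DoubleArray(n)
    for (i in n - 1 downTo 0) {
        var s = y[i]
        for (k in i + 1 until n) s -= u[i][k] * x[k]
        x[i] = s / u[i][i]
    }
    return x
}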
QR Decomposition
This method decomposes a matrix $A$ (not limited to square matrices) into two matrices, $Q$ and $R$, such that
$$A = QR$$
where:
$A$ is an $m \times n$ matrix,
$Q$ is an $m \times m$ orthogonal matrix, and
$R$ is an $m \times n$ upper-triangular matrix.
Gram–Schmidt Process:
In this process, we treat the columns $a_1, \dots, a_n$ of the matrix whose decomposition we wish to compute as a set of vectors and orthonormalize them one at a time, recording the projection coefficients along the way; see the sketch below.
The resulting QR decomposition is $A = QR$.
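Before calling the library, here is what the process looks like in plain Kotlin: a minimal sketch assuming the columns of A are linearly independent (the function name gramSchmidtQR is ours). S2's GramSchmidt, used next, is the production version.
/*Classical Gram-Schmidt, written out by hand (illustrative sketch)*/
import kotlin.math.sqrt

// orthonormalize the columns of a (m x n, m >= n); returns (Q, R)
fun gramSchmidtQR(a: Array<DoubleArray>): Pair<Array<DoubleArray>, Array<DoubleArray>> {
    val m = a.size
    val n = a[0].size
    val q = Array(m) { DoubleArray(n) }  // orthonormal columns
    val r = Array(n) { DoubleArray(n) }  // upper-triangular coefficients
    for (j in 0 until n) {
        val v = DoubleArray(m) { i -> a[i][j] } // start from column j of a
        for (k in 0 until j) {
            // r[k][j] = <q_k, a_j>; subtract the projection onto q_k
            var dot = 0.0
            for (i in 0 until m) dot += q[i][k] * a[i][j]
            r[k][j] = dot
            for (i in 0 until m) v[i] -= dot * q[i][k]
        }
        // normalize what is left to get column j of q
        var norm = 0.0
        for (i in 0 until m) norm += v[i] * v[i]
        r[j][j] = sqrt(norm)
        for (i in 0 until m) q[i][j] = v[i] / r[j][j]
    }
    return Pair(q, r)
}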
/*QR decomposition by the Gram-Schmidt process*/
%use s2
// Create a matrix
val A1 = DenseMatrix(arrayOf(
    doubleArrayOf(3.0, 2.0),
    doubleArrayOf(1.0, 2.0)))
// QR decomposition by the Gram-Schmidt
val GS = GramSchmidt(A1, true, 0.0) // true: parallel computation
val Q = GS.Q()
val R = GS.R()
println("gram-schmidt QR decomposition:\nQ: $Q,\nR: $R\n")
// verification
val A2 = Q.multiply(R) // Q*R = A1 = A2
val det: Double = MatrixMeasure.det(A1.minus(A2))
println("A1 - A2 = $det")
gram-schmidt QR decomposition:
Q: 2x2
     [,1] [,2]
[1,] 0.948683, -0.316228,
[2,] 0.316228, 0.948683,
R: 2x2
     [,1] [,2]
[1,] 3.162278, 2.529822,
[2,] 0.000000, 1.264911,

A1 - A2 = 0.0
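Because $Q$ is orthogonal, $Q^T Q$ should be the identity. Continuing the same session, a quick extra check that uses only the calls already seen above:
// Q' * Q should print as the 2x2 identity matrix
println("Q'Q = ${Q.t().multiply(Q)}")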
Cholesky Decomposition
This method of decomposition applies to square symmetric matrices whose eigenvalues are all greater than zero (positive definite matrices). The Cholesky decomposition is defined as follows:
$$A = LL^T$$
where:
$A$ is the square symmetric matrix being decomposed,
$L$ is a lower triangular matrix, and
$L^T$ is the transpose of $L$.
This decomposition can also be written as a product involving an upper triangular matrix:
$$A = U^T U$$
where $U$ is an upper triangular matrix. A sketch of the recurrence used to compute $L$ follows.
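Here is a minimal pure-Kotlin sketch of that recurrence, the Cholesky–Banachiewicz scheme (the function name cholesky is ours; it assumes A is symmetric positive definite, so every square-root argument and pivot is positive). S2's Chol, used next, is the production version.
/*Cholesky-Banachiewicz recurrence, written out by hand (illustrative sketch)*/
import kotlin.math.sqrt

fun cholesky(a: Array<DoubleArray>): Array<DoubleArray> {
    val n = a.size
    val l = Array(n) { DoubleArray(n) }
    for (i in 0 until n) {
        for (j in 0..i) {
            var s = 0.0
            for (k in 0 until j) s += l[i][k] * l[j][k]
            if (i == j) {
                l[i][j] = sqrt(a[i][i] - s)       // diagonal entry
            } else {
                l[i][j] = (a[i][j] - s) / l[j][j] // below-diagonal entry
            }
        }
    }
    return l
}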
/*Cholesky decomposition*/
%use s2
// Create a matrix
val A1 = DenseMatrix(arrayOf(
doubleArrayOf(2.0, 1.0, 1.0),
doubleArrayOf(1.0, 2.0, 1.0),
doubleArrayOf(1.0, 1.0, 2.0)))
// Cholesky decomposition
val cholesky = Chol(A1)
println("cholesky decomposition: L: ${cholesky.L()}")
// verification
val L = cholesky.L()
val A2 = L.multiply(L.t()) // L * L' = A1 = A2
println("$A1 =\n$A2")
cholesky decomposition: L: 3x3
     [,1] [,2] [,3]
[1,] 1.414214, 0.000000, 0.000000,
[2,] 0.707107, 1.224745, 0.000000,
[3,] 0.707107, 0.408248, 1.154701,
3x3
     [,1] [,2] [,3]
[1,] 2.000000, 1.000000, 1.000000,
[2,] 1.000000, 2.000000, 1.000000,
[3,] 1.000000, 1.000000, 2.000000,
=
3x3
     [,1] [,2] [,3]
[1,] 2.000000, 1.000000, 1.000000,
[2,] 1.000000, 2.000000, 1.000000,
[3,] 1.000000, 1.000000, 2.000000,
Singular Value Decomposition
The singular value decomposition (SVD) is one of the most powerful ideas in linear algebra. It is obtained by combining the polar decomposition and the spectral theorem that we discussed in the previous topics.
Suggestion: review the polar decomposition and the spectral theorem for a better understanding of the SVD.
The polar decomposition tells us that any square matrix A is almost the same as some symmetric matrix,
and the spectral theorem tells us that a symmetric matrix is almost the same as a simple scaling along the
coordinate axes. (In both cases, the phrase “almost the same” disguises a composition with an orthogonal
transformation.) We should be able to combine these ideas to conclude that any square matrix is basically
the same as a simple scaling along the coordinate axes!
To be more precise, suppose that $A$ is a square matrix. Then the polar decomposition tells us that:
$$A = QS$$
where $Q$ is an orthogonal matrix (and $S = \sqrt{A^T A}$ is symmetric with nonnegative eigenvalues).
The spectral theorem tells us that:
$$S = V \Sigma V^T$$
where:
$V$ is an orthogonal matrix, and
$\Sigma$ is a diagonal matrix with nonnegative diagonal entries.
Combining both equations, we get:
$$A = Q V \Sigma V^T$$
Since a product of orthogonal matrices is orthogonal, we can define $U = QV$ and obtain the singular value decomposition (SVD) of $A$:
$$A = U \Sigma V^T$$
where $U$ and $V$ are orthogonal matrices.
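As a tiny worked example of this chain, consider a diagonal matrix with one negative entry:
$$A = \begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}, \qquad
S = \sqrt{A^T A} = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}, \qquad
Q = A S^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
Here $S$ is already diagonal, so the spectral theorem gives $V = I$ and $\Sigma = S$; hence $U = QV = Q$ and $A = U \Sigma V^T$ with singular values $3$ and $2$.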

We can visualize this geometrically by making a figure like the one shown here, which illustrates the successive effects of each map in the composition $U \Sigma V^T$. If we draw grid lines on the second plot (just before $\Sigma$ is applied) and propagate those grid lines to the other plots, then we endow the domain and range of $A$ with orthogonal sets of gridlines, with $A$ mapping one set to the other.
Definition:
Suppose that $A$ is an $m \times n$ matrix with $m \ge n$. Then there exist matrices $U$ and $V$ with orthonormal columns and a diagonal matrix $\Sigma$ such that:
$$A = U \Sigma V^T$$
where:
$U$ is an $m \times n$ matrix,
$\Sigma$ is an $n \times n$ matrix, and
$V$ is an $n \times n$ matrix.
Similarly, if $m < n$, there exist $U$, $\Sigma$, and $V$ satisfying the above properties such that:
$$A = U \Sigma V^T$$
where:
$U$ is an $m \times m$ matrix,
$\Sigma$ is an $m \times m$ matrix, and
$V$ is an $n \times m$ matrix.
We call $A = U \Sigma V^T$ the singular value decomposition (or SVD) of $A$. The diagonal entries of $\Sigma$ are called the singular values of $A$.
/*SVD Decomposition*/
%use s2
// Create a matrix
val A1 = DenseMatrix(arrayOf(
doubleArrayOf(1.5, 2.0, 4.96, 6.37, 36.01),
doubleArrayOf(3.0, 4.0, 99.0, 12.24, 369.6),
doubleArrayOf(73.0, 63.4, 0.19, 24.0, 99.39),
doubleArrayOf(25.36, 9.31, 66.78, 88.12, 36.03),
doubleArrayOf(15.51, 63.25, 36.1, 45.68, 111.0),
doubleArrayOf(55.0, 66.0, 12.3, 79.0, 121.0),
doubleArrayOf(55.5, 66.9, 12.0, 79.98, 97.001),
doubleArrayOf(55.0, 66.0, 12.0, 79.0, 97.0)))
// SVD decomposition
val svd = SVD(A1, true) // true: compute also U and V in addition to the singular values
val D = svd.D()
val U = svd.U()
val V = svd.V()
println("SVD decomposition:\nU: $U,\nD: $D,\nV: $V\n")
// verification
val A2 = U.multiply(D).multiply(V.t()) // U*D*V' = A1 = A2
val ZERO = A1.minus(A2)
println("$ZERO is 0")
SVD decomposition:
U: 8x5
     [,1] [,2] [,3] [,4] [,5]
[1,] -0.075225, -0.034244, 0.007191, -0.008976, -0.191550,
[2,] -0.757550, -0.630250, -0.051853, 0.097192, -0.067278,
[3,] -0.251817, 0.259223, 0.513336, 0.518812, 0.568252,
[4,] -0.154635, 0.262590, -0.849993, 0.326621, 0.263552,
[5,] -0.281704, 0.145986, -0.029208, -0.781017, 0.526654,
[6,] -0.320250, 0.354995, 0.085016, -0.038076, -0.407829,
[7,] -0.275482, 0.402217, 0.039365, -0.042877, -0.247181,
[8,] -0.274460, 0.395411, 0.040333, -0.036741, -0.248243,
D: 5x5
     [,1] [,2] [,3] [,4] [,5]
[1,] 479.129823, 0.000000, 0.000000, 0.000000, 0.000000,
[2,] 0.000000, 193.023289, 0.000000, 0.000000, 0.000000,
[3,] 0.000000, 0.000000, 91.257422, 0.000000, 0.000000,
[4,] 0.000000, 0.000000, 0.000000, 34.952854, 0.000000,
[5,] 0.000000, 0.000000, 0.000000, 0.000000, 26.036639,
V: 5x5
     [,1] [,2] [,3] [,4] [,5]
[1,] -0.160827, 0.463675, 0.267363, 0.796111, 0.232089,
[2,] -0.200538, 0.528220, 0.367073, -0.597987, 0.434093,
[3,] -0.222179, -0.133515, -0.666413, 0.053482, 0.697024,
[4,] -0.232306, 0.619347, -0.563828, -0.075841, -0.488660,
[5,] -0.911367, -0.323376, 0.178230, -0.002613, -0.181841,

8x5
     [,1] [,2] [,3] [,4] [,5]
[1,] -0.000000, -0.000000, -0.000000, -0.000000, -0.000000,
[2,] -0.000000, -0.000000, -0.000000, -0.000000, -0.000000,
[3,] 0.000000, -0.000000, 0.000000, 0.000000, -0.000000,
[4,] -0.000000, 0.000000, -0.000000, -0.000000, -0.000000,
[5,] -0.000000, -0.000000, -0.000000, 0.000000, -0.000000,
[6,] 0.000000, -0.000000, -0.000000, -0.000000, -0.000000,
[7,] 0.000000, -0.000000, -0.000000, -0.000000, -0.000000,
[8,] 0.000000, -0.000000, -0.000000, -0.000000, -0.000000,
is 0
Hessenberg Decomposition
This method of decomposition applies to square matrices and is defined as follows:
$$A = Q H Q^T$$
where:
$A$ is a square matrix,
$H$ is an upper Hessenberg matrix, and
$Q$ is an orthogonal matrix.
In this case, $Q^T Q = I$, and so $H = Q^T A Q$.
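An upper Hessenberg matrix has zeros everywhere below the first subdiagonal, i.e. $H_{ij} = 0$ whenever $i > j + 1$. Here is a tiny pure-Kotlin structural check (a sketch; the function name and tolerance are ours):
/*Structural check for upper Hessenberg form (illustrative sketch)*/
import kotlin.math.abs

// returns true if h is zero (within tol) below the first subdiagonal
fun isUpperHessenberg(h: Array<DoubleArray>, tol: Double = 1e-12): Boolean {
    for (i in h.indices)
        for (j in h[i].indices)
            if (i > j + 1 && abs(h[i][j]) > tol) return false
    return true
}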
/*Hessenberg Decomposition*/
%use s2
// Create a matrix
val A1 = DenseMatrix(arrayOf(
    doubleArrayOf(1.0, 5.0, 7.0),
    doubleArrayOf(3.0, 0.0, 6.0),
    doubleArrayOf(4.0, 3.0, 1.0)))
// Hessenberg decomposition
val decomp = HessenbergDecomposition(A1)
val H1 = decomp.H()
val Q = decomp.Q()
println("hessenberg decomposition:\nH: $H1,\nQ: $Q\n")
// verification
val H2 = Q.t().multiply(A1).multiply(Q) // Q'*A1*Q = H1 = H2
val ZERO = H1.minus(H2)
val det: Double = MatrixMeasure.det(ZERO)
println("H1 - H2 = $det")
hessenberg decomposition:
H: 3x3
     [,1] [,2] [,3]
[1,] 1.000000, -8.600000, 0.200000,
[2,] -5.000000, 4.960000, -0.720000,
[3,] 0.000000, 2.280000, -3.960000,
Q: 3x3
     [,1] [,2] [,3]
[1,] 1.000000, 0.000000, 0.000000,
[2,] 0.000000, -0.600000, -0.800000,
[3,] 0.000000, -0.800000, 0.600000,

H1 - H2 = 0.0