**Solving a System of Linear Equations**

So far, we have only been using graphical methods to obtain the solution, but there are also ways to do it algebraically. We will be looking at how to obtain these solutions using Substitution and Gaussian Elimination.

**Substitution Method**

First, let’s take the example of this system of equations.

1.

2.

3.

We can write one variable in terms of the others and substitute it into the other equations. From the third equation,

We then substitute this expression into the second equation,

We will get an equation of one variable in terms of another.

Then we proceed to substitute these expressions into the first equation.

With this value, we can then solve for the remaining variables by substituting back the values we have found.

Therefore, the solution to this system is
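To make the procedure concrete, here is a sketch of the substitution steps in plain Java. Since the example equations are not reproduced above, the sketch borrows the computer-shop system solved later in this chapter ($2x + y - z = 7$, $2x - y + 2z = 6$, $x - 2y + z = 0$):

```java
public class SubstitutionDemo {
    public static void main(String[] args) {
        // System (from the shop example later in this chapter):
        //   2x + y  -  z = 7
        //   2x - y  + 2z = 6
        //    x - 2y +  z = 0
        // From equation 3: x = 2y - z
        // Substitute into equation 2: 2(2y - z) - y + 2z = 6  ->  3y = 6
        double y = 6.0 / 3.0;             // y = 2
        // Substitute x = 2y - z into equation 1: 5y - 3z = 7
        double z = (5.0 * y - 7.0) / 3.0; // z = 1
        double x = 2.0 * y - z;           // x = 3
        System.out.println(x + " " + y + " " + z); // prints 3.0 2.0 1.0
        // Check all three original equations
        System.out.println(2*x + y - z == 7 && 2*x - y + 2*z == 6 && x - 2*y + z == 0);
    }
}
```

Each line mirrors one manual substitution step; the final check confirms the triple satisfies all three equations.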

**Gaussian Elimination Method**

Gaussian elimination is an algorithm for solving systems of linear equations. It involves transforming a matrix, which represents a linear system of equations, into Row Echelon form, from which the system can easily be solved.

Firstly, to represent a system with a matrix, we will follow the format of the augmented matrix

$$\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{array}\right]$$

where the system of equations is

1. $a_{11}x + a_{12}y + a_{13}z = b_1$

2. $a_{21}x + a_{22}y + a_{23}z = b_2$

3. $a_{31}x + a_{32}y + a_{33}z = b_3$

The columns correspond to the individual variables (e.g. the first column holds the coefficients of $x$, the second those of $y$, etc.) and the rows represent the equations (the first row represents the first equation, etc.).

**Row Echelon Form**

By writing the system of equations as a matrix, we are able to simplify the matrix into row echelon form to compute the solution. Firstly, what is row echelon form? A matrix is in row echelon form if it fulfills these conditions:

1. All rows consisting of only zeroes are at the bottom.

2. The leading coefficient (also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above.

In summary, a matrix in row echelon form will look something like this:

$$\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ 0 & a_{22} & a_{23} & b_2 \\ 0 & 0 & a_{33} & b_3 \end{array}\right]$$

From this matrix, one can determine the value of the last variable from the bottom row, then back-substitute that value into the row above to solve for the next variable, and so on, until all the variables are solved.
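A minimal back-substitution routine makes this concrete. The sketch below is plain Java (not NM Dev) and assumes a 3×3 upper triangular system; the numbers are the row echelon values produced in the worked example later in this chapter:

```java
public class BackSubstitution {
    // Solve Ux = b for an upper triangular U by back substitution:
    // start from the last row (one unknown) and substitute upward.
    static double[] backSubstitute(double[][] U, double[] b) {
        int n = b.length;
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double s = b[i];
            for (int j = i + 1; j < n; j++) s -= U[i][j] * x[j];
            x[i] = s / U[i][i];
        }
        return x;
    }
    public static void main(String[] args) {
        // Row echelon matrix from this chapter's worked example
        double[][] U = {{2.0, 1.0, -1.0}, {0.0, -2.5, 1.5}, {0.0, 0.0, 1.8}};
        double[] b = {7.0, -3.5, 1.8};
        double[] x = backSubstitute(U, b);
        System.out.println(java.util.Arrays.toString(x)); // prints [3.0, 2.0, 1.0]
    }
}
```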

**Reduced Row Echelon Form**

You can further simplify the matrix into reduced row echelon form, where the leading entry in each nonzero row is a 1 (called a leading 1) and each column containing a leading 1 has zeros in all its other entries. The matrix will therefore look like this:

$$\left[\begin{array}{ccc|c} 1 & 0 & 0 & c_1 \\ 0 & 1 & 0 & c_2 \\ 0 & 0 & 1 & c_3 \end{array}\right]$$

which allows us to simply read off the values of the variables by just looking at the matrix.

**Thinking Time**:

Which is more efficient to use, Row Echelon form or Reduced Row Echelon form? Would there be situations where the Row Echelon form would be enough to straight away solve a system of equations without further reducing to Reduced Row Echelon form?

**Transformation Operations**

There are mainly three ways to simplify a matrix into Row Echelon form, which are

1. Row switching: switch one row with another row

2. Row multiplication: multiply a row by a non–zero constant

3. Row addition/subtraction: add the multiple of another row to a row.

Row Switching simply swaps the position of one row with another. In this case,

We can switch Row 2 and Row 3 to turn the matrix into Row Echelon form, where it will look like

Row Multiplication can be used to scale a row by a factor of a non-zero constant $n$.

In this example, we scaled the second row by a factor of 2.

Row Addition/Subtraction can be used to add or subtract a multiple of one row to or from another row. A matrix

can be converted into Row Echelon form by subtracting 1/2 of Row 2 from Row 3, resulting in
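The three operations can be sketched as small plain-Java helpers. The demo matrix here is hypothetical, chosen only to show each operation's effect:

```java
import java.util.Arrays;

public class RowOps {
    // Row switching: swap row i and row j
    static void swap(double[][] m, int i, int j) {
        double[] t = m[i]; m[i] = m[j]; m[j] = t;
    }
    // Row multiplication: scale row i by a non-zero constant c
    static void scale(double[][] m, int i, double c) {
        for (int k = 0; k < m[i].length; k++) m[i][k] *= c;
    }
    // Row addition: add c times row j to row i
    static void addMultiple(double[][] m, int i, int j, double c) {
        for (int k = 0; k < m[i].length; k++) m[i][k] += c * m[j][k];
    }
    public static void main(String[] args) {
        double[][] m = {{0.0, 1.0, 2.0}, {1.0, 0.0, 3.0}}; // hypothetical demo matrix
        swap(m, 0, 1);             // rows become {1,0,3}, {0,1,2}
        scale(m, 1, 2.0);          // second row becomes {0,2,4}
        addMultiple(m, 0, 1, 1.0); // first row becomes {1,2,7}
        for (double[] row : m) System.out.println(Arrays.toString(row));
    }
}
```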

**Real-life Example**

Let's use a real-life example to illustrate turning a system of equations into matrix form, then simplifying to row echelon form and reduced row echelon form.

Assume that you want to buy a computer from a peculiar computer shop, where you can freely customise all of the parts by adding them to or removing them from your laptop, which adds to or subtracts from its cost. You can customise three main components, the CPU, SSD and GPU, whose prices we will define as $x$, $y$ and $z$ respectively. You want to find out how much each component actually costs, and hence decide to run an experiment. You test 3 combinations, where you either add or remove some of these three components. In the first combination, you added 2 CPUs and an SSD while removing a GPU, which cost $7. In the second, you added 2 CPUs and 2 GPUs while removing an SSD, which cost $6. For the last, you added a CPU and a GPU while removing 2 SSDs, costing $0.

You can write the three combinations as this system of equations,

1. $2x + y - z = 7$

2. $2x - y + 2z = 6$

3. $x - 2y + z = 0$

which can be represented as the augmented matrix

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 7 \\ 2 & -1 & 2 & 6 \\ 1 & -2 & 1 & 0 \end{array}\right]$$

The system is now represented as a matrix. To simplify it to Row Echelon form, we first need to eliminate the leading coefficient of Row 2, which is a 2. We simply subtract Row 1 from Row 2 to turn it into a 0:

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 7 \\ 0 & -2 & 3 & -1 \\ 1 & -2 & 1 & 0 \end{array}\right]$$

Next, we have to turn the leading coefficient of Row 3 into a 0. This can be done by subtracting 1/2 of Row 1 from Row 3:

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 7 \\ 0 & -2 & 3 & -1 \\ 0 & -2.5 & 1.5 & -3.5 \end{array}\right]$$

Next, we eliminate the remaining entry below the second pivot, which is now -2.5 in Row 3. This is done by subtracting 5/4 of Row 2 from Row 3:

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 7 \\ 0 & -2 & 3 & -1 \\ 0 & 0 & -2.25 & -2.25 \end{array}\right]$$
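The three subtraction steps above can be checked with a short dependency-free sketch (plain Java rather than NM Dev; the helper `sub` is just for illustration):

```java
import java.util.Arrays;

public class ManualElimination {
    public static void main(String[] args) {
        // Augmented matrix for the shop system: 2x+y-z=7, 2x-y+2z=6, x-2y+z=0
        double[][] m = {{2.0, 1.0, -1.0, 7.0}, {2.0, -1.0, 2.0, 6.0}, {1.0, -2.0, 1.0, 0.0}};
        sub(m, 1, 0, 1.0);  // R2 := R2 - R1
        sub(m, 2, 0, 0.5);  // R3 := R3 - (1/2) R1
        sub(m, 2, 1, 1.25); // R3 := R3 - (5/4) R2
        for (double[] row : m) System.out.println(Arrays.toString(row));
        // prints:
        // [2.0, 1.0, -1.0, 7.0]
        // [0.0, -2.0, 3.0, -1.0]
        // [0.0, 0.0, -2.25, -2.25]
    }
    static void sub(double[][] m, int i, int j, double c) { // Ri := Ri - c * Rj
        for (int k = 0; k < m[i].length; k++) m[i][k] -= c * m[j][k];
    }
}
```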

The matrix is now in Row Echelon form. We can also do the same by using the NM Dev class GaussianElimination, which performs elementary row operations to reduce a matrix to row echelon form.

```
%use s2
// a is the augmented matrix for the linear system
var a = DenseMatrix(arrayOf(
    doubleArrayOf(2.0, 1.0, -1.0, 7.0),
    doubleArrayOf(2.0, -1.0, 2.0, 6.0),
    doubleArrayOf(1.0, -2.0, 1.0, 0.0)
))
// GaussianElimination(A, usePivoting, epsilon) takes in 3 parameters:
// A - a matrix
// usePivoting - true to use partial pivoting, e.g., for numerical stability
// epsilon - a precision parameter: when a number |x| <= epsilon, it is considered 0
val ops: GaussianElimination = GaussianElimination(a, true, 0.0)
// GaussianElimination has 4 methods, but we will only be looking at one,
// U(), which returns an upper triangular matrix,
// i.e. the matrix in row echelon form
val ans: Matrix = ops.U()
print(ans)
```

output:
```
3x4
     [,1]      [,2]      [,3]      [,4]
[1,] 2.000000, 1.000000, -1.000000, 7.000000,
[2,] 0.000000, -2.500000, 1.500000, -3.500000,
[3,] 0.000000, 0.000000, 1.800000, 1.800000,
```

This returns a different matrix that is also in Row Echelon form (partial pivoting chooses the rows in a different order), which looks like

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 7 \\ 0 & -2.5 & 1.5 & -3.5 \\ 0 & 0 & 1.8 & 1.8 \end{array}\right]$$

Now, we can begin deriving a solution from this simplified matrix. From the third row of our manually reduced matrix, $-2.25z = -2.25$, we can already tell that

$$z = 1$$

Using the value $z = 1$, we can then derive the value of $y$ using Row 2, $-2y + 3z = -1$:

$$y = 2$$

Lastly, we can use $y = 2$ and $z = 1$ to derive the value of $x$ in Row 1, $2x + y - z = 7$:

$$x = 3$$

Therefore, using a matrix in Row Echelon form, we have solved the system, where

$$x = 3,\quad y = 2,\quad z = 1$$

We can use the NM Dev class BackwardSubstitution to implement a backward substitution algorithm that computes the answer in the same way as our manual working. It works on an upper triangular matrix, which in our case means a matrix in Row Echelon form, and starts by computing the last variable $z$, then substitutes it backward into the equation above to solve for $y$, and repeats until $x$ is solved. An example of the code is shown below.

```
val tri: UpperTriangularMatrix = UpperTriangularMatrix(
    arrayOf(
        doubleArrayOf(2.0, 1.0, -1.0),
        doubleArrayOf(-2.5, 1.5),
        doubleArrayOf(1.8)
    )) // the coefficient part of the matrix, in upper triangular shape
val b: Vector = ans.getColumn(4) // the column of constants
val solver: BackwardSubstitution = BackwardSubstitution()
print(solver.solve(tri, b))
```

output:
```
[3.000000, 2.000000, 1.000000]
```

This gives us the same answer of

$$x = 3,\quad y = 2,\quad z = 1$$

We have now deduced that a CPU costs $3, while an SSD costs $2 and a GPU costs $1.

We are also able to further reduce the matrix into Reduced Row Echelon form. Starting from this matrix,

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 7 \\ 0 & -2.5 & 1.5 & -3.5 \\ 0 & 0 & 1.8 & 1.8 \end{array}\right]$$

we will manipulate each leading entry into a 1 while ensuring that each column containing a leading 1 has zeros in all its other entries. Firstly, we can divide Row 3 by 1.8,

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 7 \\ 0 & -2.5 & 1.5 & -3.5 \\ 0 & 0 & 1 & 1 \end{array}\right]$$

then we can divide Row 2 by -2.5,

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 7 \\ 0 & 1 & -0.6 & 1.4 \\ 0 & 0 & 1 & 1 \end{array}\right]$$

then we can subtract Row 2 from Row 1 to give us

$$\left[\begin{array}{ccc|c} 2 & 0 & -0.4 & 5.6 \\ 0 & 1 & -0.6 & 1.4 \\ 0 & 0 & 1 & 1 \end{array}\right]$$

From here, we can divide Row 1 by 2,

$$\left[\begin{array}{ccc|c} 1 & 0 & -0.2 & 2.8 \\ 0 & 1 & -0.6 & 1.4 \\ 0 & 0 & 1 & 1 \end{array}\right]$$

Now, we subtract -0.6 of Row 3 from Row 2 (i.e. add 0.6 of Row 3),

$$\left[\begin{array}{ccc|c} 1 & 0 & -0.2 & 2.8 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 1 \end{array}\right]$$

and add 0.2 of Row 3 to Row 1:

$$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 3 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 1 \end{array}\right]$$

The matrix is finally in Reduced Row Echelon form, and we can easily read the answer directly from the matrix:

$$x = 3,\quad y = 2,\quad z = 1$$
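The reduction to reduced row echelon form can also be sketched without NM Dev. The plain-Java routine below is a minimal Gauss-Jordan pass that assumes the pivots are nonzero in order (so no row swaps are needed, which holds for the shop matrix):

```java
import java.util.Arrays;

public class GaussJordan {
    // Reduce an augmented matrix to reduced row echelon form,
    // assuming each diagonal pivot is nonzero (no row swaps)
    static void rref(double[][] m) {
        int rows = m.length, cols = m[0].length;
        for (int p = 0; p < rows; p++) {
            double pivot = m[p][p];
            for (int k = 0; k < cols; k++) m[p][k] /= pivot;  // make the pivot a 1
            for (int r = 0; r < rows; r++) {                  // zero out the rest of the column
                if (r == p) continue;
                double c = m[r][p];
                for (int k = 0; k < cols; k++) m[r][k] -= c * m[p][k];
            }
        }
    }
    public static void main(String[] args) {
        double[][] m = {{2.0, 1.0, -1.0, 7.0}, {2.0, -1.0, 2.0, 6.0}, {1.0, -2.0, 1.0, 0.0}};
        rref(m);
        for (double[] row : m) System.out.println(Arrays.toString(row));
        // prints:
        // [1.0, 0.0, 0.0, 3.0]
        // [0.0, 1.0, 0.0, 2.0]
        // [0.0, 0.0, 1.0, 1.0]
    }
}
```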

Though reducing the matrix into Reduced Row Echelon form takes more steps to compute, it does not require backward substitution, and the solution is obvious once the matrix is in such a form, which may not be the case for a matrix in Row Echelon form. Taking the same matrix, we can use the NM Dev class GaussJordanElimination to simplify our matrix into Reduced Row Echelon form and obtain the answers for $x$, $y$ and $z$.

```
%use s2
// a is the augmented matrix for the linear system
var a = DenseMatrix(arrayOf(
    doubleArrayOf(2.0, 1.0, -1.0, 7.0),
    doubleArrayOf(2.0, -1.0, 2.0, 6.0),
    doubleArrayOf(1.0, -2.0, 1.0, 0.0)
))
// GaussJordanElimination(A, usePivoting, epsilon) takes in 3 parameters:
// A - a matrix
// usePivoting - true to use partial pivoting, e.g., for numerical stability
// epsilon - a precision parameter: when a number |x| <= epsilon, it is considered 0
val ops: GaussJordanElimination = GaussJordanElimination(a, true, 0.0)
val ans: Matrix = ops.U()
print(ans)
```

output:
```
3x4
     [,1]      [,2]      [,3]      [,4]
[1,] 1.000000, 0.000000, 0.000000, 3.000000,
[2,] 0.000000, 1.000000, 0.000000, 2.000000,
[3,] 0.000000, 0.000000, 1.000000, 1.000000,
```

**Homogeneous and Non-homogeneous Systems**

Usually, in a consistent system of linear equations, the number of equations is smaller than or equal to the number of variables. Otherwise, it is an overdetermined system that usually has no solution. For example, a system with 4 equations that are linearly independent but only 3 variables is an overdetermined system. We can represent a system in the form

$$Ax = b$$

where $A$ is the matrix of coefficients of the unknowns, $x$ is the vector of unknowns, and $b$ is the constant vector. For a system

1. $a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1$

2. $a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2$

3. $a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3$

it can be represented in this form as

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$$

We can view $A$ as a linear transformation, a function that maps from one linear space $V$ to another linear space $W$. The kernel of a linear map, $\ker(A)$, is the linear subspace of the domain whose elements the function maps to 0. In essence, it contains all elements $v$ of $V$ where $Av = 0$.

For any homogeneous equation,

$$Ax = 0$$

there exists the trivial solution where zero is assigned to each of the variables. If $A$ is non-singular ($\det A \neq 0$), then it is the only solution. Otherwise, the dimension (rank) of the kernel of $A$ is greater than 0, and there is a solution set with an infinite number of solutions. Every vector in the kernel is a non-trivial solution. This solution set also has these properties:

1. If $x_1$ and $x_2$ are vectors representing solutions to the homogeneous system, then the vector sum $x_1 + x_2$ is also a solution.

2. If $x$ is a solution to the homogeneous system, then $cx$ is also a solution, provided $c$ is a scalar.
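These two properties can be checked numerically. The sketch below is plain Java; the singular matrix and its kernel vector are taken from the NM Dev example later in this section:

```java
import java.util.Arrays;

public class KernelProperties {
    // y = A x
    static double[] multiply(double[][] A, double[] x) {
        double[] y = new double[A.length];
        for (int i = 0; i < A.length; i++)
            for (int j = 0; j < x.length; j++) y[i] += A[i][j] * x[j];
        return y;
    }
    static double[] axpy(double c, double[] v, double[] u) { // u + c*v
        double[] w = new double[u.length];
        for (int i = 0; i < u.length; i++) w[i] = u[i] + c * v[i];
        return w;
    }
    public static void main(String[] args) {
        // singular matrix and kernel vector from the NM Dev example in this section
        double[][] A = {{0,1,2,-1},{1,0,1,1},{-1,1,0,-1},{0,2,3,-1}};
        double[] k = {-2, -1, 1, 1};
        System.out.println(Arrays.toString(multiply(A, k)));             // A k = 0
        System.out.println(Arrays.toString(multiply(A, axpy(1, k, k)))); // sum: A(k + k) = 0
        System.out.println(Arrays.toString(multiply(A, axpy(4, k, k)))); // scalar multiple: A(5k) = 0
    }
}
```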

For a non-homogeneous equation,

$$Ax = b$$

we can assume $p$ is a particular solution to the linear system such that $Ap = b$. In such a case, for any element $k$ of $\ker(A)$, the solution set of the homogeneous system, $p + k$ is also a solution. Geometrically, the solution set for $Ax = b$ is a translation of the solution set of $Ax = 0$. More specifically, the Euclidean subspace for the first system can be obtained by translating the linear subspace of the homogeneous system by the vector $p$. This only applies if the system has at least one solution, which occurs if and only if the vector $b$ lies in the image of the linear transformation $A$.
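The translation property can be sketched in plain Java. The matrix, particular solution and kernel vector below are taken from the NM Dev example later in this section; every vector $p + ck$ solves $Ax = b$, because $A(p + ck) = Ap + c(Ak) = b + 0$:

```java
import java.util.Arrays;

public class ParticularPlusKernel {
    // y = A x
    static double[] multiply(double[][] A, double[] x) {
        double[] y = new double[A.length];
        for (int i = 0; i < A.length; i++)
            for (int j = 0; j < x.length; j++) y[i] += A[i][j] * x[j];
        return y;
    }
    public static void main(String[] args) {
        double[][] A = {{0,1,2,-1},{1,0,1,1},{-1,1,0,-1},{0,2,3,-1}};
        double[] b = {1, 4, 2, 7};
        double[] p = {9, 11, -5, 0};  // particular solution: A p = b
        double[] k = {-2, -1, 1, 1};  // kernel basis vector: A k = 0
        for (double c : new double[]{0.0, 1.0, -3.0}) {
            double[] x = new double[4];
            for (int i = 0; i < 4; i++) x[i] = p[i] + c * k[i];
            // A(p + c*k) equals b for every scalar c
            System.out.println("c=" + c + ": " + Arrays.equals(multiply(A, x), b));
        }
    }
}
```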

In NM Dev, the class LinearSystemSolver solves a system of equations in the form $Ax = b$, assuming that, after row reduction, there are no more rows than columns. Hence the system must not be overdetermined. The system is solved in 2 parts:

1. Solve for non-trivial solutions

2. Solve for particular solutions

If $A$ has full rank, it will then solve the system by LU decomposition. Otherwise, a particular solution is found using the transformation matrix $T$ that reduces $A$ to reduced row echelon form, applying the same transformation to $b$.

Note: A system is not overdetermined as long as the number of linearly independent equations is not larger than the number of variables. For example, a matrix of 3 equations but 2 variables

is not overdetermined if the first and third equations are linearly dependent.

The final solution can be found by translating the nullspace of $A$ by the particular solution $p$; the solution set is $\{p + k : k \in \ker(A)\}$.

To illustrate, we can use this linear system:

$$\begin{aligned} x_2 + 2x_3 - x_4 &= 1 \\ x_1 + x_3 + x_4 &= 4 \\ -x_1 + x_2 - x_4 &= 2 \\ 2x_2 + 3x_3 - x_4 &= 7 \end{aligned}$$

The basis for the kernel is $k = (-2, -1, 1, 1)^T$, such that $Ak = 0$, while a particular solution is

$$p = (9, 11, -5, 0)^T$$

Overall, the particular solution $p$, as well as $p$ translated by any vector of $\ker(A)$, are solutions for the system.

```
%use s2
var A = DenseMatrix(arrayOf(
    doubleArrayOf(0.0, 1.0, 2.0, -1.0),
    doubleArrayOf(1.0, 0.0, 1.0, 1.0),
    doubleArrayOf(-1.0, 1.0, 0.0, -1.0),
    doubleArrayOf(0.0, 2.0, 3.0, -1.0)
))
var b = DenseVector(arrayOf(1.0, 4.0, 2.0, 7.0))
// construct a linear system solver
val solver: LinearSystemSolver = LinearSystemSolver(1e-15) // precision
// solve the homogeneous linear system Ax = 0
val soln: LinearSystemSolver.Solution = solver.solve(A)
// get a particular solution for Ax = b
val p: Vector = soln.getParticularSolution(b)
println("Particular solution: $p")
// proof that Ap = b
var Ap = A.multiply(p)
println("Ap = $Ap")
println("Ap = b: " + (Ap == b)) // hence p is a valid solution for the system Ax = b
// get ker(A)
// the kernel is stored as a list of basis vectors, each of which
// satisfies Ax = 0
val kernel: List<Vector> = soln.getHomogeneousSoln()
println("Ker(A) = $kernel")
// the kernel has only one basis vector here,
// so another solution of Ax = b is p + k
val k: Vector = kernel[0]
// proof that A(p+k) = b
var x = k.add(p)
println("p+k = $x")
var Ax = A.multiply(x)
println("A(p+k) = $Ax")
println("Ax = b: " + (Ax == b)) // since Ax == b, p+k is a valid solution for the system Ax = b
```

output:
```
Particular solution: [9.000000, 11.000000, -5.000000, 0.000000]
Ap = [1.000000, 4.000000, 2.000000, 7.000000]
Ap = b: true
Ker(A) = [[-2.000000, -1.000000, 1.000000, 1.000000] ]
p+k = [7.000000, 10.000000, -4.000000, 1.000000]
A(p+k) = [1.000000, 4.000000, 2.000000, 7.000000]
Ax = b: true
```

Note: the solve function of the class LinearSystemSolver returns an object of class LinearSystemSolver.Solution.