## Introduction

A genetic algorithm is a search heuristic inspired by Charles Darwin’s theory of natural selection. It reflects the process in which the fittest individuals are selected for reproduction in order to produce the offspring of the next generation.

Holland introduced GAs as a computational analog of adaptive systems. Based on the principle of evolution via natural selection, they maintain a population of individuals that undergo selection under the influence of variation-inducing operators such as mutation and recombination (crossover). A fitness function is used to evaluate individuals, and reproductive success varies with fitness. In early implementations, individuals were typically represented by fixed-length binary strings, but other representations are now common, including arrays of floating-point numbers, arrays of integers representing permutations, Lisp S-expressions, and variable-length strings. As a result, many algorithms can be classified as GAs, and every aspect of Holland’s original formulation has been questioned and modified.

## Natural Selection

Natural selection begins with the selection of the fittest individuals from a population. Their offspring inherit the characteristics of the parents and are added to the next generation. If the parents have high fitness, their offspring tend to be fitter and have a better chance of surviving. Repeating this process eventually yields a generation of the fittest individuals.

## Genetic Algorithm

A genetic algorithm consists of five main phases, which are described below.

## The Initial Population

A population consists of a set of individuals. Every individual represents a solution to the problem.

Genes are the set of parameters (variables) that characterize an individual. Genes joined together in a string form a chromosome (a solution).

A genetic algorithm represents an individual’s genes as a string over some alphabet. Binary values (strings of 1s and 0s) are most commonly used. We say that the genes are encoded in a chromosome.
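As a minimal sketch of this encoding (standalone Kotlin; `randomChromosome` is an illustrative name, not part of any library), a chromosome can be a fixed-length bit string in which each character is one gene:

```kotlin
import java.util.Random

// Binary encoding: each chromosome is a fixed-length bit string,
// and each character ('0' or '1') is one gene.
fun randomChromosome(length: Int, rnd: Random): String =
    (1..length).map { if (rnd.nextBoolean()) '1' else '0' }.joinToString("")

fun main() {
    val rnd = Random(42L)
    println(randomChromosome(8, rnd)) // prints a random 8-bit string such as "10110001"
}
```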

## The Fitness Function

An individual’s fitness function determines how fit the individual is (its ability to compete with other individuals). It assigns each individual a fitness score, which determines the probability that the individual will be selected for reproduction.
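For instance, in the classic OneMax toy problem, a bit string is fitter the more 1-bits it contains. A hypothetical fitness function for it might look like this:

```kotlin
// Hypothetical fitness function for the OneMax toy problem:
// the more 1-bits a chromosome has, the fitter it is.
fun fitness(chromosome: String): Int = chromosome.count { it == '1' }

fun main() {
    println(fitness("10110")) // prints 3
}
```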

## The Selection Function

The selection phase is designed to select the fittest individuals and pass their genes on to the next generation.

Pairs of individuals (parents) are selected based on their fitness scores. Individuals with high fitness have a higher chance of being selected for reproduction.
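One common way to realize this is fitness-proportionate (roulette-wheel) selection, sketched below with an illustrative `select` function:

```kotlin
import java.util.Random

// Roulette-wheel selection: each individual is chosen with probability
// proportional to its share of the population's total fitness.
fun select(population: List<String>, fitness: (String) -> Int, rnd: Random): String {
    val total = population.sumOf { fitness(it).toDouble() }
    var r = rnd.nextDouble() * total
    for (individual in population) {
        r -= fitness(individual)
        if (r <= 0.0) return individual
    }
    return population.last() // guard against floating-point rounding
}

fun main() {
    val population = listOf("1111", "1000", "0000")
    val parent = select(population, { s -> s.count { it == '1' } }, Random(7L))
    println(parent) // "0000" can never be chosen: its fitness is 0
}
```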

## The Crossover Function

Crossover is the most significant phase in a genetic algorithm. For each pair of parents to be mated, a crossover point is chosen at random from within the genes.

Offspring are created by exchanging the genes of the parents up to the crossover point. The new offspring are then added to the population.
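Single-point crossover on bit strings can be sketched as follows (illustrative code, not tied to any library):

```kotlin
// Single-point crossover: each offspring takes one parent's genes up to
// the crossover point and the other parent's genes after it.
fun crossover(parent1: String, parent2: String, point: Int): Pair<String, String> {
    val child1 = parent1.substring(0, point) + parent2.substring(point)
    val child2 = parent2.substring(0, point) + parent1.substring(point)
    return Pair(child1, child2)
}

fun main() {
    val (c1, c2) = crossover("11111", "00000", 2)
    println("$c1 $c2") // prints "11000 00111"
}
```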

## Mutation Function

In certain new offspring, some of the genes can be subjected to mutation with a low random probability. This means that some of the bits in the bit string can be flipped.

Mutations maintain population diversity and prevent premature convergence.
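Bit-flip mutation can be sketched like this (the function name and mutation rate are illustrative):

```kotlin
import java.util.Random

// Bit-flip mutation: each gene flips independently with a small
// probability, which keeps the population diverse.
fun mutate(chromosome: String, rate: Double, rnd: Random): String =
    chromosome.map { bit ->
        if (rnd.nextDouble() < rate) (if (bit == '1') '0' else '1') else bit
    }.joinToString("")

fun main() {
    val rnd = Random(123L)
    println(mutate("11111111", 0.1, rnd)) // most bits survive; a few may flip
}
```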

## Termination Function

The algorithm terminates if the population has converged (has produced offspring that are not significantly different from the previous generation). As a result, the genetic algorithm has provided a set of solutions for our problem.
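Putting the five phases together, a minimal GA loop for the OneMax toy problem might look like this (a sketch; all names and parameter values are illustrative, not from any particular library):

```kotlin
import java.util.Random

// A minimal end-to-end sketch: binary tournament selection, single-point
// crossover and bit-flip mutation on the OneMax problem
// (maximize the number of 1-bits in a fixed-length bit string).
fun runGa(bits: Int, popSize: Int, generations: Int, mutationRate: Double, seed: Long): String {
    val rnd = Random(seed)
    fun fitness(c: String) = c.count { it == '1' }
    // initial population: random bit strings
    var population = List(popSize) {
        (1..bits).map { if (rnd.nextBoolean()) '1' else '0' }.joinToString("")
    }
    for (gen in 1..generations) {
        // selection: the fitter of two random individuals (binary tournament)
        fun pick(): String {
            val a = population[rnd.nextInt(popSize)]
            val b = population[rnd.nextInt(popSize)]
            return if (fitness(a) >= fitness(b)) a else b
        }
        population = List(popSize) {
            // crossover at a random point
            val point = rnd.nextInt(bits)
            val child = pick().substring(0, point) + pick().substring(point)
            // mutation: flip each bit with a small probability
            child.map { bit ->
                if (rnd.nextDouble() < mutationRate) (if (bit == '1') '0' else '1') else bit
            }.joinToString("")
        }
    }
    return population.maxByOrNull { fitness(it) }!!
}

fun main() {
    val best = runGa(bits = 16, popSize = 30, generations = 60, mutationRate = 0.01, seed = 42L)
    println(best) // typically all (or nearly all) 1s
}
```

Here the loop runs for a fixed number of generations; a production implementation would also stop early once the population converges, as described above.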

Some advantages of genetic algorithms are the following:

• The performance of these techniques is usually better than that of traditional feature selection methods.
• Genetic algorithms can manage data sets with many different features.
• It is not necessary for them to have specific knowledge about the problem they are studying.
• Computer clusters can easily parallelize these algorithms.

Some disadvantages of genetic algorithms are the following:

• The computational cost of genetic algorithms might be high, since each individual must be evaluated using a trained model.
• Due to their stochastic nature, these algorithms can take a long time to converge.

## Implementation

To run the examples below, start your notebook with the magic command `%use s2`, or the code might not work properly. Here we define the objective function to minimize.

```kotlin
%use s2

// the objective function to minimize: f(x, y) = x^2 + y^2
val f: RealScalarFunction = object : AbstractBivariateRealFunction() {
    override fun evaluate(x: Double, y: Double): Double {
        return x * x + y * y // x^2 + y^2
    }
}
```



Next, we construct a uniform random number generator and seed it so that the results are reproducible.

```kotlin
// a uniform random number generator
val rng: RandomLongGenerator = UniformRNG()
rng.seed(123456798L)
```




We then construct an instance of the genetic algorithm solver, passing in the encoding/crossover/mutation operator, the source of randomness, and the parameters that control precision and the number of iterations.

```kotlin
// construct an instance of the genetic algorithm solver
val solver = SimpleGridMinimizer(
    // define the encoding, crossover and mutation operator
    object : SimpleGridMinimizer.NewCellFactoryCtor {
        override fun newCellFactory(): SimpleCellFactory {
            return SimpleCellFactory(0.1, rng)
        }
    },
    rng, // source of randomness for the GA
    1e-15, // a precision parameter
    500, // the maximum number of iterations
    500 // the maximum number of iterations of no improvement
)
```



Finally, we run the solver to solve the optimization problem, starting the search from the four corners of the square region [-10, 10] × [-10, 10].

```kotlin
// run the solver to solve the optimization problem
val soln: IterativeSolution<Vector> = solver.solve(C2OptimProblemImpl(f))
val xmin: Vector = soln.search(
    // the boundaries: [-10, 10], [-10, 10]
    DenseVector(-10.0, 10.0),
    DenseVector(10.0, -10.0),
    DenseVector(10.0, 10.0),
    DenseVector(-10.0, -10.0)
)
```

We then print the minimizer and the minimum value found.

```kotlin
val fxmin: Double = f.evaluate(xmin) // the minimum
println("Genetic algorithm")
println(String.format("f(%s) = %f", xmin, fxmin))
```


## Conclusion


Machine learning is becoming increasingly dependent on feature selection: a neural network must select the most useful features from a large number of inputs.

One of the most advanced methods to do that is the genetic algorithm.

Genetic algorithms can select the best subset of variables for our model, but they usually require a lot of computation.