Genetic Algorithm: History of Genetic Algorithms (GA), Working Principle, Various Encoding
methods, Fitness function, GA Operators: Reproduction, Crossover, Mutation, Convergence
of GA, Bitwise operations in GA, Multi-level Optimization.
GA-based Backpropagation Networks: GA-based Weight Determination, K-factor
determination in Columns.
1. HISTORY OF GENETIC ALGORITHMS (GA)
1.1 HISTORY
Genetic algorithms (GAs) are a class of search algorithms modelled on the process of natural
evolution. They are based on the principle of survival of the fittest.
The method is inspired by biology, in particular by Charles Darwin's theory of evolution,
which is taken as the basis of its working. John Holland introduced the Genetic Algorithm in
1975. Genetic Algorithms are used to tackle optimization problems by imitating the
evolutionary behaviour of species. Starting from an initial random population of solutions,
the population is advanced through selection, mutation, and crossover operators inspired by
natural evolution. By applying this set of operations, the population goes through an
iterative procedure in which it passes through various states, each called a generation. As a
result of this procedure, the population is expected to reach a generation containing a good
solution to the problem. In a Genetic Algorithm, a solution to the problem is encoded as a
string of bits or real numbers.
GAs have been shown in practice to be very efficient at function optimization and at
searching huge, complex spaces. In short, genetic algorithms are algorithms used for
optimization and machine learning, based on several features of biological evolution.
1.2 INTRODUCTION
Genetic Algorithm (GA) is a search-based optimization technique based on the principles of
Genetics and Natural Selection. It is frequently used to find optimal or near-optimal
solutions to difficult problems which otherwise would take a lifetime to solve. It is frequently
used to solve optimization problems, in research, and in machine learning.
Introduction to Optimization
Optimization is the process of making something better. In any process, we have a set of
inputs and a set of outputs as shown in the following figure.
Optimization refers to finding the values of inputs in such a way that we get the “best”
output values. The definition of “best” varies from problem to problem, but in mathematical
terms, it refers to maximising or minimising one or more objective functions, by varying the
input parameters.
The set of all possible solutions or values which the inputs can take makes up the search
space. In this search space lies a point, or a set of points, that gives the optimal solution.
The aim of optimization is to find that point or set of points in the search space.
What are Genetic Algorithms?
Nature has always been a great source of inspiration to all mankind. Genetic Algorithms
(GAs) are search based algorithms based on the concepts of natural selection and genetics.
GAs are a subset of a much larger branch of computation known as Evolutionary
Computation.
GAs were developed by John Holland and his students and colleagues at the University of
Michigan, most notably David E. Goldberg, and have since been applied to various
optimization problems with a high degree of success.
In GAs, we have a pool or a population of possible solutions to the given problem. These
solutions then undergo recombination and mutation (like in natural genetics), producing
new children, and the process is repeated over various generations. Each individual (or
candidate solution) is assigned a fitness value (based on its objective function value) and the
fitter individuals are given a higher chance to mate and yield fitter offspring. This is in line
with the Darwinian theory of "Survival of the Fittest".
In this way we keep “evolving” better individuals or solutions over generations, till we reach
a stopping criterion.
Genetic Algorithms are randomised in nature, but they perform much better than pure
random local search (in which we just try various random solutions, keeping track of the
best so far), because they exploit historical information as well.
A GA needs the following components:
○ A way of encoding solutions to the problem as chromosomes.
○ An evaluation function that returns a rating for each chromosome given to it.
○ Operators that may be applied to parents when they reproduce to alter their
genetic composition. The standard operators are mutation and crossover.
Foundation of Genetic Algorithms
Genetic algorithms are based on an analogy with the genetic structure and behaviour of
chromosomes in a population. Following is the foundation of GAs based on this analogy –
1. Individuals in a population compete for resources and mates.
2. Those individuals who are most successful (fittest) mate to create more offspring
than others.
3. Genes from the "fittest" parents propagate through the generations; that is,
sometimes parents create offspring which are better than either parent.
4. Thus each successive generation becomes better suited to its environment.
Search space
The population of individuals is maintained within search space. Each individual represents a
solution in search space for a given problem. Each individual is coded as a finite length
vector (analogous to chromosome) of components. These variable components are
analogous to Genes. Thus a chromosome (individual) is composed of several genes (variable
components).
Fitness Score
A Fitness Score is given to each individual, showing the ability of that individual to
"compete". Individuals having an optimal (or near-optimal) fitness score are sought.
The GA maintains a population of n individuals (chromosomes/solutions) along with their
fitness scores. Individuals with better fitness scores are given more chance to reproduce
than others: they are selected to mate and produce better offspring by combining the
chromosomes of the parents. Since the population size is static, room has to be created for
new arrivals. So some individuals die and are replaced by newcomers, eventually creating a
new generation once all the mating opportunities of the old population are exhausted. The
hope is that over successive generations better solutions arrive while the least fit die off.
Each new generation has, on average, more "good genes" than the individuals of previous
generations, and thus each new generation has better "partial solutions" than previous
generations. Once the offspring produced show no significant difference from the offspring
produced by previous populations, the population converges. The algorithm is then said to
have converged to a set of solutions for the problem.
Operators of Genetic Algorithms
Once the initial generation is created, the algorithm evolves the generation using following
operators –
1) Selection Operator: The idea is to give preference to the individuals with good fitness
scores and allow them to pass their genes to successive generations.
2) Crossover Operator: This represents mating between individuals. Two individuals are
selected using the selection operator and crossover sites are chosen at random. The genes
at these crossover sites are then exchanged, creating a completely new individual
(offspring).
3) Mutation Operator: The key idea is to insert random genes into the offspring in order to
maintain diversity in the population and avoid premature convergence.
Advantages of GAs
GAs have various advantages which have made them immensely popular. A GA:
1. Does not require any derivative information (which may not be available for many
real-world problems).
2. Can be faster and more efficient than traditional methods on hard problems.
3. Has very good parallel capabilities.
4. Optimises both continuous and discrete functions, as well as multi-objective problems.
5. Provides a list of "good" solutions, not just a single solution.
6. Always returns an answer to the problem, and the answer improves over time.
7. Is useful when the search space is very large and a large number of parameters
are involved.
Limitations of GAs
Like any technique, GAs also suffer from a few limitations. These include −
1. GAs are not suited for all problems, especially problems which are simple and for
which derivative information is available.
2. Fitness value is calculated repeatedly which might be computationally expensive for
some problems.
3. Being stochastic, there are no guarantees on the optimality or the quality of the
solution.
4. If not implemented properly, the GA may not converge to the optimal solution.
Application of Genetic Algorithms
Genetic algorithms have many applications, some of them are –
● Recurrent Neural Network
● Mutation testing
● Code breaking
● Filtering and signal processing
● Learning fuzzy rule base etc
GA – Motivation
Genetic Algorithms have the ability to deliver a “good-enough” solution “fast-enough”. This
makes genetic algorithms attractive for use in solving optimization problems. The reasons
why GAs are needed are as follows −
Solving Difficult Problems
In computer science, there is a large set of problems, which are NP-Hard. What this
essentially means is that, even the most powerful computing systems take a very long time
(even years!) to solve that problem. In such a scenario, GAs prove to be an efficient tool to
provide usable near-optimal solutions in a short amount of time.
Failure of Gradient Based Methods
Traditional calculus-based methods work by starting at a random point and moving in the
direction of the gradient until we reach the top of the hill. This technique is efficient and
works very well for single-peaked objective functions like the cost function in linear
regression. But in most real-world situations we face very complex fitness landscapes,
made of many peaks and many valleys, which cause such methods to fail, as they have an
inherent tendency to get stuck at local optima, as shown in the following figure.
Getting a Good Solution Fast
Some difficult problems like the Travelling Salesperson Problem (TSP), have real-world
applications like path finding and VLSI Design. Now imagine that you are using your GPS
Navigation system, and it takes a few minutes (or even a few hours) to compute the
“optimal” path from the source to destination. Delay in such real world applications is not
acceptable and therefore a “good-enough” solution, which is delivered “fast” is what is
required.
Difference between traditional and genetic approach:
An algorithm is a progression of steps for solving a problem. A genetic algorithm is a
problem-solving technique that uses genetics as its model of problem solving. It is a search
method for finding approximate solutions to optimization and search problems. One can
easily distinguish between a traditional and a genetic algorithm.
Traditional Algorithm:
○ Selects the next point in the sequence by a deterministic computation.
○ Creates a single point at each iteration; the sequence of points approaches an
optimal solution.
○ Advancement in each iteration is problem-specific.
Genetic Algorithm:
○ Selects the next population by a computation that uses random number generators.
○ Creates a population of points at each iteration; the best point in the population
approaches an optimal solution.
○ Progression in each iteration is problem-independent.
2. WORKING PRINCIPLE
The working principle of a standard Genetic Algorithm is illustrated in the given figure. The
significant steps involved are the generation of a population of the solution, identifying the
objective function and fitness function, and the application of genetic operators. These
aspects are described with the assistance of a fundamental genetic algorithm as below.
Start:
It generates a random population of n chromosomes.
Fitness:
It calculates the fitness f(x) of each chromosome x in the population.
New Population:
It generates a new population by repeating the following steps until the new population is
complete.
Selection:
It chooses two parent chromosomes from the population according to their fitness. The
better the fitness, the higher the probability of being selected.
Crossover:
With a crossover probability, cross over the parents to form new offspring (children). If no
crossover is performed, the offspring are exact copies of the parents.
Mutation:
With a mutation probability, mutate the new offspring at each locus (position in the
chromosome).
Accepting:
It places new offspring in the new population.
Replace:
It uses the newly generated population for a further run of the algorithm.
Test:
If the end condition is satisfied, then it stops and returns the best solution in the current
population.
Loop:
In this step, you need to go to the second step for fitness evaluation.
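The steps above can be sketched as a minimal GA loop in Python. This is a toy sketch, not a definitive implementation: the bit-string chromosomes, the ones-counting fitness, tournament selection, and all parameter values are illustrative assumptions.

```python
import random

def run_ga(chrom_len=10, pop_size=20, generations=50,
           crossover_rate=0.8, mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    # Start: random population of n chromosomes (here, lists of bits).
    pop = [[rng.randint(0, 1) for _ in range(chrom_len)] for _ in range(pop_size)]
    fitness = sum  # Fitness: here simply the number of 1s in the chromosome.
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Selection: tournament of two; the fitter individual wins.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            # Crossover: with some probability, exchange tails at a random point.
            if rng.random() < crossover_rate:
                point = rng.randrange(1, chrom_len)
                child = p1[:point] + p2[point:]
            else:
                child = p1[:]  # otherwise the child is an exact copy of a parent
            # Mutation: flip each gene with a small probability.
            child = [(1 - g) if rng.random() < mutation_rate else g for g in child]
            new_pop.append(child)
        pop = new_pop  # Replace: the new generation becomes the population.
    # Test/stop: here we simply stop after a fixed number of generations.
    return max(pop, key=fitness)

best = run_ga()
print(sum(best))  # typically reaches (or comes close to) the maximum of 10
```

In a real application the fitness function, encoding, and termination test would be replaced with problem-specific versions; the loop structure itself stays the same.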
The basic principle behind genetic algorithms is that they generate and maintain a
population of individuals represented by chromosomes. Chromosomes are character strings
practically equivalent to the chromosomes appearing in DNA, and they usually encode
solutions to a problem. The population undergoes a process of evolution according to rules
of selection, reproduction, and mutation. Each individual in the environment (represented
by a chromosome) receives a measure of its fitness in that environment. Reproduction
favours individuals with high fitness values in the population. Through crossover and
mutation of such individuals, a new population is created in which individuals may be an
even better fit for their environment. The process of crossover, in which two chromosomes
swap chunks of data, is analogous to reproduction, while mutation introduces slight changes
into a small fraction of the population and represents an evolutionary step.
3. VARIOUS ENCODING METHODS
Encoding of chromosomes is the first step in solving the problem, and it depends heavily on
the problem itself. It is the process of representing the solution in the form of a string that
conveys the necessary information. Just as each gene in a chromosome controls a particular
characteristic of the individual, each bit (or symbol) in the string represents a characteristic
of the solution.
Encoding Methods :
● Binary Encoding: The most common method of encoding. Chromosomes are strings
of 1s and 0s, and each position in the chromosome represents a particular
characteristic of the solution.
Example of chromosomes with binary encoding
Binary encoding gives many possible chromosomes even with a small number of
alleles. On the other hand, this encoding is often not natural for many problems and
sometimes corrections must be made after crossover and/or mutation.
Example of Problem: Knapsack problem
The problem: There are items with given values and sizes. The knapsack has a given
capacity. Select items to maximise the total value in the knapsack without exceeding
its capacity.
Encoding: Each bit says whether the corresponding item is in the knapsack.
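As a sketch of this encoding, the item values, weights, and capacity below are made-up illustrative numbers; giving infeasible solutions zero fitness is one common (but not the only) way to handle the capacity constraint:

```python
# Binary encoding for the knapsack problem: bit i == 1 means item i is packed.
values = [60, 100, 120]   # value of each item (illustrative)
weights = [10, 20, 30]    # size of each item (illustrative)
capacity = 50

def knapsack_fitness(chromosome):
    total_value = sum(v for v, bit in zip(values, chromosome) if bit)
    total_weight = sum(w for w, bit in zip(weights, chromosome) if bit)
    # Penalise infeasible solutions that exceed the knapsack capacity.
    return total_value if total_weight <= capacity else 0

print(knapsack_fitness([0, 1, 1]))  # 220: items 2 and 3 fit (20 + 30 <= 50)
print(knapsack_fitness([1, 1, 1]))  # 0: total weight 60 exceeds capacity
```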
● Permutation Encoding: Useful in ordering such as the Travelling Salesman
Problem (TSP). In TSP, every chromosome is a string of numbers, each of which
represents a city to be visited.
Example of chromosomes with permutation encoding
Permutation encoding is only useful for ordering problems. Even for these problems, some
types of crossover and mutation require corrections to keep the chromosome consistent
(i.e. a valid sequence).
Example of Problem: Travelling salesman problem (TSP)
The problem: There are cities with given distances between them. The salesman has
to visit all of them while travelling as little as possible. Find a sequence of cities
that minimises the travelled distance.
Encoding: The chromosome gives the order in which the salesman will visit the
cities.
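A minimal sketch of the permutation encoding and its fitness; the city coordinates are illustrative, and a GA would minimise this tour length:

```python
import math

# Permutation encoding for TSP: a chromosome is an ordering of city indices.
cities = [(0, 0), (0, 3), (4, 0), (4, 3)]  # city coordinates (illustrative)

def tour_length(chromosome):
    # Sum of distances along the tour, returning to the starting city.
    total = 0.0
    for i in range(len(chromosome)):
        x1, y1 = cities[chromosome[i]]
        x2, y2 = cities[chromosome[(i + 1) % len(chromosome)]]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

print(tour_length([0, 1, 3, 2]))  # 14.0: the perimeter of the 4x3 rectangle
print(tour_length([0, 3, 1, 2]))  # 18.0: a longer, self-crossing tour
```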
● Value Encoding: Used in problems where complicated values, such as real
numbers, are used and where binary encoding would not suffice. Good for some
problems, but often necessary to develop some specific crossover and mutation
techniques for these chromosomes.
Example of chromosomes with value encoding
Value encoding is very good for some special problems. On the other hand, for this encoding
it is often necessary to develop new crossover and mutation operators specific to the
problem.
Example of Problem: Finding weights for neural network
The problem: There is a neural network with a given architecture. Find weights for
the inputs of the neurons so that the network produces the wanted output.
Encoding: Real values in chromosomes represent corresponding weights for inputs.
● Tree Encoding
Tree encoding is used mainly for evolving programs or expressions, for genetic
programming.
In tree encoding every chromosome is a tree of some objects, such as functions or
commands in programming language.
Chromosome A Chromosome B
(+ x (/ 5 y)) ( do_until step wall )
Example of chromosomes with tree encoding
Tree encoding is good for evolving programs. The programming language LISP is often used
for this, because its programs are naturally represented as trees and can be easily parsed, so
crossover and mutation can be done relatively easily.
Example of Problem: Finding a function from given values
The problem: Some input and output values are given. The task is to find a function that
gives output as close as possible to the wanted output for all inputs.
Encoding: Chromosomes are functions represented in a tree.
4. FITNESS FUNCTION
What is a Fitness Function?
Fitness Function (also known as the Evaluation Function) evaluates how close a given
solution is to the optimum solution of the desired problem. It determines how fit a solution
is.
Why do we use Fitness Functions?
In genetic algorithms, each solution is generally represented as a string of binary numbers,
known as a chromosome. We have to test these solutions and come up with the best set of
solutions to solve a given problem. Each solution, therefore, needs to be awarded a score, to
indicate how close it came to meeting the overall specification of the desired solution. This
score is generated by applying the fitness function to the results obtained from the tested
solution.
Generic Requirements of a Fitness Function
The following requirements should be satisfied by any fitness function.
1. The fitness function should be clearly defined. The reader should be able to
clearly understand how the fitness score is calculated.
2. The fitness function should be implemented efficiently. If the fitness function
becomes the bottleneck of the algorithm, then the overall efficiency of the
genetic algorithm will be reduced.
3. The fitness function should quantitatively measure how fit a given solution is in
solving the problem.
4. The fitness function should generate intuitive results. The best/worst candidates
should have best/worst score values.
How to come up with a Fitness Function for a given Problem?
Each problem has its own fitness function. The fitness function that should be used depends
on the given problem. Coming up with a fitness function for the given problem is the hardest
part when it comes to formulating a problem using genetic algorithms.
There is no hard and fast rule that a particular function should be used in a particular
problem. However, certain functions have been adopted by data scientists regarding certain
types of problems.
Typically, for classification tasks where supervised learning is used, error measures such as
Euclidean distance and Manhattan distance have been widely used as the fitness function.
For optimization problems, basic functions such as the sum of a set of calculated parameters
related to the problem domain can be used as the fitness function.
Let’s go through a few example problems and their related fitness functions.
Example 1— Generating Sequences
Consider the example given below. I have used this simple example mostly for the ease of
understanding.
Given a set of 5 genes, which can hold one of the binary values 0 and 1, we have to come up
with the sequence having all 1s. So we have to maximise the number of 1s as much as
possible. This can be considered an optimization problem. Hence, the fitness function is
taken as the number of 1s present in the genome. If there are five 1s, the genome has
maximum fitness and solves our problem; if there are no 1s, it has the minimum fitness.
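This fitness function can be written in one line; as a sketch, representing the genome as a Python list of bits is an assumption:

```python
def fitness(genome):
    # Fitness is simply the number of 1s in the genome.
    return genome.count(1)

print(fitness([1, 1, 1, 1, 1]))  # 5: maximum fitness, problem solved
print(fitness([0, 1, 0, 1, 0]))  # 2
print(fitness([0, 0, 0, 0, 0]))  # 0: minimum fitness
```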
Example 2— Timetable Scheduling
Timetable for a week
A very famous scenario where genetic algorithms can be used is the process of making
timetables or timetable scheduling.
Consider you are trying to come up with a weekly timetable for classes in a college for a
particular batch. We have to arrange classes and come up with a timetable so that there are
no clashes between classes. Here, our task is to search for the optimum timetable schedule.
Since there should be no collisions among classes, we should minimise the number of
students having class conflicts. You can formulate the fitness function as the inverse of the
number of students with class conflicts: the fewer the students with class conflicts, the fitter
the timetable.
Example 3— Finding values for a set of variables which satisfy a given constraint
Consider three variables x, y and z. The problem is to find the best set of values for x, y and z
so that their total value is equal to a value t.
x+y+z=t
We want the sum x + y + z to deviate as little as possible from t, i.e. |x + y + z - t| should be
zero. Hence the fitness function can be taken as the inverse of |x + y + z - t|.
Fitness function = 1/|x + y + z - t|
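A sketch of this fitness function. Note that adding 1 to the denominator (a common variant of the 1/|x + y + z - t| form above, and an assumption here) avoids division by zero when x + y + z equals t exactly:

```python
def constraint_fitness(x, y, z, t):
    # Fitness grows as x + y + z approaches the target t.
    # The +1 in the denominator keeps the exact solution from dividing by zero,
    # while preserving the ranking of candidate solutions.
    return 1.0 / (1.0 + abs(x + y + z - t))

print(constraint_fitness(3, 4, 3, 10))  # 1.0: x + y + z equals t exactly
print(constraint_fitness(1, 2, 3, 10))  # 0.2: deviation of 4 from the target
```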
These are a few examples of applications where genetic algorithms are used and how to
come up with their fitness functions. I have used these simple examples for the ease of
understanding. When modelling complex real world problems, we may not get such simple
fitness functions.
Final Note
The fitness function has to be something that measures how good your solution is. In
particular, it should be able to handle whatever solutions are generated and should indicate
the right way to improve them.
For example, a fitness function that is zero unless the answer is right is not good, because it
doesn’t help you get an idea of how close the solution is to the right answer. Also, a fitness
function that increases as solutions get better, but doesn’t identify the best solution is not so
good either, because your population will improve up to a certain point and then get stuck.
You have to play around with the problem, look in different ways and think about what kind
of function you can use to check how good your solution is. You want a function that gives
low values for bad solutions and high values for good solutions. With practice, you will
eventually get better at defining a fitness function for a given problem.
5. GA OPERATORS
Genetic algorithms (GA) use several operators to simulate the mechanisms of natural
evolution and search for optimal or near-optimal solutions. These operators include
selection, crossover, and mutation. Let's take a closer look at each of these operators:
1. Selection:
- Purpose: The selection operator determines which individuals from the current
population are chosen as parents for reproduction in the next generation.
- Mechanism: Selection is typically based on the fitness values of individuals, where fitter
individuals have a higher probability of being selected. Various selection methods can be
used, such as roulette wheel selection, tournament selection, or rank-based selection.
- Importance: Selection ensures that individuals with better fitness have a higher chance of
passing their genetic material to the next generation, mimicking the survival of the fittest
principle in natural evolution.
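The roulette wheel method mentioned above can be sketched as follows. This is an illustrative implementation that assumes non-negative fitness values:

```python
import random

def roulette_select(population, fitnesses, rng=random):
    # Each individual is chosen with probability proportional to its fitness,
    # like spinning a roulette wheel whose slots are sized by fitness.
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point rounding

pop = ["A", "B", "C"]
fits = [1.0, 3.0, 6.0]
rng = random.Random(42)
picks = [roulette_select(pop, fits, rng) for _ in range(1000)]
print(picks.count("C") / 1000)  # roughly 0.6, matching C's share of total fitness
```

Tournament selection and rank-based selection follow the same interface: take a population and fitnesses, return one chosen parent.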
2. Crossover:
- Purpose: The crossover operator emulates the genetic recombination process by
combining genetic material from two or more selected parents to create new offspring.
- Mechanism: Crossover involves selecting specific points or positions (crossover points) in
the genetic material (chromosomes) of parents. The genetic material beyond these points is
exchanged or swapped between parents to create new individuals. The crossover can be
single-point, multi-point, or uniform, depending on the problem and encoding scheme.
- Importance: Crossover promotes the exchange of genetic information between
individuals, facilitating the exploration of different combinations of genetic material and
potentially creating offspring with better solutions than their parents.
3. Mutation:
- Purpose: The mutation operator introduces random changes or perturbations in the
genetic material to maintain genetic diversity and prevent premature convergence.
- Mechanism: Mutation randomly alters one or more genes in an individual's chromosome.
The alteration can involve flipping a bit, changing a value, swapping positions, or
adding/subtracting a small random value, depending on the encoding scheme. The mutation
rate determines the probability of each gene undergoing a mutation.
- Importance: Mutation allows for exploration by introducing random variations, enabling
the search to move to new areas of the search space and potentially finding better solutions
that would not be reachable through selection and crossover alone.
4. Elitism:
- Purpose: Elitism is not a specific operator but a strategy that preserves the best
individuals from the current generation and directly carries them over to the next
generation.
- Mechanism: The top-performing individuals with the highest fitness values are selected
as elite individuals and are automatically included in the next generation without any
changes.
- Importance: Elitism helps maintain the best solutions found so far in the population and
prevents the loss of valuable genetic material. By preserving the best individuals, elitism
ensures that progress is made in optimising the objective function.
These operators work together in a cyclic manner, starting with selection, followed by
crossover and mutation, and possibly including elitism, to create the next generation of
individuals. This process continues iteratively until a termination criterion is met, such as
reaching a maximum number of generations or achieving satisfactory solutions.
The specific implementation and variations of these operators can differ based on the
problem domain, encoding scheme, and specific requirements of the optimization problem.
The choice of operators and their parameters is crucial for the performance and
effectiveness of the genetic algorithm in finding good solutions.
5.1 REPRODUCTION
After the selection process, the creation of a child occurs in the reproduction step. In this
step, the genetic algorithm uses two variation operators that are applied to the parent
population. The two operators involved in the reproduction phase are given below:
5.2 CROSSOVER
Crossover is the process of taking two parent solutions and producing from them a child.
After the selection (reproduction) process, the population is enriched with better
individuals. The crossover operator is then applied to the mating pool in the hope of creating
better offspring.
In this process, a crossover point is selected at random within the genes. The crossover
operator then swaps the genetic information of the two parents from the current generation
to produce a new individual representing the offspring.
The genes of the parents are exchanged up to the crossover point, and the newly generated
offspring are added to the population. Types of crossover available:
○ One point crossover
○ Two-point crossover
i) Single-Point Crossover: Here the two mating chromosomes are cut once at corresponding
points and the sections after the cuts are exchanged.
ii) Two-Point Crossover: Here two crossover points are chosen and the contents between
these points are exchanged between the two mated parents.
5.3 MUTATION
After crossover, the strings are subjected to mutation. Mutation prevents the algorithm
from being trapped in a local minimum. It plays the role of recovering lost genetic material as
well as randomly redistributing genetic information. It helps the search escape local minima
and maintains diversity in the population. Mutation of a bit involves flipping it, changing 0 to
1 and vice versa.
The mutation operator inserts random genes in the offspring (new child) to maintain the
diversity in the population. It can be done by flipping some bits in the chromosomes.
Mutation helps in solving the issue of premature convergence and enhances diversification.
The below image shows the mutation process:
Types of mutation styles available,
○ Flip bit mutation
○ Gaussian mutation
○ Exchange/Swap mutation
1. Bit Flip Mutation: This type of mutation is commonly used in binary-encoded
chromosomes. It involves flipping (changing) the value of one or more randomly selected
bits in the chromosome. For example, if a bit is initially 0, it will be flipped to 1, and vice
versa.
2. Swap Mutation: Swap mutation is typically used in permutation-based or integer-encoded
chromosomes. It involves swapping the positions of two randomly selected genes or
elements within the chromosome. This type of mutation allows for the exchange of genetic
material and exploration of different permutations.
3. Gaussian Mutation: Gaussian mutation is often used in real-valued or
continuous-encoded chromosomes. It involves perturbing the value of each gene by adding
a small random value drawn from a Gaussian distribution. The magnitude of the
perturbation is typically controlled by a mutation rate and a standard deviation parameter.
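The three mutation styles can be sketched as follows. These are illustrative implementations; the encoding of chromosomes as Python lists and the parameter names are assumptions:

```python
import random

def bit_flip_mutation(bits, rate, rng=random):
    # Binary encoding: flip each bit independently with the given mutation rate.
    return [(1 - b) if rng.random() < rate else b for b in bits]

def swap_mutation(perm, rng=random):
    # Permutation encoding: exchange two randomly chosen positions.
    i, j = rng.sample(range(len(perm)), 2)
    perm = perm[:]
    perm[i], perm[j] = perm[j], perm[i]
    return perm

def gaussian_mutation(genes, sigma, rate, rng=random):
    # Value encoding: perturb each real-valued gene by Gaussian noise
    # with standard deviation sigma, each with probability `rate`.
    return [g + rng.gauss(0, sigma) if rng.random() < rate else g for g in genes]

rng = random.Random(1)
print(bit_flip_mutation([0, 1, 0, 1], rate=0.5, rng=rng))
print(swap_mutation([0, 1, 2, 3], rng=rng))
print(gaussian_mutation([1.0, 2.0], sigma=0.1, rate=1.0, rng=rng))
```

Note how each style preserves the invariants of its encoding: bit flip keeps genes in {0, 1}, swap keeps the chromosome a valid permutation, and Gaussian mutation keeps genes real-valued.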
6. CONVERGENCE OF GA
Genetic algorithms are probabilistic search optimization techniques, which operate on a
population of chromosomes, representing potential solutions to the given problem.
In a standard genetic algorithm, binary strings of 1s and 0s represent the chromosomes.
Each chromosome is assigned a fitness value expressing its quality reflecting the given
objective function. Such a population is evolved by means of reproduction and
recombination operators in order to breed the optimal solution’s chromosome. The
evolution keeps running until some termination condition is fulfilled. The best chromosome
encountered so far is then considered as the found solution.
Genetic algorithms simultaneously carry out exploitation of the promising regions found so
far and exploration of other areas for potentially better solutions.
The weak point of a genetic algorithm is that it often suffers from so-called premature
convergence, which is caused by an early homogenization of genetic material in the
population. This means that no valuable exploration can be performed anymore.
Schematic Diagram
Details:
● Convergence is a phenomenon in evolutionary computation in which evolution
halts because every individual in the population is identical.
● Full convergence may be seen in genetic algorithms using only crossover.
● Premature convergence is when a population has converged to a single solution,
but that solution is not of as high a quality as expected, i.e. the population has
got stuck.
● However, convergence is not necessarily a negative phenomenon, because
populations often stabilise after a time, in the sense that the best programs all
have a common ancestor and their behaviour is very similar/identical both to
each other and to that of high fitness programs from the previous generations.
● Convergence can be avoided with a variety of diversity generating techniques.
Here are some key aspects related to the convergence of GA:
1. Population Stability: As the GA progresses through generations, the population should
stabilise, meaning that the individuals become similar in terms of their genetic material and
fitness values. This indicates that the algorithm is converging towards a solution.
2. Fitness Improvement: The fitness values of the individuals in the population should
improve over generations. Initially, the population may consist of individuals with low
fitness, but as the GA evolves, the fitness should increase, indicating progress towards a
better solution.
3. Convergence Criteria: Convergence criteria are conditions that determine when the GA
can stop iterating. These criteria can be based on various factors, such as a maximum
number of generations, a satisfactory fitness threshold, or a small improvement in fitness
over a certain number of generations. When the convergence criteria are met, the GA
terminates, assuming that a satisfactory solution has been obtained.
4. Diversity Maintenance: While convergence is desirable for finding good solutions, it is
essential to maintain diversity in the population to prevent premature convergence.
Premature convergence occurs when the GA gets trapped in a suboptimal solution and fails
to explore other regions of the search space. To avoid premature convergence, techniques
such as diversity-preserving selection, adaptive mutation rates, or multiple runs with
different parameter settings can be employed.
5. Parameter Tuning: The convergence of a GA can be influenced by various parameters,
such as population size, mutation rate, selection strategy, and crossover operator. Proper
parameter tuning is crucial to ensure a balance between exploration and exploitation.
Adjusting the parameters can help control the convergence speed and the quality of the
obtained solutions.
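The convergence criteria in point 3 can be combined into a single stopping check. The sketch below is one illustrative way to do this, assuming fitness is being maximised; the function name and the threshold values are assumptions, not a standard API:

```python
def has_converged(best_fitness_history, patience=20, min_improvement=1e-6,
                  max_generations=500):
    """Return True when a GA run should stop, based on common criteria.

    best_fitness_history: best fitness value recorded at each generation
    patience: number of recent generations over which improvement is measured
    min_improvement: smallest fitness gain still counted as progress
    max_generations: hard cap on the number of generations
    """
    gen = len(best_fitness_history)
    if gen >= max_generations:                      # criterion: generation cap
        return True
    if gen > patience:
        recent_gain = best_fitness_history[-1] - best_fitness_history[-1 - patience]
        if recent_gain < min_improvement:           # criterion: stagnation
            return True
    return False
```

Calling this once per generation implements both a maximum-generation limit and a "small improvement over a certain number of generations" criterion in one place.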
It's important to note that the convergence of a GA is problem-specific and can vary based
on the complexity of the optimization problem, the representation and encoding of the
genetic material, the fitness landscape, and the effectiveness of the genetic operators. Some
problems may require longer computation times or more sophisticated techniques to
achieve convergence to satisfactory solutions.
Monitoring and analysing convergence is an important part of using GA effectively. Tracking
convergence can be done by observing the population statistics, analysing fitness trends
over generations, and visualising the search progress. These measures can provide insights
into the convergence behaviour and guide adjustments to the algorithm or parameters if
necessary.
Overall, achieving convergence in a GA involves balancing exploration and exploitation,
maintaining diversity, and carefully tuning the parameters to find optimal or near-optimal
solutions to the given optimization problem.
7. BITWISE OPERATION IN GA
In genetic algorithms (GA), bitwise operations can be used for various purposes, particularly
when dealing with binary representations of individuals or genes.
Bitwise operations manipulate individual bits in binary numbers. Here are some commonly
used bitwise operations in GA:
1. Bitwise AND (&): Performs a logical AND operation between two binary numbers,
comparing the corresponding bits. If both bits are 1, the result is 1; otherwise, it is 0.
2. Bitwise OR (|): Performs a logical OR operation between two binary numbers, comparing
the corresponding bits. If at least one of the bits is 1, the result is 1; otherwise, it is 0.
3. Bitwise XOR (^): Performs a logical XOR (exclusive OR) operation between two binary
numbers, comparing the corresponding bits. If the bits are different, the result is 1; if they
are the same, the result is 0.
4. Bitwise NOT (~): Performs a logical negation operation on a binary number, flipping all the
bits. If a bit is 0, it becomes 1, and if it is 1, it becomes 0.
These bitwise operations can be used in GA for various purposes, such as crossover and
mutation operations. For example, during crossover, you can combine the binary
representations of two parent individuals using bitwise operations to create offspring
individuals. Similarly, during mutation, you can flip individual bits using bitwise operations to
introduce small changes in the binary representation.
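As an illustration, uniform crossover and bit-flip mutation can be written directly with the operators above, assuming chromosomes are stored as fixed-width Python integers; the bit length and function names below are illustrative choices:

```python
import random

BITS = 8  # chromosome length in bits (illustrative)

def uniform_crossover(parent_a, parent_b, mask=None):
    """Mix two parents bit by bit using a (random) mask.

    Where a mask bit is 1 the first child takes parent_a's bit;
    where it is 0 it takes parent_b's bit. The second child is the mirror image.
    """
    if mask is None:
        mask = random.getrandbits(BITS)
    child1 = (parent_a & mask) | (parent_b & ~mask)   # AND, OR, NOT combined
    child2 = (parent_b & mask) | (parent_a & ~mask)
    width = (1 << BITS) - 1                           # keep results to BITS bits
    return child1 & width, child2 & width

def bit_flip_mutation(chromosome, position):
    """Flip a single bit by XOR-ing with a one-bit mask."""
    return chromosome ^ (1 << position)
```

XOR is the natural choice for mutation because it flips exactly the bits set in the mask, while AND/OR/NOT together implement the mask-based exchange of genetic material in crossover.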
It's worth noting that the specific use of bitwise operations in GA can vary depending on the
problem domain and the encoding scheme chosen for representing individuals. The bitwise
operations mentioned above are just some commonly used ones, but there can be
additional operations or variations based on specific requirements.
8. MULTI-LEVEL OPTIMIZATION
Multi-level optimization in genetic algorithms (GA) refers to the process of simultaneously
optimising multiple levels or hierarchies of variables or objectives within a single GA
framework. It is often used when dealing with complex optimization problems that have
multiple levels of decision-making or multiple conflicting objectives.
In traditional single-level optimization, the objective function is defined based on a set of
decision variables, and the GA aims to find the optimal values for those variables. However,
in multi-level optimization, there are additional levels or layers of decision variables and
objectives that need to be considered.
There are two main approaches to multi-level optimization in GA:
1. Single Population Approach: In this approach, a single population of individuals is evolved
to simultaneously optimise all levels. Each individual in the population represents a solution
with multiple levels of decision variables. The GA operators such as selection, crossover, and
mutation are modified to handle the multi-level structure. Fitness evaluation considers all
levels of objectives and constraints. This approach allows for interactions and trade-offs
between different levels but can be computationally expensive due to the increased
complexity.
2. Multiple Population Approach: In this approach, separate populations are maintained for
each level of optimization. Each population focuses on optimising a specific level of decision
variables and objectives. The populations may interact with each other through migration or
information exchange mechanisms. This approach allows for parallel processing and can be
computationally efficient. However, it may not capture the full interactions and trade-offs
between different levels as effectively as the single population approach.
The choice between these approaches depends on the problem characteristics, the
complexity of the optimization levels, and the available computational resources.
To implement multi-level optimization in GA, the following steps are typically involved:
1. Define the hierarchical structure: Identify the different levels of decision variables and
objectives that need to be optimised.
2. Define the encoding scheme: Determine how the hierarchical structure will be encoded
in the GA representation. This could involve using different segments or chromosomes to
represent different levels.
3. Modify the GA operators: Adapt the selection, crossover, and mutation operators to
handle the multi-level structure. The operators should consider the constraints and
objectives at each level appropriately.
4. Evaluate fitness: Develop a fitness evaluation procedure that considers all levels of
objectives and constraints. The fitness function should capture the trade-offs and
interactions between different levels.
5. Run the GA: Execute the GA algorithm, evolving the populations and optimising the
decision variables and objectives at each level.
6. Termination criteria: Define the termination criteria for the GA, such as a maximum
number of generations or a convergence threshold.
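The steps above can be sketched for a toy bilevel problem in which an outer level chooses a variable x and an inner level chooses the best response y for each candidate x (the multiple-population approach). The objective functions, names, and parameters below are illustrative assumptions, not a general framework:

```python
import random

def inner_fitness(x, y):
    """Lower-level objective: for a fixed x, prefer y close to x."""
    return -((y - x) ** 2)

def outer_fitness(x, best_y):
    """Upper-level objective, evaluated at the inner level's best response."""
    return -((x - 2.0) ** 2 + (best_y - x) ** 2)

def evolve(population, fitness, sigma=0.1, generations=40):
    """Tiny GA for one level: keep the best half, mutate survivors to refill."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        population = survivors + [v + random.gauss(0, sigma) for v in survivors]
    return max(population, key=fitness)

def bilevel_ga(outer_generations=30, pop_size=20):
    """Two nested populations: an inner GA is run for every outer candidate."""
    random.seed(0)                       # fixed seed for reproducibility
    outer_pop = [random.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(outer_generations):
        scored = []
        for x in outer_pop:
            # inner GA finds the best-response y*(x) for this candidate x
            inner_pop = [random.uniform(-5, 5) for _ in range(pop_size)]
            y_star = evolve(inner_pop, lambda y: inner_fitness(x, y))
            scored.append((outer_fitness(x, y_star), x))
        scored.sort(reverse=True)
        survivors = [x for _, x in scored[: pop_size // 2]]
        outer_pop = survivors + [x + random.gauss(0, 0.1) for x in survivors]
    return outer_pop[0]                  # best outer-level solution found
```

Because the inner optimum here is y = x, the outer fitness reduces to -(x - 2)², so the run should settle near x = 2. The nesting also makes the computational cost of this approach visible: every outer evaluation triggers a full inner run.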
By incorporating multi-level optimization in GA, it becomes possible to handle complex
problems with multiple levels of decision-making or conflicting objectives, leading to more
robust and comprehensive solutions.
9. GENETIC ALGORITHM BASED BACK PROPAGATION NETWORK
Genetic algorithms (GA) and backpropagation networks (also known as artificial neural
networks) are two distinct techniques used in machine learning. While GA is an evolutionary
optimization algorithm, backpropagation is a method for training neural networks.
However, it is possible to combine these techniques to create a hybrid approach that
leverages the strengths of both methods. This hybrid approach is called a Genetic
Algorithm-based Backpropagation Network (GABP).
Backpropagation is a supervised learning algorithm commonly used to train artificial neural
networks. It involves calculating the gradient of the network’s loss function with respect to
the weights and biases of the network, and then updating these parameters using gradient
descent or a similar optimization algorithm. Backpropagation requires a known target
output for each input, and it iteratively adjusts the network’s parameters to minimise the
difference between the predicted output and the target output.
On the other hand, GA is a class of optimization algorithms inspired by the process of natural
selection. They use techniques such as selection, crossover, and mutation to evolve a
population of candidate solutions towards an optimal solution. In the context of neural
networks, a Genetic Algorithm can be used to search for an optimal set of weights and
biases that minimise the network’s error or maximise its performance.
The Genetic Algorithm is the initial step, which involves the generation, evaluation,
selection, crossover, and mutation of candidate solutions. The output of the GA, which
represents the optimal weights and biases, is then passed to the Backpropagation Algorithm.
The BPN takes the optimised weights and biases and uses them to train the neural network.
It performs the forward pass to calculate the predicted output, computes the loss function
by comparing the predicted output with the target output, and then performs the backward
pass to calculate the gradients. The gradients are used to update the weights and biases
through an optimization algorithm such as gradient descent. This process is repeated
iteratively until the network's performance reaches a satisfactory level.
In a GABP, the genetic algorithm is used to optimise the architecture and parameters of the
neural network.
Here's a high-level overview of how a GABP works:
1. Encoding: Each individual in the GA population represents a set of weights and biases of
the neural network. The genetic encoding scheme defines how the neural network
parameters are encoded as chromosomes.
2. Initialization: Initialise a population of individuals with random weights and biases for the
neural network.
3. Evaluation: Evaluate the fitness of each individual in the population by training and
testing the neural network using the backpropagation algorithm. The fitness function
typically measures the performance of the neural network on a specific task, such as
classification accuracy or mean squared error.
4. Selection: Select individuals from the population based on their fitness scores to form the
parent pool for reproduction.
5. Crossover and Mutation: Apply genetic operators, such as crossover and mutation, to the
selected individuals to create new offspring. Crossover involves combining the genetic
material of two parents to create a child, while mutation introduces small random changes
in the offspring's genetic material.
6. Evaluation and Selection: Evaluate the fitness of the offspring by training and testing the
neural network using backpropagation. Select individuals from the parent population and
the offspring population to form the next generation based on their fitness scores.
7. Repeat: Repeat steps 5 and 6 for multiple generations until a termination condition is
met, such as reaching a maximum number of generations or achieving satisfactory
performance.
By iteratively applying the genetic operators and backpropagation training, the GABP aims to
evolve a neural network architecture and parameters that perform well on the given task.
The genetic algorithm explores the search space to find promising solutions, while
backpropagation fine-tunes the neural network weights based on error gradients.
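This division of labour can be sketched with a single linear neuron fitted to y = 3x + 2 standing in for a full network: a small GA supplies coarse weights, and a few gradient-descent steps play the role of backpropagation fine-tuning. All names, the data set, and the hyperparameters below are illustrative assumptions:

```python
import random

# training data for the target function y = 3x + 2
DATA = [(x / 10.0, 3 * (x / 10.0) + 2) for x in range(-20, 21)]

def mse(w, b):
    """Mean squared error of the linear 'network' y = w*x + b."""
    return sum((w * x + b - y) ** 2 for x, y in DATA) / len(DATA)

def ga_search(pop_size=30, generations=50):
    """Stage 1: GA explores weight space for a good (w, b) starting point."""
    pop = [(random.uniform(-10, 10), random.uniform(-10, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda wb: mse(*wb))            # lower error = fitter
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            (w1, b1), (w2, b2) = random.sample(parents, 2)
            children.append(((w1 + w2) / 2 + random.gauss(0, 0.2),   # crossover
                             (b1 + b2) / 2 + random.gauss(0, 0.2)))  # + mutation
        pop = parents + children
    return min(pop, key=lambda wb: mse(*wb))

def gradient_finetune(w, b, lr=0.05, steps=200):
    """Stage 2: gradient descent fine-tunes the GA result (the 'backprop' role)."""
    n = len(DATA)
    for _ in range(steps):
        dw = sum(2 * (w * x + b - y) * x for x, y in DATA) / n
        db = sum(2 * (w * x + b - y) for x, y in DATA) / n
        w, b = w - lr * dw, b - lr * db
    return w, b

random.seed(1)
w0, b0 = ga_search()           # GA output: coarse weights
w, b = gradient_finetune(w0, b0)  # backprop-style refinement
```

The GA stage is gradient-free, so it cannot get stuck in the way a poorly initialised gradient method can; the gradient stage then converges quickly because it starts from an already good region.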
The GABP approach allows for the automatic optimization of neural network architectures,
such as the number of layers, the number of neurons per layer, and the connectivity
patterns, along with the associated weights and biases. This hybrid approach can be
particularly useful in situations where manually designing the neural network architecture is
challenging or time-consuming.
It's worth noting that there are various variations and extensions of the GABP approach,
each with its own specific implementation details and optimizations. The exact
implementation can vary based on the problem at hand and the specific requirements of the
application.
10. GA BASED WEIGHT DETERMINATION
Genetic algorithms (GA) can be used for weight determination in various machine learning
algorithms, particularly those that involve optimization problems. Weight determination
refers to finding the optimal values for the weights of a model or algorithm to achieve the
desired performance.
Here's a general outline of how GA can be used for weight determination:
1. Encoding: The weights of the model are encoded into chromosomes of individuals in the
GA population. The encoding scheme depends on the specific problem and the
representation of weights.
2. Initialization: Initialise a population of individuals with random weight values or use a
specific initialization strategy that takes into account prior knowledge or insights.
3. Fitness Evaluation: Evaluate the fitness of each individual in the population based on how
well their weights perform on a given task or objective. The fitness function can be
problem-specific, such as classification accuracy or mean squared error.
4. Selection: Select individuals from the population based on their fitness scores to form the
parent pool for reproduction. The selection process is typically biased towards individuals
with higher fitness, as they have a higher chance of being selected.
5. Crossover: Apply crossover operators to the selected individuals to create new offspring.
Crossover involves exchanging genetic material (weights) between two or more parents to
create new combinations. The specific crossover operation depends on the encoding
scheme used for representing the weights.
6. Mutation: Introduce random changes to the weights of the offspring individuals to
maintain genetic diversity and explore new regions of the weight space. Mutation helps
prevent premature convergence and can lead to better solutions by introducing novel
variations.
7. Fitness Evaluation and Selection: Evaluate the fitness of the offspring individuals based
on the task or objective and select individuals for the next generation based on their fitness
scores.
8. Repeat: Repeat steps 5-7 for multiple generations until a termination condition is met.
Termination conditions can include reaching a maximum number of generations,
convergence to a satisfactory solution, or a specific improvement threshold.
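For the encoding step (step 1), one common choice is to map each real-valued weight onto a fixed-length bit string. A minimal decoder, assuming 10 bits per weight and a weight range of [-5, 5] (both assumptions are illustrative), might look like this:

```python
BITS_PER_WEIGHT = 10
W_MIN, W_MAX = -5.0, 5.0   # assumed range of each weight

def decode(chromosome, n_weights):
    """Decode a bit string into n_weights real-valued weights.

    Each group of BITS_PER_WEIGHT bits is read as an unsigned integer
    and mapped linearly onto [W_MIN, W_MAX].
    """
    weights = []
    for i in range(n_weights):
        bits = chromosome[i * BITS_PER_WEIGHT:(i + 1) * BITS_PER_WEIGHT]
        value = int(bits, 2)                           # integer in [0, 2^B - 1]
        scale = (W_MAX - W_MIN) / (2 ** BITS_PER_WEIGHT - 1)
        weights.append(W_MIN + value * scale)
    return weights
```

The bit count per weight fixes the resolution of the search: with 10 bits the grid spacing on [-5, 5] is about 0.0098, which is why real-valued (floating-point) encodings are often preferred when high precision is needed.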
Throughout the iterations, the GA explores different combinations of weights and gradually
improves the overall fitness of the population. The process allows for the identification of
weight configurations that optimise the desired performance metric.
It's important to note that the specific implementation details of weight determination using
GA can vary depending on the problem, the optimization algorithm being used, and the
encoding scheme chosen. Proper parameter tuning, such as population size, crossover and
mutation rates, and termination criteria, is essential for the success of the GA-based weight
determination process. Additionally, the performance of GA for weight determination can be
influenced by the complexity of the problem, the size of the weight space, and the quality of
the fitness evaluation process.
11. K-FACTOR DETERMINATION IN COLUMNS
The determination of the K-factor in columns using a genetic algorithm (GA) involves
formulating the problem as an optimization task and applying the principles of GA to find
the optimal value of the K-factor.
Here's a general outline of how a genetic algorithm can be used for K-factor determination
in columns:
1. Define the Problem: Clearly define the objective function and constraints for the
optimization problem. The objective function should capture the desired performance
criteria, such as minimising the column deflection or maximising the column's load-carrying
capacity.
2. Encoding: Encode the potential values of the K-factor into a chromosome representation.
The encoding scheme depends on the range and granularity of the K-factor values
considered. For example, if the K-factor can take discrete values, a binary or integer
encoding can be used.
3. Initialization: Initialise a population of individuals, where each individual represents a
potential solution with a specific K-factor value. The initial population can be generated
randomly or using a specific initialization strategy based on prior knowledge or insights.
4. Fitness Evaluation: Evaluate the fitness of each individual in the population based on their
K-factor value. The fitness function should be based on the objective function defined in
step 1. It can be formulated to reflect the performance criteria of the column, such as the
column's stability, deflection, or load-carrying capacity.
5. Selection: Select individuals from the population for reproduction based on their fitness
scores. Higher fitness individuals should have a higher probability of being selected, as they
represent better solutions.
6. Crossover and Mutation: Apply crossover and mutation operators to the selected
individuals to create new offspring. Crossover involves combining genetic material (K-factor
values) from two or more parents, while mutation introduces random changes to the
K-factor values to maintain diversity and explore new solutions.
7. Fitness Evaluation and Selection: Evaluate the fitness of the offspring individuals based
on the objective function and select individuals for the next generation based on their
fitness scores.
8. Repeat: Repeat steps 6 and 7 for multiple generations until a termination condition is
met. Termination conditions can include reaching a maximum number of generations,
convergence to a satisfactory solution, or a specific improvement threshold.
Throughout the iterations, the GA explores different combinations of K-factor values and
gradually improves the overall fitness of the population. The process allows for the
identification of K-factor values that optimise the desired performance criteria.
It's important to note that the success of using a genetic algorithm for K-factor
determination in columns depends on factors such as the formulation of the objective
function, the representation and encoding of the K-factor values, the selection and variation
operators used, and the size and diversity of the population. Proper parameter tuning and a
suitable fitness evaluation process are crucial for the effectiveness of the GA-based
approach.
Additionally, it's essential to consider the constraints and limitations of using a genetic
algorithm for K-factor determination, such as the computational complexity and the need for
a well-defined objective function. Professional structural engineering expertise and
knowledge are still required to ensure the accuracy and safety of the column design.