The genetic algorithm is a popular evolutionary algorithm. Inspired by Darwin’s theory of natural evolution, it is used to solve complex problems in computer science. But to do so, the algorithm’s parameters need a bit of tuning.
One of the key parameters is mutation, which makes random changes to the chromosomes (i.e. solutions) in order to increase quality (i.e. fitness). Traditionally, mutation is applied to a fixed number of genes: the same number of genes is mutated in every chromosome, regardless of its fitness.
In this tutorial, we’ll see why mutating a fixed number of genes has drawbacks, and how to replace it with adaptive mutation. Using the PyGAD Python 3 library, we’ll work through a few examples that use both random and adaptive mutation.
Genetic algorithm quick overview
The genetic algorithm is a population-based evolutionary algorithm, where a group of solutions works together to find the optimal parameters for a problem. The figure below, from this book, summarizes all the steps in the genetic algorithm.
The population of solutions is initialized randomly, where each solution consists of a number of genes. The quality of solutions is assessed using a fitness function, which returns a numeric value representing how fit the solution is.

The high-quality (high-fitness) solutions survive longer than those with low fitness. The higher the fitness, the higher the probability of selecting the solution as a parent to produce new offspring. To produce the offspring, pairs of parents mate using the crossover operation, which generates a new solution carrying genes from both parents.
After crossover, mutation is applied to add some random changes over the solution. The evolution continues through a number of generations to reach the highest-quality solution.
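To make these steps concrete, here is a minimal, self-contained toy version of that loop in plain NumPy. This is not how PyGAD implements it; the fitness function, population size, and operators are all made up for illustration:

import numpy

rng = numpy.random.default_rng(seed=1)

def toy_fitness(solution):
    # Toy fitness: the closer the gene sum is to 10, the fitter the solution.
    return -abs(solution.sum() - 10.0)

# Random initial population: 8 solutions, 4 genes each.
population = rng.uniform(low=-4, high=4, size=(8, 4))

for generation in range(50):
    fitness = numpy.array([toy_fitness(s) for s in population])
    # Selection: keep the 4 fittest solutions as parents.
    parents = population[numpy.argsort(fitness)[-4:]]
    # Crossover: each child takes half of its genes from each of two parents.
    children = numpy.empty_like(parents)
    for i in range(len(parents)):
        mate = parents[(i + 1) % len(parents)]
        children[i] = numpy.concatenate([parents[i][:2], mate[2:]])
    # Mutation: apply a random change to one random gene of each child.
    for child in children:
        child[rng.integers(4)] += rng.uniform(-1, 1)
    # Next generation: the parents survive and the children join them.
    population = numpy.vstack([parents, children])

print(max(population, key=toy_fitness))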
For more information about the genetic algorithm, read this article: Introduction to Optimization with Genetic Algorithm.
Even though the same steps are applied to all types of problems, you still need to select appropriate parameters to fit different problems. Some of these parameters include:
- The number of solutions in the population,
- Parent selection type,
- Crossover operator type,
- Mutation operator type,
- Crossover probability,
- Mutation probability,
- Fitness function.
For example, there are different types of parent selection, like rank and roulette wheel, and you should know which one to use when designing the algorithm for a specific problem.
The parameter we’ll be focusing on is mutation probability. So, let’s review the mutation operation, and whether high or low mutation probability is better.
How mutation works
Given two parents to mate, the first operation in the mating process is crossover. The produced child simply carries over some genes from its two parents. There’s nothing new in the child, as all of its genes already exist in its parents. The next figure shows how crossover works.

If there are some bad genes within the parents, they will definitely be transferred to their children after crossover. The mutation operation plays a crucial role in fixing this issue.
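To see why crossover alone can’t introduce anything new, here is a minimal sketch of single-point crossover; the parent values and the crossover point are made up for illustration:

import numpy

def single_point_crossover(parent1, parent2, point):
    # The child takes the genes before the crossover point from parent1
    # and the remaining genes from parent2.
    return numpy.concatenate([parent1[:point], parent2[point:]])

parent1 = numpy.array([4.0, -2.0, 3.5, 5.0])
parent2 = numpy.array([1.0, 7.0, -0.5, 2.0])
child = single_point_crossover(parent1, parent2, point=2)
print(child)  # [ 4.  -2.  -0.5  2. ] -- every gene already exists in a parent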
During mutation, some genes in each child are randomly selected and random changes are applied to them. Each gene is selected based on its own randomly generated probability: if that probability is smaller than or equal to a predefined threshold, the gene is selected for mutation; otherwise, it’s skipped. We’ll discuss mutation probability later on.
Let’s assume there are 4 genes in the solution, as in the next figure, where only the last gene is selected for mutation. A random change replaces its old value of 2 with a new value of 4.
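In code, the per-gene selection and random change might look like the following minimal sketch; the threshold, gene values, and range of random changes are made up for illustration:

import numpy

rng = numpy.random.default_rng(seed=3)

def random_mutation(solution, mutation_probability):
    mutated = solution.copy()
    for gene_idx in range(len(mutated)):
        # Each gene draws its own random probability; if it falls at or
        # below the threshold, the gene receives a random change.
        if rng.random() <= mutation_probability:
            mutated[gene_idx] += rng.uniform(-2, 2)
    return mutated

child = numpy.array([1.0, 3.0, -1.5, 2.0])
print(random_mutation(child, mutation_probability=0.25))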

After briefly reviewing how random mutation works, next we’ll solve a problem using the genetic algorithm with random mutation.
Random mutation example
In this tutorial, we’ll be using an open-source Python 3 library called PyGAD, which offers a simple interface to solve problems using the genetic algorithm. For more information, please check the documentation. The source code is available at github.com/ahmedfgad.
Install PyGAD through pip as follows:
pip install pygad
Because PyGAD uses Python 3, use pip3 instead of pip for Linux and Mac.
After installing, let’s use it to optimize a simple linear equation with 4 inputs and 1 output.
Y = w1X1 + w2X2 + w3X3 + w4X4
We want to get the values of w1 to w4 to make the following equation hold:
Y = w1(4) + w2(-2) + w3(3.5) + w4(5)
Here is the Python code to solve this problem:
import pygad
import numpy

function_inputs = [4, -2, 3.5, 5]
desired_output = 44

def fitness_func(solution, solution_idx):
    output = numpy.sum(solution * function_inputs)
    fitness = 1.0 / (numpy.abs(output - desired_output) + 0.000001)
    return fitness

ga_instance = pygad.GA(num_generations=100,
                       sol_per_pop=5,
                       num_genes=4,
                       num_parents_mating=2,
                       fitness_func=fitness_func,
                       mutation_type="random")

ga_instance.run()
ga_instance.plot_result()
There are 3 steps to follow:
- Build the fitness function which is a regular Python function (maximization function).
- Create an instance of the pygad.GA class.
- Call the run() method.
The fitness function is named fitness_func() and it must accept 2 parameters: the solution and its index within the population. It should return a number representing the solution’s fitness; the higher this value, the better the solution.
In the instance of the pygad.GA class, the following arguments are used:
- num_generations=100: Number of generations.
- sol_per_pop=5: Population size.
- num_genes=4: Number of genes.
- num_parents_mating=2: Number of parents to mate.
- fitness_func=fitness_func: Fitness function.
- mutation_type="random": Type of mutation operation, which already defaults to random.
To run the genetic algorithm, just call the run() method. After it completes, the plot_result() method can be called to show a plot summarizing the fitness values of the best solutions across all generations.

After the 100 generations are complete, some information about the best solution found by the genetic algorithm can be returned using best_solution().
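For example:

solution, solution_fitness, solution_idx = ga_instance.best_solution()
print("Best solution:", solution)
print("Best solution fitness:", solution_fitness)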
The best solution has a fitness value of 761.4506452116121, and here are the values of w1 to w4:
w1=2.26799886
w2=-0.86295921
w3=4.85859239
w4=3.2391401
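Plugging these weights back into the equation confirms that the output is very close to the desired value of 44:

# Reusing numpy and function_inputs from the script above.
weights = numpy.array([2.26799886, -0.86295921, 4.85859239, 3.2391401])
output = numpy.sum(weights * function_inputs)
print(output)  # ~43.9987, so fitness = 1 / (|43.9987 - 44| + 0.000001) ~= 761.45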
Next, let’s discuss the importance of mutation probability.
Constant mutation probability
The mutation operation is essential in the genetic algorithm. It changes the values of some genes to increase the quality of new children. To decide whether a gene is mutated or not, mutation probability is used.
In the traditional genetic algorithm, there’s only a single constant value for mutation probability. So, regardless of the fitness value of the solution, the same number of genes are mutated.
Because it is constant, the mutation probability must be set to a value that is reasonable for the problem at hand. If the probability is high, then many genes will be mutated. If too many genes are mutated in a high-quality solution, the random changes might make that solution worse by disrupting too many of its genes. For a low-quality solution, mutating a high number of genes is beneficial to increase its quality, as it changes many of its bad genes.
On the other hand, a small mutation probability causes only a few genes to be mutated. For a high-quality solution, randomly changing just some of its genes keeps its quality high, with the possibility of increasing it. For a low-quality solution, only a small number of its genes are changed, so its quality might remain low.
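A quick back-of-the-envelope check makes this concrete: on average, the number of mutated genes is the mutation probability times the number of genes (the gene count here is made up for illustration):

num_genes = 10
for mutation_probability in (0.6, 0.001):
    expected = mutation_probability * num_genes
    print(f"probability={mutation_probability}: ~{expected} mutated genes on average")
# probability=0.6: ~6.0 mutated genes on average
# probability=0.001: ~0.01 mutated genes on average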
The next figure summarizes the previous discussion about using a constant mutation probability:
- A small mutation probability is good for high-quality solutions, but bad for low-quality ones.
- A high mutation probability is good for low-quality solutions, but bad for high-quality ones.

Next, we’ll edit the previous Python example to feed a constant mutation probability, using the mutation_probability argument.
Constant mutation probability Python example
PyGAD offers an argument named mutation_probability in the constructor of the pygad.GA class to set a constant mutation probability, used across all solutions regardless of their fitness (i.e. quality). In the next example, the constant probability is 0.6, which means a gene is mutated if its randomly generated probability is <= 0.6.
import pygad
import numpy

function_inputs = [4, -2, 3.5, 5]
desired_output = 44

def fitness_func(solution, solution_idx):
    output = numpy.sum(solution * function_inputs)
    fitness = 1.0 / (numpy.abs(output - desired_output) + 0.000001)
    return fitness

ga_instance = pygad.GA(num_generations=100,
                       sol_per_pop=5,
                       num_genes=4,
                       num_parents_mating=2,
                       fitness_func=fitness_func,
                       mutation_type="random",
                       mutation_probability=0.6)

ga_instance.run()
ga_instance.plot_result()
If the mutation probability is a very small value, like 0.001, then there’s little improvement in the solutions. After 100 generations, the fitness value of the best solution is 0.077, compared to 328.35 when the probability is 0.6 in the previous example.
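To reproduce that comparison, only the mutation_probability argument changes relative to the previous script:

# Reusing fitness_func and the other arguments from the previous example;
# only mutation_probability changes.
ga_instance = pygad.GA(num_generations=100,
                       sol_per_pop=5,
                       num_genes=4,
                       num_parents_mating=2,
                       fitness_func=fitness_func,
                       mutation_type="random",
                       mutation_probability=0.001)
ga_instance.run()
ga_instance.plot_result()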

Now we’ll move on to adaptive mutation, which adapts mutation probability according to the fitness/quality of the solution.
Adaptive mutation
Adaptive mutation was originally proposed in a paper titled Adaptive mutation in genetic algorithms. The paper summarized the pitfalls of using a constant mutation probability:
“The weak point of “classical” GAs is the total randomness of mutation, which is applied equally to all chromosomes, irrespective of their fitness. Thus a very good chromosome is equally likely to be disrupted by mutation as a bad one.
On the other hand, bad chromosomes are less likely to produce good ones through crossover, because of their lack of building blocks, until they remain unchanged. They would benefit the most from mutation and could be used to spread throughout the parameter space to increase the search thoroughness. So there are two conflicting needs in determining the best probability of mutation.”
The paper suggested that, when working with a constant mutation probability, the best compromise is to select a low probability. Remember, a low mutation probability across all solutions is good for high-quality solutions, but not for low-quality ones.
“Usually, a reasonable compromise in the case of a constant mutation is to keep the probability low to avoid disruption of good chromosomes, but this would prevent a high mutation rate of low-fitness chromosomes. Thus a constant probability of mutation would probably miss both goals and result in a slow improvement of the population.”
The paper suggested the use of adaptive mutation to solve the problems of constant mutation. Here is how adaptive mutation works:
- Calculate the average fitness value of the population (f_avg);
- For each chromosome, calculate its fitness value (f);
- If f<f_avg, then this solution is regarded as a low-quality solution and thus the mutation rate should be kept high because this would increase the quality of this solution;
- If f>f_avg, then this solution is regarded as a high-quality solution and thus the mutation rate should be kept low to avoid disrupting this high-quality solution.
In PyGAD, the boundary case f=f_avg is also treated as a high-quality solution.
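As a minimal sketch of this rule (the function and the two rates are illustrative, not PyGAD’s internal code):

def adaptive_mutation_probability(f, f_avg, low_quality_rate=0.6, high_quality_rate=0.2):
    # Below-average solutions get the high rate to push their quality up;
    # average-or-better solutions get the low rate to avoid disruption.
    if f < f_avg:
        return low_quality_rate
    return high_quality_rate

print(adaptive_mutation_probability(f=0.5, f_avg=1.0))  # 0.6 (low-quality solution)
print(adaptive_mutation_probability(f=1.5, f_avg=1.0))  # 0.2 (high-quality solution)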
The next figure summarizes the previous steps.

Next, we’ll build a Python example that uses adaptive mutation.
Adaptive mutation Python example
Adaptive mutation is supported in PyGAD starting from release 2.10.0. Make sure you have at least PyGAD 2.10.0 installed:
pip install pygad==2.10.*
You can also check the installed version by printing the __version__ attribute as follows:
import pygad
print(pygad.__version__)
To use adaptive mutation in PyGAD, here’s what you need to change:
- Set the mutation_type argument to "adaptive": mutation_type="adaptive";
- Assign a list/tuple/numpy.ndarray with exactly 2 values to the mutation_probability argument, for example mutation_probability=[0.57, 0.32]. The first value (0.57) is the mutation probability for low-quality solutions, and the second value (0.32) is the mutation probability for high-quality solutions. PyGAD expects the first value to be higher than the second.
The next code uses adaptive mutation to solve the linear problem.
import pygad
import numpy

function_inputs = [4, -2, 3.5, 5]
desired_output = 44

def fitness_func(solution, solution_idx):
    output = numpy.sum(solution * function_inputs)
    fitness = 1.0 / (numpy.abs(output - desired_output) + 0.000001)
    return fitness

ga_instance = pygad.GA(num_generations=100,
                       sol_per_pop=5,
                       num_genes=4,
                       num_parents_mating=2,
                       fitness_func=fitness_func,
                       mutation_type="adaptive",
                       mutation_probability=[0.6, 0.2])

ga_instance.run()
ga_instance.plot_result()
At one run, the fitness value found after the 100 generations is 974 and the 4 parameters w1 to w4 are as follows:
w1=2.73998896
w2=-2.7606857
w3=-1.67836889
w4=6.67838764
The next figure shows how the fitness value of the best solution changes with each generation.

Rather than using the mutation_probability argument, PyGAD supports using other arguments with adaptive mutation:
- mutation_percent_genes: The percentage of genes to mutate. For example, if the solution has 100 genes and mutation_percent_genes=20, then 20 genes are mutated.
- mutation_num_genes: Explicitly specifies the number of genes to mutate.
Like the mutation_probability argument, both mutation_percent_genes and mutation_num_genes accept a list/tuple/numpy.ndarray with exactly 2 elements when mutation_type="adaptive".
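For example, here is the same adaptive setup driven by gene counts instead of probabilities, reusing fitness_func from the earlier examples. With 4 genes per solution, [3, 1] mutates 3 genes in below-average solutions and 1 gene in the rest:

# Adaptive mutation controlled by gene counts: 3 genes are mutated in
# low-quality solutions, 1 gene in high-quality ones.
ga_instance = pygad.GA(num_generations=100,
                       sol_per_pop=5,
                       num_genes=4,
                       num_parents_mating=2,
                       fitness_func=fitness_func,
                       mutation_type="adaptive",
                       mutation_num_genes=[3, 1])
ga_instance.run()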
Conclusion
That’s it! We’ve discussed the genetic algorithm and adaptive mutation, which selects the mutation probability based on the fitness of the solution.
We started with an overview of the genetic algorithm, and focused on the mutation operation. You saw the drawbacks of using a constant mutation probability, and how to solve them with adaptive mutation. I hope you enjoyed this tutorial. Check out the PyGAD library to implement both constant and adaptive mutation. Thanks for reading!
For more information
- Ahmed Fawzy Gad, Practical computer vision applications using deep learning with CNNs, Apress, 978-1484241660, 2018
- Libelli, S. Marsili, and P. Alba. “Adaptive mutation in genetic algorithms.” Soft computing 4.2 (2000): 76-80
- Dan Simon, Evolutionary Optimization Algorithms, Wiley, 978-0470937419, 2013
- Ahmed Gad, Introduction to Optimization with Genetic Algorithm, TowardsDataScience