
The N-Queens problem is a classical AI problem. Its name is derived from the allowed moves for the queen piece in chess. Queens are allowed to move horizontally, vertically, or diagonally, backward and forward, with the only restriction being that they can move in only one direction at a time. A queen that can reach another piece in one move captures it.

The N-Queens problem is based on the notion of trying to place N queens on an N x N grid, such that no queen will be able to capture any other queen.

The N-queens problem is typical of many combinatorial problems, in that it is simple to state and relatively easy to solve for small N, but becomes difficult for large N. There are several ways to solve the N-queens problem, such as trying all the permutations, backtracking methods, and reinforcement learning methods. In this project, a genetic algorithm will be used to solve the problem, using the GAlib package.

Genetic Algorithms are adaptive methods which may be used to solve search and optimization problems. They are based on the genetic processes of biological organisms. Over many generations, natural populations evolve according to the principles of natural selection and “survival of the fittest”. By mimicking this process, genetic algorithms are able to “evolve” solutions to real world problems, if they have been suitably encoded.

Genetic Algorithms use a direct analogy of natural behavior. They work with a population of “individuals”, each representing a possible solution to a given problem.

Each individual is assigned a “fitness score” according to how good a solution to the problem it is. The highly fit individuals are given opportunities to “reproduce”, by “cross breeding” with other individuals in the population. This produces new individuals known as “offspring”, which share some features taken from each “parent”. The least fit members of the population are less likely to be selected for reproduction, and so will “die out”.

A whole new population of possible solutions is thus produced by selecting the best individuals from the current “generation”, and mating them to produce a new set of individuals. This new generation contains a higher proportion of the characteristics possessed by the good members of the previous generation. In this way, over many generations, good characteristics are spread throughout the population, being mixed and exchanged with other good characteristics as they go. By favouring the mating of the more fit individuals, the most promising areas of the search space are explored. If the genetic algorithm has been designed well, the population will converge to an optimal solution to the problem.

The power of genetic algorithms comes from the fact that the technique is robust, and can deal successfully with a wide range of problem areas, including those which are difficult for other methods to solve. Genetic algorithms are not guaranteed to find the global optimum solution to a problem, but they are generally good at finding “acceptably good” solutions “acceptably quickly”. Where specialized techniques exist for solving particular problems, they are likely to out-perform genetic algorithms in both speed and accuracy of the final result. The main ground for genetic algorithms, then, is in difficult areas where no such techniques exist. Even where existing techniques work well, improvements have been made by hybridizing them with a genetic algorithm.

Before a genetic algorithm can be run, a suitable representation for the problem must be devised. A fitness function is also required to assign a figure of merit to each coded solution. In summary, the procedure of a genetic algorithm is as below:

1. generate initial population
2. compute fitness of each individual
3. select individuals for mating
4. mate individuals to produce offspring
5. mutate offspring
6. insert offspring into population
7. if stopping criteria satisfied, go to step 8, else go to step 3
8. finish

Figure 1: The procedures of genetic algorithms
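The loop in Figure 1 can be sketched independently of GAlib as a minimal genetic algorithm. The genome here is a hypothetical bit string whose fitness is simply the number of ones, standing in for a real objective function; `evolve`, `pickParent`, and the parameter values are illustrative names, not GAlib's API:

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

using Genome = std::vector<int>;

// Toy fitness: number of ones in the bit string (stands in for a real objective).
int fitness(const Genome& g) {
    int sum = 0;
    for (int bit : g) sum += bit;
    return sum;
}

// Pick the fitter of two random individuals (tournament selection).
const Genome& pickParent(const std::vector<Genome>& pop, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pick(0, pop.size() - 1);
    const Genome& a = pop[pick(rng)];
    const Genome& b = pop[pick(rng)];
    return fitness(a) >= fitness(b) ? a : b;
}

// One run of the generate / evaluate / select / mate / mutate loop of Figure 1.
Genome evolve(std::size_t popSize, std::size_t genomeLen,
              int generations, std::mt19937& rng) {
    std::uniform_int_distribution<int> bit(0, 1);
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::uniform_int_distribution<std::size_t> cut(0, genomeLen - 1);

    // Step 1: generate the initial population.
    std::vector<Genome> pop(popSize, Genome(genomeLen));
    for (auto& g : pop)
        for (auto& v : g) v = bit(rng);

    Genome best = pop[0];
    for (int gen = 0; gen < generations; ++gen) {
        std::vector<Genome> next;
        next.push_back(best);  // elitism: keep the best individual so far
        while (next.size() < popSize) {
            // Steps 3-4: select two parents and mate them (one-point crossover).
            Genome child = pickParent(pop, rng);
            const Genome& other = pickParent(pop, rng);
            std::size_t point = cut(rng);
            std::copy(other.begin() + point, other.end(), child.begin() + point);
            // Step 5: mutate the offspring (per-gene bit flip).
            for (auto& v : child)
                if (coin(rng) < 0.05) v = 1 - v;
            next.push_back(child);  // step 6: insert offspring into population
        }
        pop = std::move(next);
        // Step 2: (re)compute fitness and track the best individual.
        for (const auto& g : pop)
            if (fitness(g) > fitness(best)) best = g;
    }
    return best;  // steps 7-8: stop after a fixed number of generations
}
```

With a population of 30 and a few hundred generations, this toy run typically evolves the all-ones string. GASimpleGA follows the same loop, but with GAlib's genome, selector, and scaling objects plugged in.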

First of all, we need to choose a genetic algorithm from the GAlib package. There are four kinds of genetic algorithm available: GADemeGA, GAIncrementalGA, GASimpleGA and GASteadyStateGA. Any one of them could be used; to keep things simple, GASimpleGA was chosen.

After that, we need to define a representation for the N-queens problem. There are two ways to do that. The usual way is to represent the board as a 2-dimensional binary array, as shown in Figure 2b. However, no two queens in a solution can share a row, and no two can share a column. So, for a more compact representation, we can represent the chessboard as a 1-dimensional integer array, as shown in Figure 2c: the number “X” at a given position in the array means there is a queen at row “X” of that column.

For the 2-dimensional binary array chessboard, there are in total (N×N)! / (N! × (N² − N)!) = 64! / (8! × 56!) = 4,426,165,368 possible placements for N = 8. For the 1-dimensional integer array chessboard, there are only N! = 40,320 permutations. Since the search space for the 1-dimensional integer array is much smaller than for the 2-dimensional binary array, the 1-dimensional integer array, which is GA1DArrayGenome in the GAlib package, is used in this project.
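The two counts above can be checked with a short calculation. `binom` and `factorial` here are just illustrative helpers: the binomial coefficient C(N², N) counts the ways to place N indistinguishable queens on N² squares, and N! counts the permutation boards:

```cpp
#include <cassert>

// Number of ways to choose k squares out of n: n! / (k! * (n - k)!),
// computed iteratively to keep intermediate values small.
unsigned long long binom(unsigned long long n, unsigned long long k) {
    unsigned long long result = 1;
    for (unsigned long long i = 1; i <= k; ++i)
        result = result * (n - k + i) / i;
    return result;
}

// Number of 1-dimensional integer (permutation) boards: n!.
unsigned long long factorial(unsigned long long n) {
    unsigned long long result = 1;
    for (unsigned long long i = 2; i <= n; ++i) result *= i;
    return result;
}
```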

Figure 2a: Original chessboard
Figure 2b: 2-dimensional binary array chessboard

5 0 4 1 7 2 6 3

Figure 2c: 1-dimensional integer array chessboard

According to Figure 1, the first step is to generate the initial population. Since there isn’t a default initialization function in GA1DArrayGenome, we have to create one. Firstly, an array with values {0, 1, 2, …, N-2, N-1} is created. Then, two positions are randomly chosen and their values are swapped. The total number of swaps is set to N. After that, we get a random individual. This method is used because the total running time is just O(N).
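The initializer described above might look like this; `randomGenome` is a sketch, not GAlib's actual API (GAlib would call such a function through the genome's initializer hook):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <random>
#include <vector>

// Build a random permutation genome: start from the identity {0, 1, ..., N-1}
// and perform N random swaps, for O(N) total running time.
std::vector<int> randomGenome(int n, std::mt19937& rng) {
    std::vector<int> g(n);
    std::iota(g.begin(), g.end(), 0);        // {0, 1, ..., N-1}
    std::uniform_int_distribution<int> pos(0, n - 1);
    for (int i = 0; i < n; ++i)
        std::swap(g[pos(rng)], g[pos(rng)]); // swap two random positions
    return g;
}
```

Note that N random swaps produce a random but not perfectly uniform permutation; `std::shuffle` (Fisher–Yates) would give a uniform one at the same O(N) cost.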

After generating the population, the next step is to compute the fitness of each individual. There are five scaling methods: GANoScaling, GALinearScaling, GASigmaTruncationScaling, GAPowerLawScaling and GASharing. For the objective function, each individual is given an initial value of N×N. From this, the number of queens which the queen in the first column can attack is subtracted, and the same is done for the queens in the second, third, and up to the Nth column. The worst case is when all queens lie on the same diagonal; in that case, the total number subtracted is N(N−1). Since the initial value is larger than this, the objective score can never be negative.

The objective score is this final number divided by 2. Since scores are never negative, GASigmaTruncationScaling, which exists to handle negative scores, is unnecessary. GANoScaling should not be used either, as the objective scores of the individuals are quite big and close to each other. GAPowerLawScaling is also not considered because it returns a very big number for each individual after scaling. GASharing is too complicated and not suitable for this simple problem. So, GALinearScaling, the default scaling method of GASimpleGA, which does not allow negative objective scores, is used.
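The objective computation described above can be sketched as follows (`objective` is an illustrative name, not GAlib's objective-function signature). With the permutation representation, only diagonal attacks are possible, and an optimal board scores N²/2 — for example, the Figure 2c board {5, 0, 4, 1, 7, 2, 6, 3} scores 8×8/2 = 32:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Objective score for a permutation board: start from N*N and subtract, for
// each queen, the number of queens it can attack. Since no two queens share a
// row or column, only diagonal attacks are possible. Halving the result gives
// the objective score; an attack-free (optimal) board scores N*N/2.
int objective(const std::vector<int>& board) {
    int n = static_cast<int>(board.size());
    int score = n * n;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            if (i != j && std::abs(board[i] - board[j]) == std::abs(i - j))
                --score;  // the queen in column i can attack the one in column j
    return score / 2;
}
```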

The next step is to select individuals for mating. There are six selection methods: GARankSelector, GARouletteWheelSelector, GATournamentSelector, GADSSelector, GASRSSelector and GAUniformSelector. GARankSelector should not be used because it always picks the best member of the population, so individuals with low fitness scores will never be chosen. GAUniformSelector is not suitable because every individual would have the same probability of being chosen; individuals with high fitness scores should have a higher probability of being chosen. GARouletteWheelSelector, GATournamentSelector, GADSSelector and GASRSSelector are all suitable, but GARouletteWheelSelector was chosen because it is the default selection method of GASimpleGA.

The genetic algorithm then mates individuals to produce offspring. There are 7 crossover methods in GA1DArrayGenome: UniformCrossover, EvenOddCrossover, OnePointCrossover, TwoPointCrossover, PartialMatchCrossover, OrderCrossover and CycleCrossover. The first four are not suitable because they are simple crossovers and will produce offspring with duplicate numbers. We should use a combinatorial crossover for this problem, and hence PartialMatchCrossover is chosen. I have written two more crossover methods, named OrderBasedCrossover and PositionBasedCrossover, so that their efficiency can be compared.
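A sketch of partial match (PMX) crossover on this representation shows why it suits permutations: the child copies a segment from one parent and repairs the rest by swaps, so it always remains a permutation, which is exactly where the simple crossovers fail. (GAlib's PartialMatchCrossover has a different signature; the explicit cut points here are just for illustration.)

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Swap-based PMX: the child starts as a copy of p2; each value of p1's segment
// [cut1, cut2) is swapped into place, so the result is always a permutation
// and child[cut1..cut2) ends up equal to p1's segment.
std::vector<int> pmx(const std::vector<int>& p1, const std::vector<int>& p2,
                     int cut1, int cut2) {
    int n = static_cast<int>(p1.size());
    std::vector<int> child(p2);
    std::vector<int> pos(n);                // pos[v] = index of value v in child
    for (int i = 0; i < n; ++i) pos[child[i]] = i;
    for (int i = cut1; i < cut2; ++i) {
        int v = p1[i];
        int j = pos[v];                     // where v currently sits in child
        std::swap(child[i], child[j]);
        pos[child[j]] = j;
        pos[child[i]] = i;
    }
    return child;
}
```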

The next step is to mutate the offspring. We can only use mutation methods which will not produce offspring with duplicate numbers. Only the SwapMutator method in GA1DArrayGenome meets this requirement, so it is used in this problem.
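The swap mutator can be sketched similarly; because it only ever exchanges two positions, the genome stays a valid permutation. (`swapMutate` and its per-position probability are illustrative; they mirror, but are not, GAlib's SwapMutator and pmut parameter.)

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <utility>
#include <vector>

// Swap mutation: with probability pmut per position, exchange that position's
// value with a randomly chosen one. Swapping never duplicates a value, so the
// offspring remains a permutation.
void swapMutate(std::vector<int>& g, double pmut, std::mt19937& rng) {
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::uniform_int_distribution<int> pick(0, static_cast<int>(g.size()) - 1);
    for (std::size_t i = 0; i < g.size(); ++i)
        if (coin(rng) < pmut)
            std::swap(g[i], g[pick(rng)]);
}
```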

Each generation, GASimpleGA creates an entirely new population of individuals by selecting from the previous population. Elitism is optional; by default it is on in GASimpleGA, meaning that the genetic algorithm will copy the best individual from the previous population into the current population if no individual in the current population is any better.

There are three methods to determine whether or not a genetic algorithm is finished: TerminateUponGeneration, TerminateUponConvergence and TerminateUponPopConvergence. TerminateUponGeneration is used here because it is customary to measure the efficacy of a genetic algorithm by counting evaluations (population size × number of generations). So, we need the population of the last generation to measure the efficacy, which can’t be done using the other two methods.

To keep things simple, I decided to experiment with only four parameters: probability of crossover, probability of mutation, population size and number of generations. If N is small, the N-Queens problem is too easy to solve and the effects of each parameter are hard to see. However, if N is large, the problem becomes too difficult and the experiments take too long to complete. So, a medium value of N = 20 is used in this experiment to see the effect of each parameter.

First of all, the probability of crossover and the probability of mutation are set to 0.5 and 0.1 respectively. The crossover method used in this experiment is PartialMatchCrossover from GA1DArrayGenome. Only the best objective score in the last generation is recorded. Table 1 shows the average objective scores for different combinations of population size and number of generations. The differences between adjacent column and row values grow larger and larger because objective scores are expected to change a lot at the beginning and then slowly converge to the optimal solution. The objective score for the optimal solution is N×N/2 = 20×20/2 = 200.

Table 1: Average objective scores with probabilities of crossover and mutation of 0.5 and 0.1

From Table 1, we can see that the objective scores don’t change much when we increase the population size. The table also shows that objective scores are unlikely to converge to 200 with a small number of generations and a large population size. On the other hand, they do converge to 200 with a small population size and a large number of generations. From the last column of Table 1, we can see that the objective scores are almost the same across different population sizes. So, it is preferable to use a small population size with a large number of generations for this problem. A population size of 25 and 1000 generations are chosen because they give the best objective scores.

Table 2 shows the average objective scores for different combinations of the probabilities of crossover and mutation. We can see that we get bad objective scores except when the probability of mutation equals 0.1, so the probability of mutation should be set to 0.1. The next step is to choose the probability of crossover. Crossover probabilities of 0.1 and 0.3 both give the best objective scores. A probability of 0.1 is chosen because the scores at its adjacent settings differ less than those around 0.3.

Table 2: Average objective scores with population size of 25 and number of generations of 1000

Having settled on a crossover probability of 0.1 and a mutation probability of 0.1, we should now run some experiments with population sizes around 25 and numbers of generations around 1000 in order to find the most effective run conditions.

From Table 3, we can see that we always get the optimal solution when the number of generations becomes large enough, regardless of the population size. So, it is preferable to choose a small population size. Besides, we can see that the higher the number of generations, the higher the probability of getting the optimal solution.

Table 3: Average objective scores with probabilities of crossover and mutation of 0.1 and 0.1

In conclusion, to get the most effective run conditions, we should set the probability of crossover in the range of 0.0 to 0.3, the probability of mutation to 0.1, the population size in the range of 5 to 35, and the number of generations to be greater than 750 (depending on how high a probability of finding the optimal solution we want).

The probabilities of crossover and mutation are again set to 0.5 and 0.1 respectively. The crossover method used in this experiment is Order-Based Crossover, which I have written myself. From the previous experiments, we know that the population size need not be large and the number of generations should be larger. But that may not be the case for order-based crossover, so the same experiments are repeated for it.
