In this module, we looked at the use of artificial intelligence within games and learnt about the different techniques that can be used to support AI behaviour. The main focus was on techniques that would allow an AI to learn and improve over time.
After learning the concepts behind these techniques, we were introduced to a program called TORCS (The Open Racing Car Simulator). This simulator had been adapted into an environment in which an AI can learn to become a better driver on the track.
We were then handed a modified version of this program (to make it slightly easier for us) and asked to apply the techniques covered, then write a research paper on the techniques we used and how effective they were in this program.
To run my tests I implemented a Roulette Wheel Selection (RWS) method, a double-point crossover method, and an elitist method. I also had to choose which of the 22 parameters the AI would be allowed to evolve over the course of the tests, and to set the mutation rate and scale.
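To give a sense of how these three operators fit together, here is a minimal sketch in Python. The function names, the fitness representation (a list of non-negative scores), and the elite count are my own illustrative assumptions, not the exact implementation used in the project:

```python
import random

def roulette_select(population, fitnesses):
    """Roulette Wheel Selection: pick an individual with probability
    proportional to its fitness (assumes non-negative fitness values)."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # fallback for floating-point rounding

def two_point_crossover(parent_a, parent_b):
    """Double-point crossover: choose two cut points and swap the
    middle segment between the two parents."""
    size = len(parent_a)
    p1, p2 = sorted(random.sample(range(1, size), 2))
    child_a = parent_a[:p1] + parent_b[p1:p2] + parent_a[p2:]
    child_b = parent_b[:p1] + parent_a[p1:p2] + parent_b[p2:]
    return child_a, child_b

def next_generation(population, fitnesses, elite_count=2):
    """Elitism: copy the best `elite_count` individuals unchanged, then
    fill the rest of the generation via selection and crossover."""
    ranked = [ind for _, ind in sorted(zip(fitnesses, population),
                                       key=lambda pair: pair[0],
                                       reverse=True)]
    new_pop = ranked[:elite_count]
    while len(new_pop) < len(population):
        a = roulette_select(population, fitnesses)
        b = roulette_select(population, fitnesses)
        child_a, child_b = two_point_crossover(a, b)
        new_pop.extend([child_a, child_b])
    return new_pop[:len(population)]
```

In this sketch each individual would be a list of the evolvable car parameters; dropping the crossover call or setting `elite_count=0` gives the "without crossover" and "without elitism" variants tested in the experiments below.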
My first experiment examined the difference between using RWS with crossover and without crossover.
My second experiment examined the use of RWS with elitism and without elitism.
My third and final experiment used all of the techniques mentioned while halving and doubling the mutation rate.
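The mutation rate and scale from the third experiment can be illustrated with a common per-gene Gaussian mutation operator. This is a sketch under my own assumptions (the project's actual operator and values are not specified here): the rate is the per-parameter probability of mutating, and the scale is the standard deviation of the perturbation.

```python
import random

def mutate(genome, rate=0.1, scale=0.05):
    """Per-gene mutation: with probability `rate`, perturb each parameter
    by Gaussian noise with standard deviation `scale`.
    (rate and scale values here are illustrative only.)"""
    return [g + random.gauss(0.0, scale) if random.random() < rate else g
            for g in genome]

# Experiment three keeps everything else fixed and varies only the rate:
base_rate = 0.1
halved = mutate([0.5] * 22, rate=base_rate / 2)
doubled = mutate([0.5] * 22, rate=base_rate * 2)
```

Halving the rate makes the search more conservative, while doubling it injects more exploration at the cost of disrupting good genomes more often.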
For more in-depth documentation of the steps and processes covered in this project, please see the document Using Evolutionary Computation To Optimise Cars.
As there is little to show in video format apart from the returned data, there is no video for this project.