Evolutionary Programming: Ship Course Plotter
See the GitHub repository for this project here.
When I was first starting to hear about Machine Learning and AI, evolutionary programming always came up. Even though evolution is a rather slow optimization strategy, it’s a lot better than no optimization strategy, and I think it’s a great way to be introduced to the concept of optimization.
This project, in particular, uses evolution to optimize the starting parameters (velocity and angle) of a spaceship so that it arrives at some goal planet in a 2-dimensional space. A randomly generated field of planets lies between the ship and the goal, and the ship is subject to Newtonian gravitation from every planet. It quickly becomes apparent why this makes the problem hard: without guessing and testing candidate solutions, or some very complex math, there is no feasible way to calculate the optimal starting parameters directly.
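To make the setup concrete, here is a minimal sketch of the kind of simulation involved: the ship launches from the origin with a given angle and speed, each planet pulls on it with inverse-square gravity, and the score of a plan is the ship's closest approach to the goal. The function and parameter names (and the unit value of G) are illustrative assumptions, not the project's actual code.

```python
import math

G = 1.0  # gravitational constant in simulation units (assumed)

def simulate(angle, speed, planets, goal, steps=5000, dt=0.01):
    """Fly the ship from the origin; return its closest approach to the goal.

    planets: list of (x, y, mass) tuples; goal: (x, y) tuple.
    """
    x, y = 0.0, 0.0
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    best = math.hypot(goal[0] - x, goal[1] - y)
    for _ in range(steps):
        ax = ay = 0.0
        for (px, py, m) in planets:
            dx, dy = px - x, py - y
            r = math.hypot(dx, dy)
            if r < 1e-6:
                continue            # avoid the singularity at a planet's center
            a = G * m / (r * r)     # inverse-square gravitational acceleration
            ax += a * dx / r        # accelerate toward each planet
            ay += a * dy / r
        vx += ax * dt               # simple Euler integration step
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        best = min(best, math.hypot(goal[0] - x, goal[1] - y))
    return best
```

A lower return value means a better plan; zero would mean the ship passed through the goal exactly.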
The project was initially inspired by this scene from Ridley Scott’s The Martian, where Donald Glover’s character describes a plan to use a gravity assist (“gravity slingshot”) to get Matt Damon’s character home from Mars in time. They talk about using supercomputers to calculate the trajectories, and in the end, they use the gravity assist to save the day.
So, I got to work making a program to optimize a ship’s path through a randomly generated 2-dimensional solar system.
The basic layout is pretty simple. Each “plan” for a ship course has two variables: the angle at which the ship takes off and the velocity with which it takes off. Every generation, a new “plan” is generated based on the winner of the previous generation. The new plan is a “mutated” version, meaning that its parameters are altered by some random amount. The new plan and the old plan “compete”, and whichever one ends up closest to the goal planet “survives” to the next generation and the cycle starts again.
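The loop described above amounts to what the evolutionary-computation literature calls a (1+1) evolution strategy: one parent, one mutated child, and the better of the two survives. A sketch of that cycle, assuming a `fitness` function that scores a plan by its distance from the goal (in the project, that would be the gravity simulation), with hypothetical parameter names:

```python
import math
import random

def evolve(fitness, generations=200, sigma=0.1, seed=0):
    """(1+1) evolution: mutate the surviving plan; keep whichever scores lower."""
    rng = random.Random(seed)
    # A plan is (angle, speed); start from a random one.
    plan = (rng.uniform(0.0, 2 * math.pi), rng.uniform(0.5, 5.0))
    best = fitness(plan)
    for _ in range(generations):
        angle, speed = plan
        child = (angle + rng.gauss(0, sigma),            # mutate the angle
                 max(0.1, speed + rng.gauss(0, sigma)))  # mutate speed, keep it positive
        score = fitness(child)
        if score < best:        # the child "wins" the competition
            plan, best = child, score
    return plan, best
```

The only tunable here is `sigma`, the mutation size: too small and progress is glacial, too large and the child rarely lands anywhere near the parent's solution.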
Although the approach certainly works on this scale, it is highly inefficient. Unlike more advanced optimization techniques, evolution blindly tries slightly different alternatives over and over until it finds a slightly better one. This makes evolution take far longer, and, depending on the problem, gives it a real chance of getting stuck at a local optimum and never reaching the global one.
This evolutionary approach is by no means groundbreaking. It seems to have had its heyday in the 1990s, but today gradient descent is far more popular given its vastly higher efficiency. Still, as an exercise in understanding gravitational mechanics, the basics of optimization, and in having fun, this was quite enlightening. And evolution remains a valid optimization method for certain problems, particularly ones where gradients are unavailable.