One of the things I like about being a hobby programmer is that you can work on projects of your own choosing without having to meet any arbitrary deadlines. It's even more fun when I can combine my love of programming with a topic which interests me, such as evolution. This is how a short-lived experiment called "Neural Bots" got started.
Neural Bots was an experiment in evolving artificial intelligence. Using a type of simulated brain called a "neural network", I wondered if it would be possible to create some simple little on-screen critters, whose actions could be controlled by these artificial brains.
Then, I thought, what if these brains were passed from generation to generation, but sometimes with a slight error - a mutation? And what if these guys were competing for the same, limited resources which they all needed to survive? Could this "survival of the fittest" really work? Was Darwin really right!!?!?
Well, of course he was. About the concept of evolution, at least. But, still. It just sounded cool, OK?
So before I begin, I hope you will be patient enough to endure a few paragraphs of explanation. Or not.
Artificial neural networks: a (very) brief introduction.
A graph of a neural network extracted from a Neural Bot after a period of evolution. Blue nodes are input (sensor) neurons and red nodes are output (control) neurons. The numbers next to the connection lines indicate the strength of the "synaptic" connections.
At the most basic level, an artificial neural network (ANN) consists of a soup of "neurons", which are connected, more or less haphazardly, to other neurons in the network. The connections are usually uni-directional (a signal can travel from A to B, but not the other way around) and have a "weight". A connection with a weight of 0.5 means a signal will have half the strength when it reaches its destination as it had when it was transmitted.
When a neuron is activated, it combines the signals of all the incoming connections (usually using simple addition) and then performs a simple mathematical operation on the result. This operation is called the "transfer function" and is usually a non-linear function, with the sigmoid function being a favourite of ANN-nerds. This new signal is then propagated across all the outgoing connections, where the process repeats for all the down-stream neurons. The initial "catalyst" signal is provided by special "sensor" neurons, which are activated by external events (in this case, such as light hitting a photo sensor in an eye).
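In code, a single neuron update is only a few lines. This is a minimal sketch of the weighted-sum-plus-sigmoid step described above, not the original Neural Bots implementation (which is lost); the function names are my own.

```python
import math

def sigmoid(x):
    """The classic non-linear transfer function, squashing any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def activate(incoming):
    """Combine incoming signals by simple addition, then apply the transfer function.

    `incoming` is a list of (signal, weight) pairs from upstream connections;
    each signal arrives scaled by its connection's weight.
    """
    total = sum(signal * weight for signal, weight in incoming)
    return sigmoid(total)

# A weight of 0.5 halves the signal it carries before the sum:
out = activate([(1.0, 0.5), (0.8, -1.2)])
```

The same `activate` call is then repeated for every downstream neuron, with sensor neurons supplying the initial signals.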
To make a neural network evolvable, all we need to do is mutate it in some way - selection will take care of the rest. Therefore, in Neural Bots, there was no "DNA" equivalent. All reproduction was through asexual mitosis. When a bot divided, the new bot would receive an identical copy of its parent's brain, with a slight chance of a mutation.
A mutation, in this case, could be a small alteration to one of the connection weights, or the addition or deletion of neurons or connections. In this sense, the structure of the neural network was the DNA. If the word "gene" wasn't vague enough as it is, Neural Bots took it to a whole new level.
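The three kinds of mutation mentioned above can be sketched roughly as follows. This assumes a brain stored as a neuron list plus `[src, dst, weight]` connection triples; the representation, mutation rate, and perturbation sizes are all illustrative guesses, not the original code.

```python
import random

def mutate(neurons, connections, rate=0.05):
    """Copy a parent brain, with a slight chance of one alteration.

    Possible mutations: nudge a connection weight, add a new random
    connection, or delete an existing one. All probabilities here
    are placeholders.
    """
    neurons = list(neurons)
    connections = [c[:] for c in connections]
    if random.random() < rate:
        kind = random.choice(["weight", "add_conn", "del_conn"])
        if kind == "weight" and connections:
            conn = random.choice(connections)
            conn[2] += random.gauss(0.0, 0.1)   # small nudge to one weight
        elif kind == "add_conn" and len(neurons) >= 2:
            src, dst = random.sample(neurons, 2)
            connections.append([src, dst, random.uniform(-1.0, 1.0)])
        elif kind == "del_conn" and connections:
            connections.remove(random.choice(connections))
    return neurons, connections
```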
Welcome to Botsville...
The final step was the creation of an artificial 2D world (and I did it in less than seven days) populated by the little fish-like bots, each with their own ANN "brain". The primary input to the brain comes from a pair of compound eyes, each consisting of 192 "photo-sensors" - 64 each for red, green and blue. Each of these photo-sensors has its own neuron in the brain, which is activated depending on the amount of light of the appropriate colour hitting the sensor.
The primary output of the brain controls two thrusters - one on each side of the bot. Each thruster can fire independently of the other, forwards or in reverse, allowing the bots to control their movement through the 2D world.
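The sensory and motor interfaces described in the last two paragraphs boil down to a simple flattening on the way in and a re-centring on the way out. This is a hypothetical sketch of that plumbing; the exact encoding Neural Bots used is my assumption.

```python
def sense(eye_pixels):
    """Flatten one compound eye into sensor-neuron activations.

    `eye_pixels` is 64 (r, g, b) light samples; each colour channel
    feeds its own input neuron, giving 192 activations per eye.
    """
    return [channel for pixel in eye_pixels for channel in pixel]

def drive(left_out, right_out):
    """Map the two thruster output neurons onto signed thrusts.

    Assuming neuron outputs in (0, 1): 0.5 is idle, below 0.5 fires
    the thruster in reverse, above 0.5 fires it forward.
    """
    return 2.0 * left_out - 1.0, 2.0 * right_out - 1.0
```

Firing the two thrusters at different strengths is what lets a bot turn as well as move.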
The life-blood of the bots is energy. This is the mystical type of energy that new-agers often waffle about except, in this world, it's legit. They are born with a certain amount of it, and can gain more by eating food pellets. However, they lose energy depending on the amount of activity in their brain (hey, firing neurons ain't free!) and how much they are using their thrusters. This introduces an interesting balance between sitting still to save energy, and foraging for food.
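The energy drain per tick can be written as a simple cost function. The coefficients below are invented for illustration; the real balance is whatever makes both "sit still" and "forage" viable strategies.

```python
def energy_cost(neural_activity, left_thrust, right_thrust,
                neuron_cost=0.01, thrust_cost=0.1):
    """Energy drained in one simulation tick: thinking and moving both cost.

    `neural_activity` is a measure of how many neurons fired;
    the two thrust values are signed, so reverse burns fuel too.
    Both coefficients are illustrative placeholders.
    """
    return (neuron_cost * neural_activity
            + thrust_cost * (abs(left_thrust) + abs(right_thrust)))
```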
The bots also have the ability to change their colour (also under brain control) by cycling along a "red-green-blue" window. To make things a little devious, when two bots collide with each other there is an energy transfer between them. The magnitude of the transfer depends on the force of the collision, and the direction depends on the colour of the two bots. As a general guide, red is superior to green, green superior to blue and blue superior to red, making a nice little "scissors-paper-rock" cycle.
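The scissors-paper-rock rule can be expressed as a small dominance table. This is a minimal sketch of the collision transfer described above, assuming the same-colour case transfers nothing and using an invented `rate` coefficient.

```python
# Colour dominance cycle: red beats green, green beats blue, blue beats red.
BEATS = {"red": "green", "green": "blue", "blue": "red"}

def collision_transfer(colour_a, colour_b, force, rate=0.1):
    """Signed energy flow from bot B to bot A on collision.

    Positive means A drains B; magnitude scales with the force of
    the impact. `rate` is an illustrative coefficient.
    """
    if BEATS[colour_a] == colour_b:
        return rate * force       # A dominates: energy flows to A
    if BEATS[colour_b] == colour_a:
        return -rate * force      # B dominates: energy flows to B
    return 0.0                    # same colour: treated as no net transfer
```

Because every colour both beats and loses to one other colour, no single colour can take over the population for free.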
Finally, to complete the life cycle, when a bot dies from low energy levels it gets turned into a small amount of food for other bots. It also gets replaced by a random spawning of one of the other, non-dead, bots. It is important to note here that this is the only selective step in the entire simulation. Although the choice itself is random, a bot needs to be alive to have a chance, and the longer it stays alive, the better its odds of reproduction. In other words, there is no artificial ranking of the bots according to some measure of "fitness". Stayin' alive is the only game in town.
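The replacement rule above amounts to a uniform choice over the living, with no fitness score anywhere in sight. A hedged sketch (the bot representation is invented for illustration):

```python
import random

def respawn(bots):
    """Replace dead bots with offspring of uniformly chosen survivors.

    There is no fitness function: being picked as a parent is uniform
    over the living population, so longevity pays off only by keeping
    a bot in the pool across more respawn events.
    """
    living = [b for b in bots if b["energy"] > 0.0]
    return [b if b["energy"] > 0.0 else dict(random.choice(living))
            for b in bots]
```

(A real implementation would also copy and mutate the parent's brain at this point, and drop a food pellet where the dead bot was.)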
OK - enough with the babble. Let's see what happened!
This is the first video I recorded of the bots in action. Skip the first minute if you don't want a short rehash of what I just wrote above.
These bots, like in all the simulations, started off with a completely random brain. This video was captured after about half an hour of evolution, and already you can see some non-random behaviour. The bots have pretty much learned to avoid the pink obstacles, which sap their energy. This usually happened very quickly at the start of any run, since the idiots who run straight into them very quickly find themselves on the short-list for a Darwin Award.
Perhaps the most interesting event occurs at 3:20 where, I swear to the Invisible Pink Unicorn, one of the bots is stalking another - waiting for it to die and then swooping in on his entrails. This was a behaviour that repeatedly came up in these first runs. The rule of thumb seemed to be "hang close to the other guys - but not too close". It was really quite freaky to see this swarming take place! It was almost a Dr. Frankenstein moment.
This second video perhaps isn't as exciting as the first. I decided to give the bots guns, to allow them to actively seek and destroy, but it never seemed to pan out. I now have a good idea why, but I will let you watch the video first:
There is an interesting moment at 2:40 when one of the bots tries to approach a food pellet (good) which is very close to a pink obstacle (bad). The little guy eventually succeeds when he turns to avoid the obstacle. Another food pellet, which didn't seem to be the source of his initial interest, happens to drift into his field of view and he chows down.
But as for the guns, after an initial period of loose-cannon activity (I guess some guys are just trigger happy), they always seemed to end up calling a cease-fire. This seemed odd to me... How could peace spontaneously break out across the world? I now have a good idea, after considering some ideas about the selfish gene.
I hypothesised that, given the bots have no way of determining who is closely related to them and who is not, a "gene" which encourages firing at other bots will likely soon wipe itself out. In short, given the small population sizes of this simulation (usually 60-100 bots), there is a good chance that anyone you meet will be closely related to you, and thus likely to contain a copy of the "shoot first, ask questions later" gene if you do. It is, quite literally, a small world.
I decided to test this hypothesis by removing the colour-changing ability of the bots, and fixing them into red, green and blue "teams", with each team having its own ancestral line (ie: a red bot never gave birth to a blue or green one).
Although they still had a lot of difficulty aiming their guns correctly, there was no tendency towards a cease-fire in these new simulations. The downside was, the selection pressure regarding guns was so overwhelming that other selection pressures were washed out, resulting in a lack of anything interesting happening. Unfortunately, it was so uninteresting that I never bothered recording it. After all, this wasn't intended to be a serious experiment. I am pleased, however, that I managed to "cure" them of their desire for peace.
The final video might be interesting to people who like watching stuff blow up. To make aiming their guns a little easier, I exchanged the bullets for pressure waves. Another interesting feature I was toying with at the time was the ability to "train" the bots manually by controlling one with the mouse and, literally, showing it what to do. It had mixed results:
And that was the end of my Neural Bot experiments. I should also comment that one of the pleasantly surprising, but somewhat irritating things about programming a simulation like this, is that the bots had this annoying tendency to find bugs in my code and exploit them to survive.
In one case, I noticed that bots were continually ramming each other and not dying. It turned out I had made a mistake in the collision energy transfer code, which resulted in the energy transfer being over 100%. By ramming each other in turn, they gradually built up their energy! The cheeky buggers...
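The fix for a bug like that is a one-line clamp. Since the original source is gone, this is a reconstruction of the idea, not the actual code:

```python
def transfer(attacker, victim, amount):
    """Move collision energy between two bots, clamped to what exists.

    The bug: without the clamp, `amount` could exceed the victim's
    remaining energy (an effective transfer of over 100%), so two bots
    ramming each other in turn created energy from nothing.
    """
    amount = min(amount, victim["energy"])   # the missing clamp
    victim["energy"] -= amount
    attacker["energy"] += amount
```

With the clamp in place, the total energy in a collision is conserved, and the ramming exploit stops paying.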
In case anyone asks, I have tried dusting off the source code, but it has rotted quite badly since I last remember seeing it. Perhaps one day I can re-write this from scratch and release it publicly. It would sure make a pretty cool screen-saver! If there are any programmers here interested in collaborating on such an effort, send me a message!
Perhaps even, one day, these guys could evolve to a point where they can invent religion...