I took the plunge and added the neuron map to the monsters last night. At the moment, though, it's not used in the code; I'll connect it to the AI on Sunday or Monday, after the weekend. So the monsters now have the ability to learn unsupervised, but don't yet.
The 2D neuron map will be used as a weight matrix alongside dynamic rules built from expression trees. When a rule behaviour needs to be chosen, the neuron map will feed into the decision making.
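To make the idea concrete, here is a minimal Python sketch of how a 2D neuron map could bias rule selection. All names here (NeuronMap, pick_rule, the placeholder rule list) are illustrative assumptions, not SurviveRL's actual code:

```python
import random

class NeuronMap:
    """Hypothetical 2D map where each cell holds one weight per rule."""

    def __init__(self, width, height, n_rules):
        self.width, self.height = width, height
        # start every rule at equal weight in every cell
        self.weights = [[[1.0] * n_rules for _ in range(width)]
                        for _ in range(height)]

    def pick_rule(self, x, y, rng=random):
        """Weighted random choice among rules, biased by the cell's weights."""
        w = self.weights[y][x]
        return rng.choices(range(len(w)), weights=w, k=1)[0]

RULES = ["melee", "ranged", "flee"]      # placeholder rule behaviours
m = NeuronMap(4, 4, len(RULES))
m.weights[0][0] = [0.1, 5.0, 0.1]        # this cell strongly prefers "ranged"
```

The map doesn't dictate the rule outright; it just tilts the odds, so the expression-tree rules still drive the behaviour while learned weights shape which branch fires most often.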
Scenario:
If a player always seems to attack early on without magic, for example, the monster race will have learnt this and can decide how to attack, instead of relying on an uneducated rule set.
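The unsupervised part could look something like the following: each observed player action nudges a cell's weight vector toward the observed pattern, a simplified self-organizing-map style update. This is a sketch under my own assumptions (the pattern encoding and update_map are hypothetical):

```python
def update_map(weights, observed, lr=0.2):
    """Move a cell's weight vector a step toward the observed pattern."""
    return [w + lr * (o - w) for w, o in zip(weights, observed)]

# pattern encoding (illustrative): [attacks_early, uses_magic]
cell = [0.5, 0.5]                        # neutral starting belief
for _ in range(10):
    cell = update_map(cell, [1.0, 0.0])  # repeated early, no-magic attacks
# after enough observations, the cell reflects the player's habit
```

With a learning rate of 0.2 the cell converges quickly, which matches the scenario: a few fights are enough for the race to "know" the player skips magic.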
For the future:
I'll eventually put the AI in the cloud, and decouple it from the server and client, using it as a plugin. As players throughout the world play SurviveRL, it will train the collective cloud AI, so that all players will notice how the monsters have adapted based on the global player base.
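Decoupling the AI as a plugin might boil down to an interface like this sketch, where a local backend and a future cloud backend are interchangeable. Every name here (MonsterAI, choose_action, report_outcome) is an assumption for illustration only:

```python
from abc import ABC, abstractmethod

class MonsterAI(ABC):
    """Hypothetical plugin boundary between the game and any AI backend."""

    @abstractmethod
    def choose_action(self, monster_state, player_state):
        """Return the action the monster should take this turn."""

    @abstractmethod
    def report_outcome(self, outcome):
        """Feed a fight's outcome back for training (local or global)."""

class LocalAI(MonsterAI):
    def choose_action(self, monster_state, player_state):
        return "melee"          # placeholder local policy
    def report_outcome(self, outcome):
        pass                    # local backend discards training data

ai: MonsterAI = LocalAI()       # a CloudAI implementation drops in later
```

The server and client would only ever talk to MonsterAI, so swapping in a cloud-hosted implementation later shouldn't touch either of them.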
This will be about a month's work, and I'll do it a few months from now, but it's where I want to take the AI in SurviveRL.
It was a main reason why I chose an ASCII terminal for now: to concentrate on the mechanics.
Again, any comments on this are appreciated. I'm capable of coding all of this quickly, but I don't want to waste time if it seems like overkill or not fun to play.