Oh, and it might be better not to calculate LOS for every agent, as that's expensive. First check whether it has any enemies close by at all. That check can be optimized nicely by dividing the map into blocks, say 20x20 tiles each, and only looking at adjacent blocks for enemies.

In the system I used, whenever a critter performs an action, it checks whether it's in line of sight of nearby critters by testing a straight line for obstacles. Critters who see it then test a series of conditions/triggers based on their current state as well as their general world view (e.g. an attacking state may have a condition to like anyone inflicting harm on its own enemy; a man-eating monster may be aggravated at the mere sight of any human whatsoever). This removes the load of performing a complete FOV calculation for every critter every round. Each critter keeps a register of other critters it knows about, with info on how well liked they are (bias). I found this kind of system could be made very flexible without getting too complex. It doesn't account for items lying inconspicuously around, though. I never got as far in Squirm, but a possible solution might be to let items emit a kind of beacon every three rounds or so, alerting nearby critters of their existence.
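A rough sketch of the block idea in Python; the Critter fields (x, y, faction) and the SpatialGrid class and its methods are made up here for illustration, not code from the system described above.

from collections import defaultdict
from dataclasses import dataclass

BLOCK = 20  # tiles per block side, as suggested above

@dataclass(eq=False)  # identity-based hashing, so critters can live in sets
class Critter:
    x: int
    y: int
    faction: str

class SpatialGrid:
    """Buckets critters into BLOCK x BLOCK cells, so the 'any enemies close by?'
    test only has to look at a critter's own block and the 8 adjacent ones."""

    def __init__(self):
        self.cells = defaultdict(set)

    def _key(self, x, y):
        return (x // BLOCK, y // BLOCK)

    def insert(self, critter):
        self.cells[self._key(critter.x, critter.y)].add(critter)

    def move(self, critter, new_x, new_y):
        # Re-bucket only when the critter crosses a block boundary.
        old, new = self._key(critter.x, critter.y), self._key(new_x, new_y)
        if old != new:
            self.cells[old].discard(critter)
            self.cells[new].add(critter)
        critter.x, critter.y = new_x, new_y

    def nearby_enemies(self, critter):
        """Cheap candidate list for the more expensive LOS check."""
        bx, by = self._key(critter.x, critter.y)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for other in self.cells[(bx + dx, by + dy)]:
                    if other is not critter and other.faction != critter.faction:
                        yield other

An agent then only runs the LOS test against nearby_enemies(), instead of against every critter on the map.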
I just want to have something that behaves in a reasonable and expected way...
If I start with a hack, then any added behaviour would be hack-upon-hack-upon-hack etc etc.
Name | % chance | Effect |
Heal | 100% - HP% | Gain 10 HP |
Magic missile | 25% for each enemy in range | Create a Magic Missile targeting a random enemy in range |
Blink | 25% if low HP and in range of a melee or ranged attack, otherwise 0% | Teleport nearby at random |
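For what it's worth, that table could be read off almost literally like this (Python); the function names, the "low HP" threshold, and the roll-each-entry-in-order policy are all placeholder assumptions, not anything from the post above.

import random

def heal_chance(hp, max_hp):
    # "100% - HP%": the lower the health, the more likely to heal.
    return 100 - (100 * hp // max_hp)

def magic_missile_chance(enemies_in_range):
    # "25% for each enemy in range", capped at 100%.
    return min(100, 25 * len(enemies_in_range))

def blink_chance(hp, max_hp, threatened):
    # "25% if low HP and in range of melee or ranged attack, otherwise 0%".
    low_hp = hp < max_hp // 4  # placeholder definition of "low HP"
    return 25 if (low_hp and threatened) else 0

def pick_action(hp, max_hp, enemies_in_range, threatened, rng=random):
    table = [
        ("heal", heal_chance(hp, max_hp)),
        ("magic_missile", magic_missile_chance(enemies_in_range)),
        ("blink", blink_chance(hp, max_hp, threatened)),
    ]
    # Roll each entry independently, in table order; fall back to a normal move/attack.
    for name, chance in table:
        if rng.randint(1, 100) <= chance:
            return name
    return "default"

Rolling the entries in a fixed order means the order matters (heal gets first shot); a single weighted pick over the table would be the other obvious way to do it.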
It's good to know the trick to clear an array in O(1) cost, so you don't have to initialize a 300x300 array for every Dijkstra run.
reaver, if you haven't read it already, check out this interesting article series (http://roguebasin.com/index.php?title=Roguelike_Intelligence) ...
I've tried state machines, personality stats, "needs" systems, and I think everything other than neural nets (please don't use neural nets). What I've found works best for me is a big old mess of if statements.
It's also useful to move logic from the creature class into the items, abilities, and spells. So creatures don't know how to use specific spells, but they ask all their spells if they should be used and how to use them.
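A rough sketch of that delegation in Python; the Spell interface, the HealSelf example, and the "pick the most eager spell" rule are invented for illustration, not a description of anyone's actual code.

class Spell:
    def desirability(self, caster, world):
        """Return a score > 0 if the spell wants to be cast now, else 0."""
        raise NotImplementedError

    def cast(self, caster, world):
        raise NotImplementedError

class HealSelf(Spell):
    def desirability(self, caster, world):
        # More attractive the more hurt the caster is.
        return 100 - (100 * caster.hp // caster.max_hp)

    def cast(self, caster, world):
        caster.hp = min(caster.max_hp, caster.hp + 10)

def choose_spell(caster, world):
    """The creature knows nothing about individual spells; it just asks them."""
    scored = [(spell.desirability(caster, world), spell) for spell in caster.spells]
    score, best = max(scored, key=lambda pair: pair[0], default=(0, None))
    return best if score > 0 else None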
Quote: "... and the trick is? :)"
Have two arrays, distance and dirty, and a counter. Before every Dijkstra run, increase the counter. Reading and writing the distance is as follows:
// Write: store the value and stamp the cell with the current run's counter.
setDistance(position, value) {
    distance[position] = value;
    dirty[position] = counter;
}

// Read: a stale stamp means the cell hasn't been written during this run,
// so it counts as "not visited yet" (infinite distance).
getDistance(position) {
    if (dirty[position] < counter)
        return Infinity;
    else
        return distance[position];
}
One option on smaller, static maps is simply to precompute shortest paths between all nodes. For each node, compute a distance field and store it.
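A sketch of that precomputation in Python, using plain BFS (so uniform step costs); the grid layout and function names are assumptions. The obvious cost is memory: one full distance field per walkable tile, fine for small static maps but hopeless for something like 300x300.

from collections import deque

def distance_field(grid, start):
    """BFS distance from `start` to every reachable tile.
    `grid[y][x]` is True for walkable tiles."""
    INF = float("inf")
    dist = [[INF] * len(grid[0]) for _ in grid]
    dist[start[1]][start[0]] = 0
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] and dist[ny][nx] == INF):
                dist[ny][nx] = dist[y][x] + 1
                queue.append((nx, ny))
    return dist

def precompute_all(grid):
    """One distance field per walkable tile, stored up front."""
    return {(x, y): distance_field(grid, (x, y))
            for y, row in enumerate(grid)
            for x, walkable in enumerate(row) if walkable}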
Quote: "I just want to have something that behaves in a reasonable and expected way..."
I would say that's not what you want. When AI behaves in a clear algorithmic way, it can have a weakness players can figure out. Sometimes that can be a reasonable feature, but I would just call it boring.
Quote: "The trick doubles the cost of each step in the pathfinding algorithm"
No it doesn't; the dominating operation in A* is popping from the priority queue.

Quote: "..., so you can only expect performance improvements if the number of tiles explored is less than half the number of tiles on the map. There's no improvement in asymptotic complexity unless the map is extremely sparse."
The OP was talking about agents with limited FoV on a large map (300x300), so the paths will be fairly short.
Sorry, I thought you were talking about Dijkstra and/or in general about clearing any array that needs to be updated regularly. After all, you didn't mention A* at all, and your wording:

Quote: "It's good to know the trick to clear an array in O(1) cost, so you don't have to initialize a 300x300 array for every Dijkstra run."

made it sound like the trick was much more broadly applicable. In any case, I'd file it under micro-optimisation.
If you run Dijkstra on a small subregion of a large map, you can either do it on an array with equal size to the small subregion, or on the corresponding region of the whole array (with a rectangle describing which tiles the results are valid for). In either case, the amount of re-initialisation to be done is miniscule.
Quote: "If you run Dijkstra on a small subregion of a large map, you can either do it on an array with equal size to the small subregion, or on the corresponding region of the whole array (with a rectangle describing which tiles the results are valid for). In either case, the amount of re-initialisation to be done is miniscule."
That's a reasonable approach and exactly what I was doing till a few days ago. The problem, in my case, was that once in a while the path needed to leave the subregion, and the algorithm couldn't find it. Increasing the margin helps, but the problem goes away completely if you just use the whole map with the trick to re-initialize it. That's the "pro" way of doing it, I believe.
Quote: "What behaviours would you expect from the smart enemies? Do they really need to be smart, or can you simulate smartness by giving them advantages over other monsters and the player? What differences in behaviour do you expect from the dumb enemies?"
Ok, my original post might have made it sound like I want everything and the kitchen sink, so I might need to elaborate:
My idea is to group the creatures into two main categories of AI complexity:
- Stupid to moderately smart: all monsters
- NPC: characters that would play like the player
I'd like to have a good auto-play bot for the NPCs above; I don't care about perfection or true AI or whatever, but it shouldn't be visibly dumb.
I don't want either the monsters or the NPC bots to cheat much, but I'm not strictly prohibiting it.
I'd like to be able to run a number of those bots (a dozen or two) simultaneously in a level.
Levels could be large, and large levels with many smartypants bots call for fast performance.
I don't think I'm asking for the sky and stars; I'm just trying to think of an extensible framework that can, for starters, satisfy *a few* of my requirements, and that I won't have to rewrite from scratch in a month.
Quote: "What behaviours would you expect from the smart enemies? Do they really need to be smart, or can you simulate smartness by giving them advantages over other monsters and the player? What differences in behaviour do you expect from the dumb enemies?"
There's an opposition between individual-based AI routines that scale with the number of agents, and group/map-based AI routines that scale with the map size and number of different behaviours that must be supported.
Individual-focussed AI code may hinder your ability to add lots of dumb enemies, and group/map-focussed code may hinder the flexibility of the smart enemies. A dual approach may be twice as much work.
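To illustrate the map-focussed end of that trade-off: the standard shared distance map (the "Dijkstra map" idea, not something proposed in this thread) is built once per turn at a cost proportional to map size, and any number of dumb monsters can reuse it by just stepping downhill, while the few smart agents keep their own individual routines. A minimal sketch, assuming a distance-to-player grid like the BFS field sketched earlier:

def step_downhill(dist, x, y):
    """Move toward the player by walking downhill on a shared distance field.
    `dist` is a distance-to-player grid, e.g. one BFS/Dijkstra field per turn."""
    best = (dist[y][x], (x, y))
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= ny < len(dist) and 0 <= nx < len(dist[0]):
            best = min(best, (dist[ny][nx], (nx, ny)))
    return best[1]  # stays put if no neighbouring tile is better

The field costs one map-wide pass per turn no matter how many monsters read it, which is where the "scales with map size" part of the trade-off comes from.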
... it sounds a bit counter-intuitive though (splitting the AI/decision-making between the creature and its items), and spells/abilities are something that will probably come quite a bit later on :)