Temple of The Roguelike Forums

Development => Programming => Topic started by: reaver on January 26, 2014, 03:13:45 PM

Title: AI for dynamic environments
Post by: reaver on January 26, 2014, 03:13:45 PM
Hi all,

First post here; I've been thinking about and developing roguelike-related things lately. I'm currently trying to tackle the subject of a goal-driven AI that plays "fair" (unexplored bits are unknown, and fog of war applies as well). Imagine the scenario below:

I spawn 2 agents (A and B) in a dungeon.
A would prioritize fleeing from B and then finding the exit of the level, and B would prioritize chasing A, exploring the dungeon until it finds A.
The AIs explore the level as they go, and they obey fog of war rules: they only know the current state of what's in their LoS -- the rest, even if explored in the past, remains in the state it was in when last seen. That last bit is the "dynamic" aspect of my post subject: the map can change without the agent knowing it, unless it explores the changed area again.

Now, how would one go implement this in a scalable way? The above scenario has 2 agents, but ideally I'd like to have 50 of those or more. My current thoughts (inefficient) are roughly described below:
-------------------
At each turn, each AI scans their LoS.
Every point that's walkable and has unexplored neighbours is an exploration goal.
Points that represent enemies or the exit are goals as well, with negative and positive weights respectively (avoid/goto).
All these goals are weighted by distance.
I pick the 'best' goal out of these, run A*, and get the first point.
rinse and repeat.
-------------------
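For concreteness, here's a minimal Python sketch of the goal-picking step above. The grid encoding, the weight values, and the `weight / (1 + distance)` scoring formula are all made-up assumptions; a real version would run over the agent's remembered map rather than the true one, and would use A* for the actual move.

```python
from collections import deque

def bfs_distances(grid, start):
    """Breadth-first distances from `start` over walkable '.' tiles (4-way)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = n
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == '.' and n not in dist):
                dist[n] = dist[(x, y)] + 1
                queue.append(n)
    return dist

def pick_goal(agent_pos, goals, grid):
    """goals: list of (position, weight); negative weight marks avoid-goals.
    Weight each reachable goal by distance and return the best-scoring one."""
    dist = bfs_distances(grid, agent_pos)
    best, best_score = None, float('-inf')
    for pos, weight in goals:
        if pos not in dist:
            continue  # not reachable through known terrain
        score = weight / (1 + dist[pos])  # nearer goals matter more
        if score > best_score:
            best, best_score = pos, score
    return best
```

Avoid-goals end up with negative scores, so they only win when nothing positive is reachable; genuine fleeing needs its own handling.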
I've read a bit about scent maps and Dijkstra maps (which are distance fields, really), but I'm not sure if these scale to many, many agents.

So, any ideas?
Title: Re: AI for dynamic environments
Post by: Quendus on January 26, 2014, 03:43:54 PM
Remember that in most computer games the primary purpose of AI programming is to provide an entertaining experience for the human player and support their suspension of disbelief. Highly competitive computer players of games like chess and draughts are the exception rather than the rule.

And regarding efficiency, the normal method is to program a system that produces the behaviour you want, and if it's too slow or memory hungry then you can profile and find ways to improve performance.

Ignoring those points and considering the problem as-is, I think a couple of clarifications are necessary:
Remember that you can check if two agents have LoS contact without iterating over the whole field of vision.
Title: Re: AI for dynamic environments
Post by: reaver on January 26, 2014, 03:58:08 PM
Wrt suspension of disbelief vs actual simulation, I just want to have something that behaves in a reasonable and expected way... If I start with a hack, then any added behaviour would be hack-upon-hack-upon-hack, etc. You think that's a bad/wrong idea? I'm an experienced programmer, but not a gameplay programmer, so any hints/experience on that are welcome :)

For map size, I guess from 80x80 to 300x300. LoS range about 7-15 (euclidean distance)
For the relationships, I'm quite unsure yet because I haven't given it much thought. I guess, besides the player, a number of teams (10-15) of 5-10 creatures each (most would be monster mobs, some would be NPCs). I guess a "telepathic link" between creatures of the same team won't be a problem.
Title: Re: AI for dynamic environments
Post by: miki151 on January 26, 2014, 07:22:42 PM
If you're going to have many, many agents, then fleeing is likely to be the most expensive calculation. In KeeperRL I have a pretty large map (600x600), with wildlife, villages, bandits, etc. At any moment a group of agents might start chasing another group. I use Dijkstra maps, as described on RogueBasin, but only up to a 6-7 tile radius; beyond that it's a simple "run in the opposite direction" algorithm, for efficiency reasons. I might even drop the Dijkstra map algorithm in situations where the player is not involved.

It's good to know the trick to clear an array in O(1) cost, so you don't have to initialize a 300x300 array for every Dijkstra run. Another trivial thing is to remember the path for the next turns and only recalculate it if the actor can't take a move.

Oh and it might be better not to calculate LOS for every agent, as that's expensive. First try to see if it has any enemies close by. That can be optimized nicely by dividing the map into blocks, say 20x20 tiles each, and only check adjacent blocks for enemies.
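That block idea might look like this in Python. The block size follows the 20x20 suggestion above; the data layout (a dict of agent name to position) is an assumption for the sketch.

```python
from collections import defaultdict

BLOCK = 20  # block size in tiles, as suggested above

def block_of(pos):
    x, y = pos
    return (x // BLOCK, y // BLOCK)

def build_buckets(agents):
    """agents: dict of name -> (x, y). Rebuilt (or incrementally updated) each turn."""
    buckets = defaultdict(list)
    for name, pos in agents.items():
        buckets[block_of(pos)].append(name)
    return buckets

def nearby_agents(pos, buckets):
    """Candidates for LOS checks: only agents in the same or an adjacent
    block can be within ~20 tiles, so everyone else is skipped outright."""
    bx, by = block_of(pos)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            found.extend(buckets.get((bx + dx, by + dy), ()))
    return found
```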
Title: Re: AI for dynamic environments
Post by: AgingMinotaur on January 26, 2014, 09:04:44 PM
reaver, if you haven't read it already, check out this interesting article series (http://roguebasin.com/index.php?title=Roguelike_Intelligence). I particularly like the state machine AI idea, and used that model in my own abandoned game Squirm, with data for different AI plugins stored in a file (kits/brains.yug in the source). Each state takes a quarry (called 'q'), which is the focus of attention of the active critter; when it's that critter's turn, the state runs through a series of conditions. For instance, for a state like "zombi attack", the conditions and their triggers might be:
* If 'q' is dead, mission accomplished; return to previous state,
* if 'q' is in range to be attacked, do that,
* if I can approach 'q' to come into attacking range, do that,
* if 'q' is not in sight, explore to find 'q' (possibly heading for last known position of 'q', or scent, etc.),
* if all else fails, wander aimlessly.

Some conditions trigger actions, others just change the state of a critter. This basic pattern can be scaled for more complex situations, including things like drinking potions, zapping wands, assigning quests, and going to the pub for a drink.
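A sketch of how one such state could look in code -- every method name on `critter` and `q` here is a hypothetical stand-in for the real condition checks and actions:

```python
def zombie_attack_turn(critter, q):
    """One turn of a 'zombi attack' state: run through the conditions
    in priority order and take the first action that applies."""
    if q.is_dead():
        critter.pop_state()  # mission accomplished: return to previous state
    elif critter.in_attack_range(q):
        critter.attack(q)
    elif critter.can_approach(q):
        critter.step_towards(q)
    elif not critter.can_see(q):
        critter.explore_towards(q.last_known_position)
    else:
        critter.wander()  # all else failed
```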

Oh and it might be better not to calculate LOS for every agent, as that's expensive. First try to see if it has any enemies close by. That can be optimized nicely by dividing the map into blocks, say 20x20 tiles each, and only check adjacent blocks for enemies.
In the system I used, whenever a critter performs an action, it checks to see if it's in line of sight of nearby critters, by checking a straight line for obstacles. Critters who see it test a series of conditions/triggers, based on their current state as well as their general world view (e.g. an attacking state may have a condition to like anyone who inflicts harm on its own enemy; a man-eating monster may be aggravated at the mere sight of any human whatsoever). This removes the load of performing a complete FOV calculation for every critter every round. Each critter keeps a register of other critters it knows about, with info on how well liked they are (bias). I found this kind of system could be made very flexible without getting too complex. It doesn't account for items lying inconspicuously around, though. I never got that far in Squirm, but a possible solution might be to let items emit a kind of beacon every three rounds or something, alerting nearby critters of their existence.

Just one of many plausible solutions.

As always,
Minotauros
Title: Re: AI for dynamic environments
Post by: Krice on January 26, 2014, 11:03:15 PM
I just want to have something that behaves in a reasonable and expected way...

I would say it's not what you want. When AI behaves in a clearly algorithmic way, it can have a weakness players can figure out. Sometimes that can be a reasonable feature, but I would just call it boring.
Title: Re: AI for dynamic environments
Post by: Trystan on January 27, 2014, 06:44:54 AM
If I start with a hack, then any added behaviour would be hack-upon-hack-upon-hack etc etc.

I would suggest trying a hack-upon-hack-upon-hack approach.

I've tried state machines, personality stats, "needs" systems, and I think everything other than neural nets (please don't use neural nets). What I've found works best for me is a big old mess of if statements. It's the easiest to understand and tweak, gets good behavior with little code, is very flexible, and - if you use a bit of randomness - is a good mix of intelligent and unpredictable. If you find that you have a lot of duplication and obvious structure, then refactor. But don't use neural nets - they're a ton of work and I've never seen any remotely interesting AI come of them.

It's also useful to move logic from the creature class into the items, abilities, and spells. So creatures don't know how to use specific spells, but they ask all their spells if they should be used and how to use them. For example, you could have spells like these:
Name | Chance | Effect
Heal | 100% - HP% | Gain 10 HP
Magic missile | 25% for each enemy in range | Create a Magic Missile targeting a random enemy in range
Blink | 25% if low HP and in range of melee or ranged attack, otherwise 0% | Teleport nearby at random
Spellcasters go through their spells and cast the first one they meet the criteria for, or move on to the next step in the AI. I've got some code examples of how I did this for a 7DRL if you're interested.
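That spell-driven loop might look roughly like this (Python sketch; the class names, chances, and damage numbers are invented for illustration, not Trystan's actual code):

```python
import random

class Spell:
    """Each spell knows its own casting criteria; the creature just asks."""
    def chance(self, caster, enemies):
        return 0.0
    def cast(self, caster, enemies):
        pass

class Heal(Spell):
    def chance(self, caster, enemies):
        return 1.0 - caster.hp / caster.max_hp  # 100% - HP%
    def cast(self, caster, enemies):
        caster.hp = min(caster.max_hp, caster.hp + 10)  # gain 10 HP

class MagicMissile(Spell):
    def chance(self, caster, enemies):
        return 0.25 * len(enemies)  # 25% for each enemy in range
    def cast(self, caster, enemies):
        random.choice(enemies).hp -= 5  # made-up damage value

def take_turn(caster, enemies, rng=random.random):
    """Cast the first spell whose criteria are met, else fall through to
    the rest of the AI (signalled here by returning None)."""
    for spell in caster.spells:
        if rng() < spell.chance(caster, enemies):
            spell.cast(caster, enemies)
            return spell
    return None
```

The creature class never mentions any specific spell, so adding a new spell means adding one class, not touching the AI loop.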
Title: Re: AI for dynamic environments
Post by: reaver on January 27, 2014, 09:58:32 AM
Thanks for all the replies guys.

It's good to know the trick to clear an array in O(1) cost, so you don't have to initialize a 300x300 array for every Dijkstra run.

... and the trick is? :)

reaver, if you haven't read it already, check out this interesting article series (http://roguebasin.com/index.php?title=Roguelike_Intelligence) ...

Thanks for the refs and suggestions, Minotaur; I'll check the articles out. I'm not sure I got the FoV interaction you described, but I guess that will come later in development.

I've tried state machines, personality stats, "needs" systems, and I think everything other than neural nets (please don't use neural nets). What I've found works best for me is a big old mess of if statements.

I was actually thinking of state machines for the higher-level AI, but I have to read the AI articles on RogueBasin first to get some more info. The problem I described involves very little high-level AI, but it's 'dangerous' in terms of computational cost if done carelessly.

It's also useful to move logic from the creature class into the items, abilities, and spells. So creatures don't know how to use specific spells, but they ask all their spells if they should be used and how to use them.

Thanks for the suggestion, it sounds a bit counter-intuitive though (splitting the AI/decision-making between the creature and its items), and spells/abilities are something that will probably come quite a bit later on :)
Title: Re: AI for dynamic environments
Post by: miki151 on January 27, 2014, 10:35:39 AM
... and the trick is? :)
Have two arrays: distance and dirty, and a counter. Before every dijkstra run increase the counter. Reading and writing the distance is as follows:

Code:
setDistance(position, value) {
  distance[position] = value;
  dirty[position] = counter;
}

getDistance(position) {
  if (dirty[position] < counter)
    return infinity;
  else
    return distance[position];
}

dirty simply tells you if the value is left over from a previous run or if it's up to date. So increasing the counter is like initializing the array.
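A runnable Python version of the same trick, with the counter started at 1 so every cell begins stale:

```python
import math

class ScratchGrid:
    """Reusable distance array for Dijkstra runs, 'cleared' in O(1)
    by bumping a run counter, exactly as described above."""
    def __init__(self, size):
        self.distance = [0] * size
        self.dirty = [0] * size  # run number that last wrote each cell
        self.counter = 1         # start at 1 so every cell begins stale

    def clear(self):
        self.counter += 1  # invalidates every cell without touching the arrays

    def set(self, pos, value):
        self.distance[pos] = value
        self.dirty[pos] = self.counter

    def get(self, pos):
        if self.dirty[pos] < self.counter:
            return math.inf  # leftover from a previous run
        return self.distance[pos]
```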
Title: Re: AI for dynamic environments
Post by: Quendus on January 27, 2014, 11:24:54 AM
The trick doubles the cost of each step in the pathfinding algorithm, so you can only expect performance improvements if the number of tiles explored is less than half the number of tiles on the map. There's no improvement in asymptotic complexity unless the map is extremely sparse.

Caching behaviour on modern systems could also make the difference in performance difficult to predict - it's quite possible for the simpler solution to be faster. If you really need to maximise the number of Dijkstra scans you perform, it's better to profile the code and find what's really taking time. The results are often surprising.

One option on smaller, static maps is simply to precompute shortest paths between all nodes. For each node, compute a distance field and store it. You can find shortest paths implicitly by checking which of the adjacent nodes has the shortest distance to the target. This is relatively quick to compute, but takes a lot of space. I wouldn't recommend it on a map with more than 10,000 walkable nodes, and it's not compatible with mechanics like tunnelling and dynamic obstacles -- except on even smaller maps.
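A sketch of the precomputation and the implicit path-following (Python; assumes a static 4-connected grid of '.' floors, so this is exactly the case where the memory cost is the trade-off):

```python
from collections import deque
from math import inf

def bfs_field(grid, start):
    """Distance field from `start` over walkable '.' tiles (4-way)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = n
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == '.' and n not in dist):
                dist[n] = dist[(x, y)] + 1
                queue.append(n)
    return dist

def all_distance_fields(grid):
    """One field per walkable node: heavy on memory, cheap to query later."""
    return {(x, y): bfs_field(grid, (x, y))
            for y, row in enumerate(grid)
            for x, tile in enumerate(row) if tile == '.'}

def next_step(pos, target, fields):
    """Paths fall out implicitly: step to whichever neighbour has the
    smallest precomputed distance to the target."""
    field = fields[target]
    x, y = pos
    options = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return min(options, key=lambda n: field.get(n, inf))
```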
Title: Re: AI for dynamic environments
Post by: reaver on January 27, 2014, 11:45:19 AM
I would agree with Quendus and go with profiling-on-demand

One option on smaller, static maps is simply to precompute shortest paths between all nodes. For each node, compute a distance field and store it.

I've been thinking about this, pre-calculating shortest paths in a dungeon, for each area (room/corridor), the paths from each connection (door) to each other connection. This would work nicely as an optimisation, but tunnelling breaks this.

I just want to have something that behaves in a reasonable and expected way...
I would say it's not what you want. When AI behaves in clear algorithmic way it can have a weakness players can figure out. Sometimes it can be a reasonable feature, but I would just call it boring.

Miscommunication here... behaving in a reasonable and expected way towards *me*, the developer, so I can understand what's going on, modify it, vary it with rand(), optimize, etc. I think hack-on-hack is the best-first-search of programming: awesome for prototyping, but when things get complicated, good luck trying to make sense of it, retrace your steps, and rewrite...
Title: Re: AI for dynamic environments
Post by: Quendus on January 27, 2014, 12:06:29 PM
Reading over the original post, it seems you want the following traits in your AI actors:
and presumably:
I'm not going to make broad statements about whether that's possible or not, but I think it might be a good idea to work out which of these are real priorities and which can be relaxed or dropped.

Regarding distance field precomputation and compatibility with tunnelling: if map changes are rare events (for instance, only due to player action and not due to monsters who eat walls), then it's possible to update the map gradually, recomputing distance fields in a background thread, or between turns in groups of 10-50. You can swap the new distance field in when it's complete.
Title: Re: AI for dynamic environments
Post by: miki151 on January 27, 2014, 12:26:56 PM
The trick doubles the cost of each step in the pathfinding algorithm
No it doesn't; the dominating operation in A* is popping from the priority queue.

Quote
, so you can only expect performance improvements if the number of tiles explored is less than half the number of tiles on the map. There's no improvement in asymptotic complexity unless the map is extremely sparse.
The OP was talking about agents with limited FoV on a large map (300x300), so the paths will be fairly short.
Title: Re: AI for dynamic environments
Post by: Quendus on January 27, 2014, 12:36:57 PM
The trick doubles the cost of each step in the pathfinding algorithm
No it doesn't, the dominating operation in A* is popping from the priority queue.
Sorry, I thought you were talking about Dijkstra and/or in general about clearing any array that needs to be updated regularly. After all, you didn't mention A* at all and your wording:
It's good to know the trick to clear an array in O(1) cost, so you don't have to initialize a 300x300 array for every Dijkstra run.
Made it sound like the trick was much more broadly applicable. In any case, I'd file it under micro-optimisation.


Quote
Quote
, so you can only expect performance improvements if the number of tiles explored is less than half the number of tiles on the map. There's no improvement in asymptotic complexity unless the map is extremely sparse.
The OP was talking about agents with limited FoV on a large map (300x300), so the paths will be fairly short.
If you run Dijkstra on a small subregion of a large map, you can either do it on an array with equal size to the small subregion, or on the corresponding region of the whole array (with a rectangle describing which tiles the results are valid for). In either case, the amount of re-initialisation to be done is minuscule.
Title: Re: AI for dynamic environments
Post by: reaver on January 27, 2014, 12:46:21 PM
Ok, my original post might have made it sound like I want everything and the kitchen sink, I might need to elaborate:

My idea is to group the creatures into two main categories of AI complexity:
  • Stupid to moderately smart : all monsters
  • NPC : characters that would play like the player

I'd like to have a good auto-play bot for the NPCs above, I don't care about perfection or true AI or whatever, but not visibly dumb.
I don't want either the monsters or NPC bots to cheat much, but not strictly prohibiting it.
I'd like to be able to run a number of those bots (a dozen or two) simultaneously in a level.
Levels could be large, and large levels with many smartypants bots demand fast performance.

I don't think I'm asking for the sky and stars; I'm just trying to think up an extensible framework that can, for starters, satisfy *a few* of my requirements, and that I won't have to rewrite from scratch in a month.
Title: Re: AI for dynamic environments
Post by: miki151 on January 27, 2014, 12:47:09 PM
If you run Dijkstra on a small subregion of a large map, you can either do it on an array with equal size to the small subregion, or on the corresponding region of the whole array (with a rectangle describing which tiles the results are valid for). In either case, the amount of re-initialisation to be done is miniscule.
That's a reasonable approach and exactly what I was doing until a few days ago. The problem, in my case, was that once in a while the path needed to leave the subregion, and the algorithm couldn't find it. Increasing the margin helps, but the problem goes away completely if you just use the whole map with the trick to re-initialize it. That's the "pro" way of doing it, I believe.

Also, the trick is widely used in other applications; it's good to remember. It's not a micro-optimization -- it reduces the time complexity of some algorithms.
Title: Re: AI for dynamic environments
Post by: Quendus on January 27, 2014, 02:00:05 PM
Yes, it's a good optimisation in cases where there's a high ratio between the number of tiles potentially altered and the average number of tiles actually altered in each run.
Ok, my original post might have made it sound like I want everything and the kitchen sink, I might need to elaborate:

My idea is to group the creatures into two main categories of AI complexity:
  • Stupid to moderately smart : all monsters
  • NPC : characters that would play like the player

I'd like to have a good auto-play bot for the NPCs above, I don't care about perfection or true AI or whatever, but not visibly dumb.
I don't want either the monsters or NPC bots to cheat much, but not strictly prohibiting it.
I'd like to be able to run a number of those bots (a dozen or two) simultaneously in a level.
Levels could be large, and large levels with many smartypants bots demand fast performance.

I don't think I'm asking for the sky and stars; I'm just trying to think up an extensible framework that can, for starters, satisfy *a few* of my requirements, and that I won't have to rewrite from scratch in a month.
What behaviours would you expect from the smart enemies? Do they really need to be smart, or can you simulate smartness by giving them advantages over other monsters and the player? What differences in behaviour do you expect from the dumb enemies?

There's an opposition between individual-based AI routines that scale with the number of agents, and group/map-based AI routines that scale with the map size and number of different behaviours that must be supported.
Individual-focussed AI code may hinder your ability to add lots of dumb enemies, and group/map-focussed code may hinder the flexibility of the smart enemies. A dual approach may be twice as much work.
Title: Re: AI for dynamic environments
Post by: reaver on January 27, 2014, 03:17:56 PM
What behaviours would you expect from the smart enemies? Do they really need to be smart, or can you simulate smartness by giving them advantages over other monsters and the player? What differences in behaviour do you expect from the dumb enemies?

There's an opposition between individual-based AI routines that scale with the number of agents, and group/map-based AI routines that scale with the map size and number of different behaviours that must be supported.
Individual-focussed AI code may hinder your ability to add lots of dumb enemies, and group/map-focussed code may hinder the flexibility of the smart enemies. A dual approach may be twice as much work.

In my (ideal) view, I want a system to allow for a hierarchy of needs.


(Obviously I'm not going to think about how to do party formation now, just the dumb AI; I just want something sensibly designed that will keep whole-AI-system rewrites to a minimum)

When you talk about scaling, do they scale well or badly? I have no idea how different they can be, and I've never done group AI. I bet I'll be using it though, as all dumb and many smarter/smartest creatures would probably exhibit groupthink in some form or another. The smarter the creature, the more individual code I expect it to have, alongside higher-level group code.

By the way, does anybody use D*-lite around here?
Title: Re: AI for dynamic environments
Post by: miki151 on January 27, 2014, 04:18:13 PM
I'd like to hear about the group/map-based AI, it sounds interesting.
Title: Re: AI for dynamic environments
Post by: Trystan on January 27, 2014, 04:23:02 PM
... it sounds a bit counter-intuitive though (splitting the AI/decision-making between the creature and its items), and spells/abilities are something that will probably come quite a bit later on :)

I made an NPC that can occasionally beat my 7DRL from last year. You can see it in action at http://trystans.blogspot.com/2013/09/pugnacious-wizards-2-version-10.html. The AI only uses spells since there are no other items but you could easily add items and have them use the same system. The main AI loop is 24 lines: https://github.com/trystan/PugnaciousWizards2/blob/master/src/Hero.as#L28. It's not perfect, but it's interesting and I'm pretty proud of it.

As a general rule, moving logic out of the creature class has worked well for me.

Has anyone seen a good example of group-based AI that worked well?
Title: Re: AI for dynamic environments
Post by: Quendus on January 27, 2014, 05:41:08 PM
Joshua Day has written a lot on this kind of programming. It's essentially an extension of the scent map/distance field idea. One map computed for the whole level can provide any number of monsters with a specific behaviour (like chasing the player, chasing members of a specific group of agents, fleeing from the same group, or moving towards useful items and unexplored areas) essentially for free. This becomes inefficient on very large maps and when the number of different behaviours that need individual treatment is large.

If every agent has different relationships with others, then it's impractical. If they're divided into teams, then the performance bound would be on the number of teams times the size of the map. An agent can produce different higher-level behaviours by combining information from more than one map, recomputed once per turn, and avoid having to do any pathfinding of its own unless it has very unusual requirements.
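A minimal sketch of that team-map idea in Python (one multi-source BFS per team per turn, which every member then reads; grid encoding and function names are assumptions):

```python
from collections import deque
from math import inf

def team_distance_map(grid, team_positions):
    """One multi-source BFS per team per turn; every agent then reads it."""
    dist = {pos: 0 for pos in team_positions}
    queue = deque(team_positions)
    while queue:
        x, y = queue.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = n
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == '.' and n not in dist):
                dist[n] = dist[(x, y)] + 1
                queue.append(n)
    return dist

def chase_step(pos, enemy_map):
    """Roll downhill on the enemy team's map to close in on the nearest enemy."""
    x, y = pos
    options = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1), pos]
    return min(options, key=lambda n: enemy_map.get(n, inf))

def flee_step(pos, enemy_map):
    """Climb uphill on the very same map: one BFS serves both behaviours."""
    x, y = pos
    options = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1), pos]
    return max(options, key=lambda n: enemy_map.get(n, -inf))
```

The per-turn cost is bounded by (number of teams) x (map size) rather than the number of agents, which is where the scaling win comes from.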

http://paleoludic.com/writing/whitespace-reasoning/
http://paleoludic.com/writing/a-pathfinding-primer/
http://paleoludic.com/writing/engineering-pds/