Author Topic: AI for dynamic environments  (Read 20815 times)

reaver

  • Rogueliker
  • ***
  • Posts: 207
  • Karma: +0/-0
    • View Profile
AI for dynamic environments
« on: January 26, 2014, 03:13:45 PM »
Hi all,

First post here; I've been thinking about and developing roguelike-related things lately. I'm currently trying to tackle the subject of a goal-driven AI that plays "fair" (unexplored bits are unknown, and fog of war applies as well). Imagine the scenario below:

I spawn 2 agents (A and B) in a dungeon.
A would prioritize fleeing from B and then find the exit of the level, and B would prioritize chasing A and would explore the dungeon till he finds A.
The AIs explore the level as they go, and they obey fog of war rules: they only have the current state of what's in their LoS -- the rest, even if explored in the past, remains in whatever state it had when last seen. That latter bit is the "dynamic" aspect in my post subject: the map can change without the agent knowing it, unless it explores the changed area again.

Now, how would one go about implementing this in a scalable way? The above scenario has 2 agents, but ideally I'd like to have 50 of those or more. My current (inefficient) thoughts are roughly described below:
-------------------
At each turn, each AI scans its LoS.
Every point that's walkable and has unexplored neighbours is an exploration goal.
Points that represent enemies or the exit are goals as well, with negative and positive weights respectively (avoid/go to).
All these goals are weighted by distance.
I pick the 'best' goal out of these, run A*, and get the first point.
Rinse and repeat.
-------------------
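To make that concrete, here's a rough C++ sketch of the per-turn loop. scanLoS, isWalkable, hasUnexploredNeighbour, isExit and findPathAStar are hypothetical placeholders for the real map/agent code, the weights are arbitrary, and the "flee from enemies" half is left out:

Code:
// Sketch of the per-turn goal selection described above.
#include <cmath>
#include <limits>
#include <vector>

struct Point { int x, y; };
struct Agent { Point pos; };

// Hypothetical hooks into the real game:
std::vector<Point> scanLoS(const Agent&);               // tiles currently visible
bool isWalkable(Point);
bool hasUnexploredNeighbour(Point);
bool isExit(Point);
std::vector<Point> findPathAStar(Point from, Point to);

Point pickNextStep(const Agent& agent) {
    struct Goal { Point pos; double weight; };
    std::vector<Goal> goals;
    for (const Point& p : scanLoS(agent)) {
        if (isExit(p))
            goals.push_back({p, 4.0});                   // "go to" goal, strong pull
        else if (isWalkable(p) && hasUnexploredNeighbour(p))
            goals.push_back({p, 1.0});                   // exploration goal
        // enemies would contribute negative weights and push the agent away;
        // that half of the problem is omitted here
    }
    // Weight the goals by distance and keep the best one.
    const Goal* best = nullptr;
    double bestScore = -std::numeric_limits<double>::infinity();
    for (const Goal& g : goals) {
        double d = std::hypot(double(g.pos.x - agent.pos.x),
                              double(g.pos.y - agent.pos.y));
        double score = g.weight / (1.0 + d);             // closer goals score higher
        if (score > bestScore) { bestScore = score; best = &g; }
    }
    if (!best)
        return agent.pos;                                // nothing useful in sight
    // Run A* to the chosen goal and take only the first step of the path.
    std::vector<Point> path = findPathAStar(agent.pos, best->pos);
    return path.size() > 1 ? path[1] : agent.pos;
}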
I've read a bit about scent maps and Dijkstra maps (which are really just distance fields), but I'm not sure these scale to many, many agents.

So, any ideas?
« Last Edit: January 26, 2014, 03:33:00 PM by reaver »

Quendus

  • Rogueliker
  • ***
  • Posts: 447
  • Karma: +0/-0
  • $@ \in \{1,W\} \times \{1,H\}$
    • View Profile
    • Klein Roguelikes
Re: AI for dynamic environments
« Reply #1 on: January 26, 2014, 03:43:54 PM »
Remember that in most computer games the primary purpose of AI programming is to provide an entertaining experience for the human player and support their suspension of disbelief. Highly competitive computer players of games like chess and draughts are the exception rather than the rule.

And regarding efficiency, the normal method is to program a system that produces the behaviour you want, and if it's too slow or memory hungry then you can profile and find ways to improve performance.

Ignoring those points and considering the problem as-is, I think a couple of clarifications are necessary:
  • What map size and LoS range do you want to use?
  • How complex is the network of relationships between agents? (For instance, are they split into a small number of teams, or do they each flee from one person and chase one other, or are there no limits on the network?)
Remember that you can check if two agents have LoS contact without iterating over the whole field of vision.
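For instance, a minimal sketch of such a check (assuming a hypothetical isOpaque(x, y) map query) just walks a Bresenham line between the two agents:

Code:
#include <cstdlib>

// Hypothetical map query: true if the tile at (x, y) blocks sight.
bool isOpaque(int x, int y);

// Point-to-point visibility: walk a Bresenham line from (x0, y0) to (x1, y1)
// and fail on the first opaque tile strictly between the two endpoints.
bool hasLineOfSight(int x0, int y0, int x1, int y1) {
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx - dy;
    while (x0 != x1 || y0 != y1) {
        int e2 = 2 * err;
        if (e2 > -dy) { err -= dy; x0 += sx; }
        if (e2 <  dx) { err += dx; y0 += sy; }
        if (x0 == x1 && y0 == y1) break;    // don't require the target tile itself to be transparent
        if (isOpaque(x0, y0)) return false;
    }
    return true;
}

A cheap distance check before the line walk keeps this inexpensive even with many agents.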

reaver

  • Rogueliker
  • ***
  • Posts: 207
  • Karma: +0/-0
    • View Profile
Re: AI for dynamic environments
« Reply #2 on: January 26, 2014, 03:58:08 PM »
Wrt suspension of disbelief vs actual simulation, I just want to have something that behaves in a reasonable and expected way... If I start with a hack, then any added behaviour would be hack-upon-hack-upon-hack etc etc. Do you think that's a bad/wrong idea? I'm an experienced programmer, but not a gameplay programmer, so any hints/experience on that are welcome :)

For map size, I guess from 80x80 to 300x300. LoS range is about 7-15 (Euclidean distance).
For the relationships, I'm quite unsure yet because I haven't given it too much thought. I guess, besides the player, a number of teams (10-15) of 5-10 creatures each (most would be monster mobs, some would be NPCs). I guess a "telepathic link" between creatures of the same team won't be a problem.

miki151

  • Rogueliker
  • ***
  • Posts: 264
  • Karma: +0/-0
    • View Profile
Re: AI for dynamic environments
« Reply #3 on: January 26, 2014, 07:22:42 PM »
If you're going to have many, many agents, then fleeing is likely to be the most expensive calculation. In KeeperRL I have a pretty large map (600x600), with wildlife, villages, bandits, etc. At any moment a group of agents might start chasing another group. I use Dijkstra maps, as described on RogueBasin, but only up to a radius of 6-7 tiles; beyond that it's a simple "run in the opposite direction" algorithm, for efficiency reasons. I might even drop the Dijkstra map algorithm in situations where the player is not involved.

It's good to know the trick to clear an array in O(1) cost, so you don't have to initialize a 300x300 array for every Dijkstra run. Another trivial thing is to remember the path for the following turns and only recalculate it if the actor can't take the next move.
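A sketch of what that path caching might look like (illustrative names, not KeeperRL's actual code):

Code:
#include <cstddef>
#include <vector>

struct Point { int x, y; };

// Hypothetical hooks into the real game:
bool isWalkable(Point);                             // can the actor step here right now?
std::vector<Point> findPath(Point from, Point to);  // full pathfinding run

// Remember the path across turns; only redo the pathfinding when the stored
// path runs out or its next step can no longer be taken.
struct CachedPath {
    std::vector<Point> path;    // path[0] is the tile the path was computed from
    std::size_t next = 1;       // index of the next step to take

    Point nextStep(Point current, Point target) {
        if (next >= path.size() || !isWalkable(path[next])) {
            path = findPath(current, target);       // recalculate only when needed
            next = 1;
        }
        return next < path.size() ? path[next++] : current;
    }
};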

Oh, and it might be better not to calculate LoS for every agent, as that's expensive. First check whether it has any enemies close by. That can be optimized nicely by dividing the map into blocks, say 20x20 tiles each, and only checking adjacent blocks for enemies.
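A sketch of that bucketing, again with illustrative names:

Code:
#include <vector>

struct Creature;   // whatever the game's creature type is

// Coarse spatial grid: the map is split into BLOCK x BLOCK tile blocks and
// each block keeps a list of the creatures currently standing in it.
constexpr int BLOCK = 20;

struct SpatialGrid {
    int blocksW, blocksH;
    std::vector<std::vector<Creature*>> blocks;   // blocksW * blocksH buckets

    SpatialGrid(int mapW, int mapH)
        : blocksW((mapW + BLOCK - 1) / BLOCK),
          blocksH((mapH + BLOCK - 1) / BLOCK),
          blocks(blocksW * blocksH) {}

    std::vector<Creature*>& bucket(int x, int y) {
        return blocks[(y / BLOCK) * blocksW + (x / BLOCK)];
    }

    // Gather candidates from the creature's own block and the 8 adjacent ones;
    // only these need an actual distance / hostility / LoS check.
    std::vector<Creature*> nearby(int x, int y) const {
        std::vector<Creature*> result;
        int bx = x / BLOCK, by = y / BLOCK;
        for (int j = by - 1; j <= by + 1; ++j)
            for (int i = bx - 1; i <= bx + 1; ++i)
                if (i >= 0 && j >= 0 && i < blocksW && j < blocksH)
                    for (Creature* c : blocks[j * blocksW + i])
                        result.push_back(c);
        return result;
    }
};

The buckets have to be updated whenever a creature crosses a block boundary; with an LoS radius of up to 15 and 20-tile blocks, the creature's own block plus its neighbours always covers everything in sight.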
« Last Edit: January 26, 2014, 07:25:43 PM by miki151 »
KeeperRL, Dungeon Keeper in roguelike style:
http://keeperrl.com

AgingMinotaur

  • Rogueliker
  • ***
  • Posts: 805
  • Karma: +2/-0
  • Original Discriminating Buffalo Man
    • View Profile
    • Land of Strangers
Re: AI for dynamic environments
« Reply #4 on: January 26, 2014, 09:04:44 PM »
reaver, if you haven't read it already, check out this interesting article series. I particularly like the state machine AI idea, and used that model in my own abandoned game Squirm, with data for the different AI plugins stored in a file (kits/brains.yug in the source). Each state takes a quarry (called 'q'), which is the focus of attention of the active critter; when it's that critter's turn, the state runs through a series of conditions. For instance, for a state like "zombi attack", the conditions and their triggers might be:
  • If 'q' is dead, mission accomplished; return to the previous state.
  • If 'q' is in range to be attacked, do that.
  • If I can approach 'q' to come into attacking range, do that.
  • If 'q' is not in sight, explore to find 'q' (possibly heading for the last known position of 'q', or following scent, etc.).
  • If all else fails, wander aimlessly.

Some conditions trigger actions, others just change the state of a critter. This basic pattern can be scaled up for more complex situations, including things like drinking potions, zapping wands, assigning quests, and going to the pub for a drink.
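A condensed sketch of what one such state could look like in code (illustrative names, not the actual Squirm code):

Code:
// One state in the condition/trigger pattern described above.
struct Critter;

// Hypothetical helpers provided by the rest of the game:
bool isDead(const Critter&);
bool canAttack(const Critter& self, const Critter& q);
void attack(Critter& self, Critter& q);
bool canApproach(const Critter& self, const Critter& q);
void stepTowards(Critter& self, const Critter& q);
bool inSight(const Critter& self, const Critter& q);
void exploreTowards(Critter& self, const Critter& q);  // head for last known position, scent, etc.
void wanderAimlessly(Critter& self);

enum class StateResult { Done, Acted };   // Done = pop back to the previous state

struct ZombiAttackState {
    Critter* q;   // the quarry this state is focused on

    StateResult act(Critter& self) {
        if (isDead(*q))            return StateResult::Done;    // mission accomplished
        if (canAttack(self, *q))   { attack(self, *q);          return StateResult::Acted; }
        if (canApproach(self, *q)) { stepTowards(self, *q);     return StateResult::Acted; }
        if (!inSight(self, *q))    { exploreTowards(self, *q);  return StateResult::Acted; }
        wanderAimlessly(self);
        return StateResult::Acted;
    }
};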

Oh, and it might be better not to calculate LoS for every agent, as that's expensive. First check whether it has any enemies close by. That can be optimized nicely by dividing the map into blocks, say 20x20 tiles each, and only checking adjacent blocks for enemies.
In the system I used, whenever a critter performs an action, it checks whether it's in the line of sight of nearby critters by testing a straight line for obstacles. Critters who see it then test a series of conditions/triggers based on their current state as well as their general world view (e.g. an attacking state may have a condition to like anyone inflicting harm on its own enemy; a man-eating monster may be aggravated at the mere sight of any human whatsoever). This removes the load of performing a complete FoV calculation for every critter every round. Each critter keeps a register of other critters it knows about, with info on how well liked they are (bias). I found this kind of system could be made very flexible without getting too complex. It doesn't account for items lying inconspicuously around, though. I never got that far in Squirm, but a possible solution might be to let items emit a kind of beacon every three rounds or so, alerting nearby critters to their existence.
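A minimal sketch of that register (again purely illustrative, not the Squirm code):

Code:
#include <unordered_map>

struct Critter;

// Each critter remembers who it has encountered and how well it likes them.
struct Acquaintances {
    std::unordered_map<const Critter*, int> bias;   // > 0 friendly, < 0 hostile

    // Called when 'other' does something within our line of sight.
    void noticed(const Critter* other, int biasChange) {
        bias[other] += biasChange;   // e.g. negative when it harms a friend,
                                     // positive when it harms an enemy
    }

    bool isEnemy(const Critter* other) const {
        auto it = bias.find(other);
        return it != bias.end() && it->second < 0;
    }
};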

Just one of many plausible solutions.

As always,
Minotauros
This matir, as laborintus, Dedalus hous, hath many halkes and hurnes ... wyndynges and wrynkelynges.

Krice

  • (Banned)
  • Rogueliker
  • ***
  • Posts: 2316
  • Karma: +0/-2
    • View Profile
    • Email
Re: AI for dynamic environments
« Reply #5 on: January 26, 2014, 11:03:15 PM »
I just want to have something that behaves in a reasonable and expected way...

I would say that's not what you want. When AI behaves in a clearly algorithmic way, it can have weaknesses players can figure out. Sometimes that can be a reasonable feature, but I would just call it boring.

Trystan

  • Rogueliker
  • ***
  • Posts: 164
  • Karma: +0/-0
    • View Profile
    • my blog
Re: AI for dynamic environments
« Reply #6 on: January 27, 2014, 06:44:54 AM »
If I start with a hack, then any added behaviour would be hack-upon-hack-upon-hack etc etc.

I would suggest trying a hack-upon-hack-upon-hack approach.

I've tried state machines, personality stats, "needs" systems, and I think everything other than neural nets (please don't use neural nets). What I've found works best for me is a big old mess of if statements. It's the easiest to understand and tweak, gets good behavior with little code, is very flexible, and - if you use a bit of randomness - is a good mix of intelligent and unpredictable. If you find that you have a lot of duplication and obvious structure, then refactor. But don't use neural nets - it's a ton of work and I've never seen any remotely interesting AI come of it.

It's also useful to move logic from the creature class into the items, abilities, and spells. So creatures don't know how to use specific spells, but they ask all their spells if they should be used and how to use them. For example, you could have spells like these:
Name | % chance | Effect
Heal | 100% - HP% | Gain 10 HP
Magic missile | 25% for each enemy in range | Create a Magic Missile targeting a random enemy in range
Blink | 25% if low HP and in range of melee or ranged attack, otherwise 0% | Teleport nearby at random
Spellcasters go through their spells and either cast the first one they meet the criteria for or move on to the next step in the AI. I've got some code examples of how I did this for a 7DRL if you're interested.
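Roughly the shape it takes (a sketch with illustrative names, not the actual 7DRL code; only Heal is shown):

Code:
#include <memory>
#include <vector>

struct Creature;   // placeholder for the game's creature type

// Hypothetical helpers:
double hpFraction(const Creature&);   // current HP as a fraction of max, 0..1
void gainHp(Creature&, int amount);
double randomDouble();                // uniform in [0, 1)

// Each spell knows when and how it wants to be cast; the creature just asks.
struct Spell {
    virtual ~Spell() = default;
    virtual double castChance(const Creature& caster) const = 0;   // 0..1
    virtual void cast(Creature& caster) = 0;
};

struct Heal : Spell {
    double castChance(const Creature& caster) const override {
        return 1.0 - hpFraction(caster);          // "100% - HP%"
    }
    void cast(Creature& caster) override { gainHp(caster, 10); }
};

// The caster's turn: cast the first spell that passes its own check,
// otherwise fall through to the rest of the AI.
bool tryCastSomething(Creature& caster, std::vector<std::unique_ptr<Spell>>& spells) {
    for (auto& spell : spells)
        if (randomDouble() < spell->castChance(caster)) {
            spell->cast(caster);
            return true;
        }
    return false;   // no spell fired; continue with the normal AI
}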

reaver

  • Rogueliker
  • ***
  • Posts: 207
  • Karma: +0/-0
    • View Profile
Re: AI for dynamic environments
« Reply #7 on: January 27, 2014, 09:58:32 AM »
Thanks for all the replies guys.

It's good to know the trick to clear an array in O(1) cost, so you don't have to initialize a 300x300 array for every Dijkstra run.

... and the trick is? :)

reaver, if you haven't read it already, check out this interesting article series ...

Thanks for the refs and suggestions, Minotauros; I'll check the articles out. I'm not sure I got the FoV interaction you described, but I guess that will come later in development.

I've tried state machines, personality stats, "needs" systems, and I think everything other than neural nets (please don't use neural nets). What I've found works best for me is a big old mess of if statements.

I was actually thinking of state machines for the higher-level AI, but I have to read the AI articles on RogueBasin first to get some more info. The problem I described is very low on AI "high-levelness", but 'dangerous' in terms of computational cost if done carelessly.

It's also useful to move logic from the creature class into the items, abilities, and spells. So creatures don't know how to use specific spells, but they ask all their spells if they should be used and how to use them.

Thanks for the suggestion; it sounds a bit counter-intuitive though (splitting the AI/decision-making between the creature and its items), and spells/abilities are something that will probably come quite a bit later on :)

miki151

  • Rogueliker
  • ***
  • Posts: 264
  • Karma: +0/-0
    • View Profile
Re: AI for dynamic environments
« Reply #8 on: January 27, 2014, 10:35:39 AM »
... and the trick is? :)
Have two arrays, distance and dirty, plus a counter. Before every Dijkstra run, increase the counter. Reading and writing the distance works as follows:

Code:
setDistance(position, value) {
  distance[position] = value;
  dirty[position] = counter;
}

getDistance(position) {
  if (dirty[position] < counter)
    return infinity;
  else
    return distance[position];
}

dirty simply tells you whether the value is left over from a previous run or up to date, so increasing the counter is like re-initializing the array.
KeeperRL, Dungeon Keeper in roguelike style:
http://keeperrl.com

Quendus

  • Rogueliker
  • ***
  • Posts: 447
  • Karma: +0/-0
  • $@ \in \{1,W\} \times \{1,H\}$
    • View Profile
    • Klein Roguelikes
Re: AI for dynamic environments
« Reply #9 on: January 27, 2014, 11:24:54 AM »
The trick doubles the cost of each step in the pathfinding algorithm, so you can only expect performance improvements if the number of tiles explored is less than half the number of tiles on the map. There's no improvement in asymptotic complexity unless the map is extremely sparse.

Caching behaviour on modern systems could also make the difference in performance difficult to predict - it's quite possible for the simpler solution to be faster. If you really need to maximise the number of Dijkstra scans you perform, it's better to profile the code and find what's really taking the time. The results are often surprising.

One option on smaller, static maps is simply to precompute shortest paths between all nodes. For each node, compute a distance field and store it. You can then find shortest paths implicitly by checking which of the adjacent nodes has the shortest distance to the target. This is relatively quick to compute, but takes a lot of space. I wouldn't recommend it on a map with more than 10,000 walkable nodes, and it's not compatible with mechanics like tunnelling and dynamic obstacles - except on even smaller maps.
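A sketch of the implicit path lookup, assuming a precomputed dist[target][node] table and 4-connected movement:

Code:
#include <vector>

// dist[target][node] = length of the shortest path from node to target,
// filled in advance with one Dijkstra/BFS per node (unreachable = a huge value).
// Nodes are indexed as y * width + x; the sketch glosses over map edges.
std::vector<std::vector<int>> dist;
int width;

// One step of the implicit path: move to whichever neighbour is closest
// to the target according to the precomputed field.
int nextNodeTowards(int from, int target) {
    const int neighbours[4] = { from - 1, from + 1, from - width, from + width };
    int best = from;
    for (int n : neighbours) {
        if (n < 0 || n >= (int)dist[target].size()) continue;
        if (dist[target][n] < dist[target][best]) best = n;
    }
    return best;   // stays put if no neighbour improves the distance
}

For N walkable nodes the table holds N*N distances, which is where the node-count ceiling above comes from.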

reaver

  • Rogueliker
  • ***
  • Posts: 207
  • Karma: +0/-0
    • View Profile
Re: AI for dynamic environments
« Reply #10 on: January 27, 2014, 11:45:19 AM »
I would agree with Quendus and go with profiling on demand.

One option on smaller, static maps is simply to precompute shortest paths between all nodes. For each node, compute a distance field and store it.

I've been thinking about this: pre-calculating shortest paths in a dungeon -- for each area (room/corridor), the paths from each connection (door) to every other connection. This would work nicely as an optimisation, but tunnelling breaks it.

I just want to have something that behaves in a reasonable and expected way...
I would say it's not what you want. When AI behaves in clear algorithmic way it can have a weakness players can figure out. Sometimes it can be a reasonable feature, but I would just call it boring.

Miscommunication here... I meant behaving in a reasonable and expected way towards *me*, the developer, so I can understand what's going on, modify it, vary it with rand(), optimize, etc. I think hack-on-hack is the best-first search of programming: it's awesome for prototyping, but when things get complicated, good luck trying to make sense of it, retrace, and rewrite...
« Last Edit: January 27, 2014, 11:51:28 AM by reaver »

Quendus

  • Rogueliker
  • ***
  • Posts: 447
  • Karma: +0/-0
  • $@ \in \{1,W\} \times \{1,H\}$
    • View Profile
    • Klein Roguelikes
Re: AI for dynamic environments
« Reply #11 on: January 27, 2014, 12:06:29 PM »
Reading over the original post, it seems you want the following traits in your AI actors:
  • Fair play (information limited by field of vision)
  • Optimal play (making the best decisions possible)
  • Plausible play (not breaking the game rules)
  • Dozens of agents
  • Complex relationships between agents
  • Large map
  • Fast performance
and presumably:
  • Entertainment
  • Working prototype within a year
I'm not going to make broad statements about whether that's possible or not, but I think it might be a good idea to work out which of these are real priorities and which can be relaxed or dropped.

Regarding distance field precomputation and compatibility with tunnelling: if map changes are rare events (for instance, only due to player action and not due to monsters who eat walls), then it's possible to update the map gradually, recomputing distance fields in a background thread or between turns in groups of 10-50. You can swap each new distance field in when it's complete.
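A sketch of the between-turns variant (single-threaded, illustrative names):

Code:
#include <queue>
#include <vector>

using DistanceField = std::vector<int>;

// Hypothetical full Dijkstra/BFS from one source node.
DistanceField computeDistanceField(int sourceNode);

std::vector<DistanceField> fields;   // the table the AI reads from, one field per source node
std::queue<int> staleSources;        // sources whose fields are outdated after a map change

// Call when the map changes: mark every source whose field may now be wrong.
void markAllStale(int nodeCount) {
    for (int n = 0; n < nodeCount; ++n)
        staleSources.push(n);
}

// Call once per turn: recompute a small batch, replacing each field wholesale
// so the AI never reads a half-built one.
void refreshSomeFields(int batchSize = 30) {
    for (int i = 0; i < batchSize && !staleSources.empty(); ++i) {
        int source = staleSources.front();
        staleSources.pop();
        fields[source] = computeDistanceField(source);
    }
}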

miki151

  • Rogueliker
  • ***
  • Posts: 264
  • Karma: +0/-0
    • View Profile
Re: AI for dynamic environments
« Reply #12 on: January 27, 2014, 12:26:56 PM »
The trick doubles the cost of each step in the pathfinding algorithm
No it doesn't; the dominating operation in A* is popping from the priority queue.

...so you can only expect performance improvements if the number of tiles explored is less than half the number of tiles on the map. There's no improvement in asymptotic complexity unless the map is extremely sparse.
The OP was talking about agents with limited FoV on a large map (300x300), so the paths will be fairly short.
KeeperRL, Dungeon Keeper in roguelike style:
http://keeperrl.com

Quendus

  • Rogueliker
  • ***
  • Posts: 447
  • Karma: +0/-0
  • $@ \in \{1,W\} \times \{1,H\}$
    • View Profile
    • Klein Roguelikes
Re: AI for dynamic environments
« Reply #13 on: January 27, 2014, 12:36:57 PM »
The trick doubles the cost of each step in the pathfinding algorithm
No it doesn't; the dominating operation in A* is popping from the priority queue.
Sorry, I thought you were talking about Dijkstra and/or in general about clearing any array that needs to be updated regularly. After all, you didn't mention A* at all and your wording:
It's good to know the trick to clear an array in O(1) cost, so you don't have to initialize a 300x300 array for every Dijkstra run.
made it sound like the trick was much more broadly applicable. In any case, I'd file it under micro-optimisation.


...so you can only expect performance improvements if the number of tiles explored is less than half the number of tiles on the map. There's no improvement in asymptotic complexity unless the map is extremely sparse.
The OP was talking about agents with limited FoV on a large map (300x300), so the paths will be fairly short.
If you run Dijkstra on a small subregion of a large map, you can either do it on an array the same size as the subregion, or on the corresponding region of the whole array (with a rectangle describing which tiles the results are valid for). In either case, the amount of re-initialisation to be done is minuscule.

reaver

  • Rogueliker
  • ***
  • Posts: 207
  • Karma: +0/-0
    • View Profile
Re: AI for dynamic environments
« Reply #14 on: January 27, 2014, 12:46:21 PM »
OK, my original post might have made it sound like I want everything and the kitchen sink, so let me elaborate:

My idea is to group the creatures into two main categories of AI complexity:
  • Stupid to moderately smart: all monsters
  • NPCs: characters that would play like the player

I'd like to have a good auto-play bot for the NPCs above; I don't care about perfection or true AI or whatever, but it shouldn't be visibly dumb.
I don't want either the monsters or the NPC bots to cheat much, but I'm not strictly prohibiting it.
I'd like to be able to run a number of those bots (a dozen or two) simultaneously in a level.
Levels could be large, and large levels with many smartypants bots require fast performance.

I don't think I'm asking for the sky and the stars; I'm just trying to think of an extensible framework that can, for starters, satisfy *a few* of my requirements, and that I won't have to rewrite from scratch in a month.