Author Topic: Smart AI vs Expensive AI  (Read 54632 times)

Zireael

Re: Smart AI vs Expensive AI
« Reply #15 on: June 24, 2014, 06:49:44 AM »
I know of two games which have really smart AI, but I don't know how expensive it is in terms of calculations:
1. Sil
2. Incursion

reaver

Re: Smart AI vs Expensive AI
« Reply #16 on: June 24, 2014, 06:11:49 PM »
Regarding the graph-like data structures: One major beef I have with them is that if you start digging and changing your level, they're a PITA to rebuild. Also, in some cases (caves, towns), it's completely unintuitive how you would go about generating the graphs.

An alternative would be organizing your data in a spatial hierarchy. The simplest one I can think of, and the one I'm planning to implement, is a quadtree, as it requires zero storage cost for the nodes -- every cell at every level can be derived implicitly. The cost is pretty much updating the hierarchy when stuff moves, but if you restrict the number of levels (e.g. each node has a 4x4 subgrid of children) the costs go down.
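The implicit-node idea can be sketched briefly. Since cell sizes are powers of two, the node containing a tile at any level is just a bit shift of its coordinates, so nothing is stored per node except an occupancy counter. This is a minimal illustrative sketch, not anyone's actual implementation; it uses a 2x2 branching factor (shift by 1 bit per level), whereas a 4x4 subgrid per node would shift by 2 bits per level:

```python
# Implicit spatial hierarchy over a tile grid: nodes are never stored,
# only per-node entity counts. Moving an entity touches one counter
# per level, and a coarse "is anything near here?" cull is one lookup.
from collections import defaultdict

class ImplicitQuadtree:
    def __init__(self, levels=3):
        self.levels = levels            # level 0 = single tile; each level doubles cell size
        self.counts = defaultdict(int)  # (level, cx, cy) -> number of entities in that cell

    def _node(self, level, x, y):
        # The node containing (x, y) at a given level is derived by shifting.
        return (level, x >> level, y >> level)

    def add(self, x, y):
        for lv in range(self.levels):
            self.counts[self._node(lv, x, y)] += 1

    def remove(self, x, y):
        for lv in range(self.levels):
            self.counts[self._node(lv, x, y)] -= 1

    def move(self, x0, y0, x1, y1):
        self.remove(x0, y0)
        self.add(x1, y1)

    def occupied(self, level, x, y):
        # Cheap cull: is there anything in the cell containing (x, y)?
        return self.counts.get(self._node(level, x, y), 0) > 0
```

Because nothing but counters is stored, digging out a wall changes nothing here; only entity movement updates the structure.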

chooseusername

Re: Smart AI vs Expensive AI
« Reply #17 on: June 24, 2014, 09:15:59 PM »
If a player cannot observe a behavior, it is irrelevant behavior, likely indistinguishable from pure randomness or other simple fakery. Games are not simulations.

This is a bit of a truism. I don't think anyone is talking about AI with no observable effects. Of course, it's important to take on board the criticism that complex/subtle AI can easily look indistinguishable from fairly mindless AI, as it seems people in this thread have.
I'm with mushroom patch on this. Games are simulations. Perhaps abstracted simulations, but simulations still. The reason you have mana and hit points is that your simulation is at a higher level and it's easier to design and program.

One of the things that people love about Incursion and want to see more of is the non-observed behaviour. They mention loving seeing signs that other life is going on in the dungeon. Now, Incursion has that behaviour happening as if the player were there, but that is irrelevant and is the easy way to do it. Harder and more optimal would be abstracting it, and doing so would lessen the chance of the effect seen by the player being believable. Simulating is something games do not do enough of, and for good reason.

mushroom patch

Re: Smart AI vs Expensive AI
« Reply #18 on: June 25, 2014, 12:39:11 AM »
Regarding the graph-like data structures: One major beef I have with them is that if you start digging and changing your level, they're a PITA to rebuild. Also, in some cases (caves, towns), it's completely unintuitive how you would go about generating the graphs.

An alternative would be organizing your data in a spatial hierarchy. The simplest one I can think of, and the one I'm planning to implement, is a quadtree, as it requires zero storage cost for the nodes -- every cell at every level can be derived implicitly. The cost is pretty much updating the hierarchy when stuff moves, but if you restrict the number of levels (e.g. each node has a 4x4 subgrid of children) the costs go down.

Yeah, anything you precompute may need to be recomputed to some degree with updates. (As an aside: In my opinion, many roguelikes go way overboard with cheap digging -- too fast, sometimes instantaneous with spells and wands that are easy to get, few limitations, etc. I mean, we're talking about excavating a volume of earth, often solid rock, measuring 5'x5'x8' or so for each tile, in other words, many, many tons of material...) In this case, though, I don't think the graph I've described needs to be rebuilt, just updated. But, sure, you'd need to write a bit more code to cover alterations to the dungeon.
« Last Edit: June 25, 2014, 06:19:18 AM by mushroom patch »

mushroom patch

Re: Smart AI vs Expensive AI
« Reply #19 on: June 25, 2014, 06:30:33 AM »
One of the things that people love about Incursion and want to see more of is the non-observed behaviour. They mention loving seeing signs that other life is going on in the dungeon. Now, Incursion has that behaviour happening as if the player were there, but that is irrelevant and is the easy way to do it. Harder and more optimal would be abstracting it, and doing so would lessen the chance of the effect seen by the player being believable. Simulating is something games do not do enough of, and for good reason.

Sorry about the double post, but I'm supposed to be writing a research statement, which means I'm in procrastination overdrive. I think this is right, although I'm not sure I get what's being said in the last two sentences. I agree with others in the thread that there are serious issues with designing a game as just a slice of a larger simulation, particularly performance related ones. On the other hand, simulation is great for creating flavor, depth, and realism (whatever that means in the context of the game). The answer seems to be finding ways to cheaply approximate simulation.

Omnivore

Re: Smart AI vs Expensive AI
« Reply #20 on: June 25, 2014, 10:20:49 PM »
One of the things that people love about Incursion and want to see more of is the non-observed behaviour. They mention loving seeing signs that other life is going on in the dungeon. Now, Incursion has that behaviour happening as if the player were there, but that is irrelevant and is the easy way to do it. Harder and more optimal would be abstracting it, and doing so would lessen the chance of the effect seen by the player being believable. Simulating is something games do not do enough of, and for good reason.

The difference between computer simulations and computer games is subtle but important. At the core, the distinction is that simulations are about things (or systems) and how they behave, and games are about a fun user experience.

When I said "if a player cannot observe a behavior, it is irrelevant behavior, likely indistinguishable from pure randomness or other simple fakery", the "pure randomness or other simple fakery" portion refers to creating the illusion of some complex behind-the-scenes simulation. I disagree that faking is somehow more expensive or harder than running a real simulation.

Let's put this in a context we can all see and understand: a persistent dungeon level which a player returns to after N turns. There are three approaches to handling this:
1) Ignore it and revive the level just as it was.
2) Run N turns of simulation to bring the simulation and the player frames of reference into sync.
3) Fake 2 by a simple yet intelligent application of randomness.

I believe approach #3 is preferable because it is simple to implement in terms of programmer effort, less demanding of resources especially as N increases, and largely indistinguishable from the observable results of #2. 
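Approach #3 fits in a few lines. The sketch below is purely illustrative -- every name in it is an assumption, not code from any game mentioned here -- but it shows the flavor: a drift factor that grows with N, so a level revisited after 50 turns looks slightly rearranged while one revisited after 5000 turns looks thoroughly lived-in:

```python
# "Fake" N elapsed turns on a revisited level with targeted randomness
# instead of simulating each turn. Monsters and items are plain dicts
# here only to keep the sketch self-contained.
import random

def fake_elapsed_turns(monsters, items, n_turns, rng):
    # The more time has passed, the more the level should have drifted.
    drift = min(1.0, n_turns / 1000.0)
    for m in monsters:
        # Wandering monsters end up somewhere else, more often as N grows.
        if rng.random() < drift:
            m["pos"] = rng.choice(m["reachable_tiles"])
        # Wounds heal over time.
        m["hp"] = min(m["max_hp"], m["hp"] + n_turns // 10)
    # Monsters eat or carry off some of the consumables lying around;
    # non-consumables stay put, so "stuff" is roughly conserved.
    return [it for it in items
            if not (it["consumable"] and rng.random() < drift * 0.5)]
```

The cost is constant regardless of N, which is exactly the property approach #2 lacks.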

If I understand you correctly, I believe we really only differ on the level of abstraction.  The higher the level of abstraction you apply to approach #2, the more it becomes approach #3.  You can run an immensely detailed simulation, but if I as a player see only a small edge of it, it is largely wasted. 

Bringing this all back around to the OP's questions: the less abstract the simulation is, the more expensive the AI implementations become. Consider that for a detailed simulation you treat the mobiles as actors who are indistinguishable from players except for the source of control input. They have their own FoV maps, complex decision making, pathing, etc., all of which run every turn regardless of whether they are close to the player or on the far side of the map. Yes, you can try to make this less expensive by caching various precalculations and deciding which mutations of the game state require recalculation of which portions of the various caches. I believe this is needless complexity compounding the original design error.

Alternatively, doing minimal calculations -- extending and reusing the player's FoV -- you can easily predict which mobiles will be observable within a move or two. At the point of transition between observable and non-observable you can apply whatever effects you like to make it look like some larger events have occurred behind the scenes -- the wounded mobile stumbles into view. The simulation backstory takes place in the player's mind just as it would if you truly ran a full sim. If you desire the appearance of a larger encompassing simulation, you can track the observed 'faked' simulation events and weight future abstractions by them.

In other words, if you truly must have a simulation running in the background, abstract it to the highest degree you can get away with and keep it as separate from the AI as you possibly can. Spend the cycles you have regained on smarter decision making when you need it. The most sophisticated simulated world you could possibly run will be rarely glimpsed by the player; the responsiveness of the game and the presence or absence of unintuitive limitations and unexpected behaviors are far more noticeable.

chooseusername

Re: Smart AI vs Expensive AI
« Reply #21 on: June 26, 2014, 02:35:44 AM »
When I said "if a player cannot observe a behavior, it is irrelevant behavior, likely indistinguishable from pure randomness or other simple fakery", the "pure randomness or other simple fakery" portion refers to creating the illusion of some complex behind-the-scenes simulation. I disagree that faking is somehow more expensive or harder than running a real simulation.

Let's put this in a context we can all see and understand: a persistent dungeon level which a player returns to after N turns. There are three approaches to handling this:
1) Ignore it and revive the level just as it was.
2) Run N turns of simulation to bring the simulation and the player frames of reference into sync.
3) Fake 2 by a simple yet intelligent application of randomness.

I believe approach #3 is preferable because it is simple to implement in terms of programmer effort, less demanding of resources especially as N increases, and largely indistinguishable from the observable results of #2. 

If I understand you correctly, I believe we really only differ on the level of abstraction.  The higher the level of abstraction you apply to approach #2, the more it becomes approach #3.
Yes, for the most part that is correct.  Where it isn't correct, both here and for the rest of your post, is that you seem to portray a level of simulation as an either/or choice.

A ticked simulation (approach #2), where AI/NPCs are all modelled the same way as the player, can be more expensive -- but only if you do it badly. And in the same way, a higher-level simulation (approach #3), where AI are modelled in a simpler way, can be just as expensive -- but only if you do it badly.

I also suggest that you can have different levels of abstraction, with parts that matter using modelling and parts that don't using fakery, and that you can dynamically switch in different levels when it matters. Also, there are umpteen games where players have started playing and talked amongst themselves saying "wow, these NPCs are really smart", but after a while the fakery becomes apparent and obvious.

Your post is long and full of claims, most of which are simplistic and not necessarily true. I suspect that you have a way you like to make games, and you shape your posts around it. You're welcome to it, as is whoever chooses to adopt your positioned views.

Kevin Granade

Re: Smart AI vs Expensive AI
« Reply #22 on: June 28, 2014, 06:36:56 AM »
Let's put this in a context we can all see and understand: a persistent dungeon level which a player returns to after N turns. There are three approaches to handling this:
1) Ignore it and revive the level just as it was.
2) Run N turns of simulation to bring the simulation and the player frames of reference into sync.
3) Fake 2 by a simple yet intelligent application of randomness.

I believe approach #3 is preferable because it is simple to implement in terms of programmer effort, less demanding of resources especially as N increases, and largely indistinguishable from the observable results of #2. 

If I understand you correctly, I believe we really only differ on the level of abstraction.  The higher the level of abstraction you apply to approach #2, the more it becomes approach #3.  You can run an immensely detailed simulation, but if I as a player see only a small edge of it, it is largely wasted. 
I mostly agree with this, with a clarification you may well be attempting to imply.
Whether you go with 2 or 3 is an implementation detail; the important distinction is whether your system is one that makes sense to the player. For example, a basic distinction is whether the scrambling of the level respects conservation of stuff. If you add or remove monsters and items, you're making a particular choice along the realism axis. Personally I prefer to emphasize realism, so I'd generally limit myself to moving non-consumable items around and having monsters consume consumables, but discarding that to enable a particular style of game is also a valid option.

Regarding FoV, to be concrete about how you would "cheat" with monster vision: you can simply store references to things monsters are interested in in a convenient data structure, cull for distance, and check visibility of the items directly. It would rarely be rewarding to do a full FoV calculation for a monster; the player is a special case, due to rendering more than anything else. In that special case you're required to determine visibility of relevant and irrelevant alike, since it will bother the player if it's inconsistent, hence the need for FoV precalculation.
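The cull-then-check idea might look like the following. This is a generic sketch under assumed conventions (grid[y][x] is True where a tile blocks sight; targets are coordinate pairs), not code from any game in the thread; the point is that each interesting thing costs one distance test plus at most one line cast, instead of a whole-FoV computation:

```python
# Distance-cull first, then cast a single Bresenham line per survivor.

def line_points(x0, y0, x1, y1):
    # Bresenham's line between two tiles, endpoints included.
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if (x0, y0) == (x1, y1):
            return
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def visible_targets(grid, mx, my, targets, sight_radius):
    seen = []
    for tx, ty in targets:
        # Distance cull first: it is by far the cheapest test.
        if (tx - mx) ** 2 + (ty - my) ** 2 > sight_radius ** 2:
            continue
        # Any blocking tile strictly between the endpoints hides the target.
        if all(not grid[y][x] for x, y in line_points(mx, my, tx, ty)
               if (x, y) not in ((mx, my), (tx, ty))):
            seen.append((tx, ty))
    return seen
```

The cost scales with the number of interesting things in range, not with the size of the monster's whole field of view.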

There are probably circumstances where a monster might want to check visibility of enough things that FoV calculation is the cheaper alternative, but I can't think of one.

AdamStrange

Re: Smart AI vs Expensive AI
« Reply #23 on: June 28, 2014, 06:57:39 AM »
There's also the actual code element to this:
It is simple to say "Monster check FOV for interesting things".
The code for it would depend on how you are storing things -- let's assume a single monster and a list of items with their locations (as opposed to items being stored in the map grid itself).

code:
for each item, check whether it is within view range
if it is within view range, check whether it is actually in view (walls would prevent seeing it)

That is fine for a small number of items, but it becomes intensive for large numbers of items and large numbers of monsters. You really need a solution that does it in the fewest possible steps.

FOV by tiles is very expensive.
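One common way to cut the steps down, sketched here in Python with assumed names and a made-up bucket size, is spatial bucketing: partition items into coarse grid cells so each monster scans only the few buckets near it rather than every item on the map:

```python
# Bucket items into coarse grid cells; each query touches only the
# buckets overlapping the monster's sight square.
from collections import defaultdict

BUCKET = 8  # tiles per bucket side; tune to the typical sight radius

def bucket_items(items):
    buckets = defaultdict(list)
    for (x, y) in items:
        buckets[(x // BUCKET, y // BUCKET)].append((x, y))
    return buckets

def items_near(buckets, mx, my, radius):
    bx0, bx1 = (mx - radius) // BUCKET, (mx + radius) // BUCKET
    by0, by1 = (my - radius) // BUCKET, (my + radius) // BUCKET
    found = []
    for bx in range(bx0, bx1 + 1):
        for by in range(by0, by1 + 1):
            for (x, y) in buckets.get((bx, by), []):
                # Final exact distance test on the few candidates left.
                if (x - mx) ** 2 + (y - my) ** 2 <= radius ** 2:
                    found.append((x, y))
    return found
```

With buckets sized near the sight radius, each monster inspects a handful of cells per turn instead of the whole item list, so cost stays roughly flat as the map fills up.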

reaver

Re: Smart AI vs Expensive AI
« Reply #24 on: June 28, 2014, 11:16:59 PM »
Regarding FoV, to be concrete about how you would "cheat" with monster vision: you can simply store references to things monsters are interested in in a convenient data structure, cull for distance, and check visibility of the items directly. It would rarely be rewarding to do a full FoV calculation for a monster; the player is a special case, due to rendering more than anything else. In that special case you're required to determine visibility of relevant and irrelevant alike, since it will bother the player if it's inconsistent, hence the need for FoV precalculation.

That's a nice idea and works well with sparsely distributed "things" in a map (other AI actors, items, doors, features, whatever) -- the sparser the things of interest are, the greater the win over a direct FoV calculation. You still need to cheat differently for monster path planning, as the "wall-or-floor" data is dense (assuming every tile has a walkable flag set or something similar). Unless you assume monster knowledge of the map, of course.

mushroom patch

Re: Smart AI vs Expensive AI
« Reply #25 on: June 29, 2014, 04:25:23 AM »
Path planning shouldn't require cheating. A single computation, while potentially expensive, yields enough information for many turns of movement if you're using the usual algorithms...

Kevin Granade

Re: Smart AI vs Expensive AI
« Reply #26 on: June 29, 2014, 05:39:29 AM »
I don't see a good reason not to give the monster full wall data unless the player is manipulating it, and even then, you treat the player's modifications as an exception rather than doing FoV for everything.

reaver

Re: Smart AI vs Expensive AI
« Reply #27 on: June 29, 2014, 07:43:43 AM »
Path planning shouldn't require cheating. A single computation, while potentially expensive, yields enough information for many turns of movement if you're using the usual algorithms...

Well, path planning for every single monster, using each monster's proper visibility (history of what has been visible, what's currently visible), can get quite expensive... unless you have 5 monsters on the map or something. Maybe you know a magic algorithm that does the above, but I do not.

mushroom patch

Re: Smart AI vs Expensive AI
« Reply #28 on: June 29, 2014, 09:12:48 AM »
Path planning shouldn't require cheating. A single computation, while potentially expensive, yields enough information for many turns of movement if you're using the usual algorithms...

Well, path planning for every single monster, using each monster's proper visibility (history of what has been visible, what's currently visible), can get quite expensive... unless you have 5 monsters on the map or something. Maybe you know a magic algorithm that does the above, but I do not.

I fail to see what monster vision has to do with pathfinding. Do you decide how to get places based only on what you can see at the moment? I know I don't. Monsters live in dungeons. Assuming they have perfect information about dungeon layouts is totally reasonable and realistic.

Unless you're programming for a PDP-6, running something like A* on average once per 10 monster turns (and remember, turns are probably not the smallest subdivision of time in a reasonable roguelike) with 100 or so monsters isn't going to cause your magnetic cores to catch fire.
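For a sense of scale, a plain grid A* is short enough to sketch. This is the generic textbook version, not anyone's actual implementation; the grid convention (grid[y][x] True means walkable, 4-way movement) is an assumption. The key property is the one claimed above: one search produces a whole path that a monster can follow for many turns before replanning.

```python
# Grid A* with a Manhattan heuristic (admissible for 4-way movement).
import heapq

def astar(grid, start, goal):
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), start)]   # (estimated total cost, tile)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            # Walk the parent links back to reconstruct the full path.
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                new_cost = cost[cur] + 1
                if nxt not in cost or new_cost < cost[nxt]:
                    cost[nxt] = new_cost
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    return None  # goal unreachable
```

At roguelike map sizes, the search visits at most a few thousand tiles, and amortized over the length of the returned path the per-turn cost is tiny even with a hundred monsters.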

reaver

Re: Smart AI vs Expensive AI
« Reply #29 on: June 29, 2014, 05:24:08 PM »
I fail to see what monster vision has to do with pathfinding. Do you decide how to get places based only on what you can see at the moment? I know I don't. Monsters live in dungeons. Assuming they have perfect information about dungeon layouts is totally reasonable and realistic.

Unless you're programming for a PDP-6, running something like A* on average once per 10 monster turns (and remember, turns are probably not the smallest subdivision of time in a reasonable roguelike) with 100 or so monsters isn't going to cause your magnetic cores to catch fire.

Come on, ditch the same old story for a second: you might want to create some AI that tries to explore the dungeon like you do, part of a team or not. Perhaps you're making Hunger Games: The Roguelike and all your 100 players need to have fair vision. I'm talking about the few cases (not all monsters everywhere in the game) where you want fair vision for exploration. My bad for using the term "monster" for that.

Anyway, yeah: referring to A* with a known layout, sure, that's easy and fast.