### Author Topic: Smart AI vs Expensive AI  (Read 47466 times)

#### Omnivore

• Rogueliker
• Posts: 154
• Karma: +0/-0
##### Re: Smart AI vs Expensive AI
« Reply #60 on: July 02, 2014, 11:41:52 PM »
Ok, feeling stupid, consider:

Code: [Select]
```
1..#...
#..#.2.
3..+...
...#...
```
Now if I understood your explanation correctly, opening the door ('+') would mark cells 1, 2, and 3 as visible.  Yet even with the door open, 1 and 3 are not visible to each other and neither is 2 visible from position 1.  I'm missing something.

I thought that, working with pairs, you'd have to recalc everything that can see beyond the door as well as the door itself?
« Last Edit: July 02, 2014, 11:54:21 PM by Omnivore »

#### reaver

• Rogueliker
• Posts: 207
• Karma: +0/-0
##### Re: Smart AI vs Expensive AI
« Reply #61 on: July 02, 2014, 11:59:40 PM »
Ok, feeling stupid, consider:

Code: [Select]
```
1..#...
#..#.2.
3..+4..
...#...
```
Now if I understood your explanation correctly, opening the door ('+') would mark cells 1, 2, and 3 as visible.  Yet even with the door open, 1 and 3 are not visible to each other and neither is 2 visible from position 1.  I'm missing something.

By opening the door, the way I described it above, you will only update pairs that have (+) as one of their endpoints. So (1,2), (2,3), (1,3), etc. are not going to change. But as you can see, my assumption that the update only needs to recalculate a single FoV is wrong: (3,4) is now visible, where it was not before.

So, as a correction, in order to do a proper update, if (+) changes its obstacle status, we update the bitbuffer as follows:
Code: [Select]
```
for each point 'p' around '+' with distance to '+' < maxSightRadius:
    p_disk = all points around 'p' with distance to 'p' < maxSightRadius
    for each point 'p2' in p_disk:
        is_visible = LoS(p, p2)
        store_to_bitbuffer(p, p2, is_visible)
```
So updates are costlier, but not massively so, unless you have a really big maxSightRadius.
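A runnable sketch of the update above, under assumptions not spelled out in the post: Bresenham for LoS, Chebyshev distance for the sight radius, and a plain dict keyed by cell pairs standing in for the packed bit buffer. All names (`los`, `disk`, `update_around`) are illustrative.

```python
# Sketch of the pair-visibility update. Assumptions: Bresenham LoS,
# Chebyshev sight radius, dict-of-pairs in place of a real bit buffer.
GRID = [
    "1..#...",
    "#..#.2.",
    "3..+4..",
    "...#...",
]
W, H = 7, 4
MAX_SIGHT = 4

def blocked(cells, x, y):
    return cells[y][x] in "#+"  # walls and closed doors block sight

def los(cells, a, b):
    """Bresenham walk from a to b; True if no intermediate cell blocks."""
    (x, y), (x1, y1) = a, b
    dx, dy = abs(x1 - x), abs(y1 - y)
    sx, sy = (1 if x < x1 else -1), (1 if y < y1 else -1)
    err = dx - dy
    while (x, y) != (x1, y1):
        if (x, y) != a and blocked(cells, x, y):
            return False
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return True

def disk(c):
    """All cells within MAX_SIGHT (Chebyshev) of c."""
    cx, cy = c
    return [(x, y) for y in range(H) for x in range(W)
            if max(abs(x - cx), abs(y - cy)) <= MAX_SIGHT]

def update_around(cells, changed, bitbuffer):
    """Recompute every pair with an endpoint within sight of the changed cell."""
    for p in disk(changed):
        for q in disk(p):
            bitbuffer[(p, q)] = los(cells, p, q)

# Full build, then open the door at (3, 2) and do the local update.
cells = [list(row) for row in GRID]
pairs = {}
for y in range(H):
    for x in range(W):
        for q in disk((x, y)):
            pairs[((x, y), q)] = los(cells, (x, y), q)
cells[2][3] = "."              # open the door
update_around(cells, (3, 2), pairs)
```

In the example grid, the pair (3, 4) — cells (0, 2) and (4, 2) — flips to visible after the update, even though neither endpoint is the door itself, which is exactly the case the correction handles.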

#### Omnivore

• Rogueliker
• Posts: 154
• Karma: +0/-0
##### Re: Smart AI vs Expensive AI
« Reply #62 on: July 03, 2014, 12:04:46 AM »
Makes sense now, thanks.

#### mushroom patch

• Rogueliker
• Posts: 554
• Karma: +0/-0
##### Re: Smart AI vs Expensive AI
« Reply #63 on: July 03, 2014, 12:30:07 AM »
Thanks again, reaver. Interesting data.

• Rogueliker
• Posts: 83
• Karma: +0/-0
##### Re: Smart AI vs Expensive AI
« Reply #64 on: July 03, 2014, 04:40:04 AM »
Fascinating stuff; unfortunately I suspect DDA is a pathologically poor fit for this algorithm:

• Max view distance is 60.
• Mostly open environments.
• Partial obscuration.
• Don't care much about monster vision.
• Maps are tiled, so we'd need to shift and partially regenerate the cache frequently.
• Frequent visibility changes (smoke and other obscurants).
• Special reasons*

The side issue of the complexity of doing Bresenham to all tiles reminded me I had generated this while looking into it previously: https://gist.github.com/kevingranade/5153746 and related https://gist.github.com/kevingranade/5678549
Each number is the count of visits to that cell when drawing a line to each point on the border.  Obviously this was unacceptable: it had terrible artifacts in addition to being not all that fast.  It's a pretty pattern though ^_^
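For reference, a small reconstruction (my own sketch, not the gist's actual code) of how such a visit-count map is produced: draw a Bresenham line from the center to every border cell and tally how often each cell gets touched.

```python
# Reconstruction sketch (not the gist code): count how many of the
# Bresenham lines from the center to each border cell visit each cell.
def bresenham(a, b):
    (x, y), (x1, y1) = a, b
    dx, dy = abs(x1 - x), abs(y1 - y)
    sx, sy = (1 if x < x1 else -1), (1 if y < y1 else -1)
    err = dx - dy
    points = []
    while True:
        points.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return points

def visit_counts(radius):
    size = 2 * radius + 1
    counts = [[0] * size for _ in range(size)]
    border = [(x, y) for y in range(size) for x in range(size)
              if x in (0, size - 1) or y in (0, size - 1)]
    for b in border:
        for x, y in bresenham((radius, radius), b):
            counts[y][x] += 1
    return counts
```

Cells near the center are visited by nearly every line while cells further out are visited unevenly, which is both the pretty pattern and the source of the visibility artifacts.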

In the end I went with a recursive shadowcasting algorithm.  You've reminded me I've been meaning to bit-pack the working data for that; the overhead of the additional addressing operations is nearly guaranteed to be trivial compared to being able to fit more of your working set in cache.
Don't forget, your working set for a very fast algorithm should fit in your processor's data cache, not just RAM.
For general data, don't worry about it too much, but if you're iterating over something a LOT, saving bits matters.
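To illustrate the bit-packing point, here is a minimal sketch (class name and layout are mine; in C++ you'd likely reach for `std::bitset` or manual `uint64_t` words instead): a boolean grid stored at one bit per cell rather than one-plus bytes per bool.

```python
# Minimal bit-packed boolean grid: 1 bit per cell instead of >= 1 byte,
# so roughly 8x more of the working set fits in the data cache.
class BitGrid:
    def __init__(self, w, h):
        self.w = w
        self.bits = bytearray((w * h + 7) // 8)

    def set(self, x, y, value):
        i = y * self.w + x
        if value:
            self.bits[i >> 3] |= 1 << (i & 7)
        else:
            self.bits[i >> 3] &= ~(1 << (i & 7)) & 0xFF

    def get(self, x, y):
        i = y * self.w + x
        return (self.bits[i >> 3] >> (i & 7)) & 1 == 1
```

A 100x100 visibility grid is 1250 bytes this way, comfortably inside L1 cache, versus 10 KB or more as plain bools.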

*Post coming soon about an interesting FoV hack I pulled off in DDA

#### reaver

• Rogueliker
• Posts: 207
• Karma: +0/-0
##### Re: Smart AI vs Expensive AI
« Reply #65 on: July 03, 2014, 08:14:56 AM »
Fascinating stuff; unfortunately I suspect DDA is a pathologically poor fit for this algorithm
...
Indeed, it's about as bad a fit as can be, given your requirements. For such large grids, you'd be better off with multi-resolution grids. Or I'd start by looking at what RTS games do, where they have even larger grids and view distances.
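A hedged sketch of what a two-level (multi-resolution) grid could look like — the block size and function names are my own, not from the post: each coarse cell records whether its block of fine cells contains any wall, so a line-of-sight walk can stride through fully open regions without per-cell checks.

```python
# Two-level occlusion grid sketch. COARSE and all names are assumptions.
COARSE = 4  # fine cells per coarse cell side

def build_coarse(fine):
    """coarse[cy][cx] is True iff that block contains at least one wall."""
    h, w = len(fine), len(fine[0])
    ch, cw = (h + COARSE - 1) // COARSE, (w + COARSE - 1) // COARSE
    coarse = [[False] * cw for _ in range(ch)]
    for y in range(h):
        for x in range(w):
            if fine[y][x] == "#":
                coarse[y // COARSE][x // COARSE] = True
    return coarse

def any_obstacle_near(coarse, x, y):
    """Cheap rejection test: if this fine cell's coarse block holds no wall,
    a LoS walk can skip ahead without inspecting individual fine cells."""
    return coarse[y // COARSE][x // COARSE]
```

The same idea nests to more levels for the very large grids and view distances RTS maps deal with.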

*Post coming soon about an interesting FoV hack I pulled off in DDA

/popcorn

#### mushroom patch

• Rogueliker
• Posts: 554
• Karma: +0/-0
##### Re: Smart AI vs Expensive AI
« Reply #66 on: July 03, 2014, 12:03:40 PM »
Indeed, it's about as bad a fit as can be, given your requirements. For such large grids, you'd be better off with multi-resolution grids. Or I'd start by looking at what RTS games do, where they have even larger grids and view distances.

Here's another can of worms: has anyone ever tried to do line of sight/field of view in a roguelike using binary space partitions in two dimensions, the way the DOOM engine did? I've always thought you could get a lot of mileage out of this approach if you're dealing with large, complex rooms. It should be pretty damn fast. On the other hand, it's a pretty complicated undertaking to code something like that.

(In particular, this would involve a non-grid based internal map representation, with walls defined by line segments, etc.)
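To make the idea concrete, here is a hedged sketch of the geometric core: with walls stored as 2-D line segments, visibility between two points reduces to segment-segment intersection tests. A real BSP engine would walk the tree front-to-back rather than scanning every wall; this brute-force version only illustrates the test itself.

```python
# Segment-based LoS sketch (illustrative, not the DOOM engine's actual
# traversal): a sight line is clear iff it properly crosses no wall segment.
def cross(ox, oy, ax, ay, bx, by):
    """2-D cross product of (a - o) and (b - o): sign gives b's side of o->a."""
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def segments_intersect(p1, p2, p3, p4):
    """True iff segment p1-p2 properly crosses segment p3-p4."""
    d1 = cross(*p3, *p4, *p1)
    d2 = cross(*p3, *p4, *p2)
    d3 = cross(*p1, *p2, *p3)
    d4 = cross(*p1, *p2, *p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def can_see(a, b, walls):
    """walls: list of ((x0, y0), (x1, y1)) segments."""
    return not any(segments_intersect(a, b, w0, w1) for w0, w1 in walls)
```

The appeal for large, complex rooms is that cost scales with the number of wall segments the sight line could cross, not with the number of grid cells it covers.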
« Last Edit: July 03, 2014, 12:05:58 PM by mushroom patch »