Author Topic: Modeling the game world- Sensory Systems  (Read 11993 times)

requerent

  • Rogueliker
  • ***
  • Posts: 355
  • Karma: +0/-0
    • View Profile
Modeling the game world- Sensory Systems
« on: April 23, 2013, 04:52:55 AM »
I'm working on creating a simple but thorough framework for roguelikes so that I can prototype a few ideas that have been jumbling around in my head. It's a fairly ambitious project, so I'm working to abstract as many concepts as possible into a simple construction.

I typically design feedback first, with a central focus on the UI. If a feature has no UI or yields no feedback, then it really doesn't matter anyway. In this spirit, I've begun working on an abstraction of agent perception, which is used for modeling AI.



In this model, the agent is both a controlling intelligence and an entity in the environment. Percepts are the information the entity can interpret; from them, the agent evaluates and selects from its actuators how it will interact with the environment. I'm trying to reverse-engineer real-world logic into a sensible, easy-to-work-with abstraction. Within the agent, we can break this down even further into Sensory Systems and Sensations. The Sensory Systems describe what raw data, or Sensations, an agent can acquire from the environment, and to what degree.
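A rough sketch of that loop, just to pin the idea down (all of these names are hypothetical, not part of any existing library):

Code: [Select]
#include <vector>

struct Percept { /* raw data acquired from the environment */ };
struct Action  { /* something an actuator can carry out */ };

// The agent is both an entity (it senses) and an intelligence
// (it evaluates percepts and drives its actuators).
class Agent {
public:
    virtual std::vector<Percept> sense() = 0;                            // entity side
    virtual Action evaluate(const std::vector<Percept>& percepts) = 0;   // intelligence side
    virtual void act(const Action& action) = 0;                          // actuators
    virtual ~Agent() {}
};

// One turn of the perceive-evaluate-act loop.
void tick(Agent& agent) {
    agent.act(agent.evaluate(agent.sense()));
}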

The five human Sensory Systems:
Vision - Sense of sight
Taction - Sense of touch
Audition - Sense of sound
Olfaction - Sense of smell
Gustation - Sense of taste

These Sensory Systems detect raw input for the following Sensations (in humans; other animals have different senses for different sensations):
Photoreception - Brightness and Color of light. (Vision)
Chemoception - Chemicals. (Olfaction, Gustation)
Nociception - Pain (All)
Electroreception - Electrical signals (All, not trusted by instinctual brain)
Mechanoreception - Physical interaction, including sound (Audition, Taction)
Thermoreception - Temperature (Taction)
Proprioception - Kinesthetic sense (Taction)
Equilibrioception - Balance (Audition)
Magnetoreception - Magnetic fields (Vision, not trusted by instinctual brain)
Chronoception - Time and circadian rhythms (All via Zeitgebers, mainly Vision via daylight)

Note: It's interesting that our pleasure from food is derived from the intersection of chemoception in our olfactory and gustatory systems: if both the taste and the smell converge, we know the food is safe to eat, but if they diverge, it may not be, and we find it unpleasant.


Is it important to distinguish Sensory Systems from Sensations? I think so, as we may want to utilize this abstraction layer for varying effects. In a simple game we may have just a single Sensory System and a single Sensation (which describes most roguelikes), but we should be able to add and map as many as we want without problems. We also want the sensing abilities of each entity to vary in interesting ways: creating the distinction allows us to feed emissions/Sensations into the model and let each entity's Sensory Systems gather the data.

We can create a mapping of Sensations to Sensory Systems, where the Sensory System describes an individual's ability to acquire raw data from the environment. For example, Photoreception has two basic properties, Brightness and Wavelength, which are typically defined over a spectrum; Vision describes the range of brightness and wavelength that an entity can detect. We typically can't improve these Sensory Systems without artificial means, but we can improve our ability to evaluate these Sensations. Ultimately, we need to process these sensations before delivering them to the UI or AI for actuation, so it seems useful to abstract how sensations are evaluated. I've come up with three basic super-modes.
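Before getting to those modes, here is a minimal sketch of the Sensation-to-Sensory-System mapping just described (the names and layout are hypothetical, not a fixed design):

Code: [Select]
#include <map>

enum class Sensation { Photoreception, Chemoception, Mechanoreception, Thermoreception };

// The detectable range of one property of a Sensation. A fuller version
// would key on (Sensation, property) pairs, so Photoreception could carry
// both a brightness range and a wavelength range.
struct Range { float min, max; };

struct SensorySystem {
    std::map<Sensation, Range> sensitivity;   // what this system can acquire

    // A stimulus only registers if it falls inside the detectable range.
    bool detects(Sensation s, float value) const {
        auto it = sensitivity.find(s);
        return it != sensitivity.end()
            && value >= it->second.min && value <= it->second.max;
    }
};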

Cognition - Conscious analysis of data.
Intuition - Subconscious inference of data.
Instinction - Unconscious mapping of data to evaluation.

While the quality of input can't be improved (except by artificial means), our processing of that data can be trained. It may not be important to have more than one evaluation mode in a game, but they help rationalize certain elements of it. One possible application involves the perception of ghosts: a ghost may provide little raw data to analyze or react to, but we may be able to intuit it regardless, not because our senses are strong, but because we are sensitive to processing subtle influences. A few examples to emphasize the distinction (a sketch follows them below):
Sympathy - Cognitive. We consciously rationalize how another person's state would feel (sensation reasoned within our imagination).
Empathy - Intuitive. We feel what another person's state actually is (sensation mapped to emotion).
Fear Sense - Instinctive. We can innately detect, on a continuum, how afraid a person is through a mapping from our sensory input. It doesn't provide information, just automatic reactions (sensation mapped to reaction): a chill down the spine, or a surge of hormones.
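To make the distinction concrete in code (again, hypothetical names): cognition and intuition produce information with a confidence attached, while instinct maps straight to a reaction with no information at all:

Code: [Select]
#include <functional>
#include <string>

struct SensationData { float intensity; /* plus whatever the sense provides */ };

struct Evaluation {
    std::string info;   // what the agent believes it perceived ("goblin", "a bright flash")
    float confidence;   // how far the UI/AI should trust it; misinformation lives here too
};

// Cognition and Intuition yield information the agent can reason about;
// Instinction bypasses that and triggers a reaction directly
// (e.g. the chill down the spine from the Fear Sense example).
using CognitiveEval = std::function<Evaluation(const SensationData&)>;
using IntuitiveEval = std::function<Evaluation(const SensationData&)>;
using InstinctEval  = std::function<void(const SensationData&)>;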

These concepts overlap in application, but different evaluations and weights may provide information that incentivizes behaving in a particular way. Since the player's avatar is the aspect of the player-Agent that acquires information from the game world, it's also partially the Agent's job to interpret that data. From a UI point of view, providing the player with raw sensation data makes them responsible for interpreting its meaning. While this can be very interesting, we typically want the Agent's abilities, stats, skills, qualities, etc., to provide the information. It could be a lot of fun to play a character that is manically afraid of all things, especially if the player doesn't know this. I.e., instead of peaceful townsfolk, you suspect that they want to dismember you in a ritual sacrifice to their bloodthirsty deity. Okay, that may be TMI, but the avatar could fail to properly interpret their friendliness and trick the player into slaughtering them all. Evaluation includes misinformation, which can be a lot of fun.

Another, more sense-related example is understanding why a sensation exists. The Aurora Borealis is the result of particle radiation (primarily from the sun) slipping past the earth's magnetosphere and interacting with the atmosphere. Suppose our avatar sees a flash of light: how is the avatar to evaluate that information so the player can make reasonable decisions from it? The player will always be able to guess, but a well-designed game will not hand too great an advantage to an experienced player (we don't want features to become arbitrary at different player skill levels). Is it a magical spell? Divine judgement? A flash of magnesium? Bright light has meaning relative to that avatar.

Telepathy could be rationalized without adding new senses or sensations, but as a property within an intuitive evaluation mode. I.e., suppose Telepathy is derived from Magnetoreception. The fields emitted by cognitive beings (higher wavelengths, or something) may have subtle effects on the ambient magnetic fields, and with enough training we might develop an intuition about these subtle fluctuations, thereby inferring the presence of nearby entities. We may be able to evaluate this information further to deduce what those entities are thinking. In many ways, cognition, intuition, and instinction just describe different facets of the same ideas.

Creating meaningful evaluation modes really just depends upon how senses, sensations, and other factors are mapped together. I probably wouldn't ever try to implement a realistic sensory model, but I thought the abstraction might be useful to others. There are some simple graphical techniques we can use to convey this data: 'X's are unknown, and any further letter/hue/saturation increases specificity: 'q'uadruped, 'd'og, 'p'ig, etc. Evaluation modes have much more to do with which emission patterns correspond to which properties of creatures and objects; that is, the evaluation modes are what tell us it's a humanoid or a goblin, friendly or hostile, dangerous or pathetic, etc.
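The glyph idea is easy to sketch: pick the symbol by how far evaluation got (the thresholds here are arbitrary):

Code: [Select]
// 'X' = unknown; more successful evaluation yields a more specific glyph.
// Hue/saturation could be varied the same way.
char glyphFor(float confidence) {
    if (confidence < 0.25f) return 'X';   // something is there, nothing more
    if (confidence < 0.75f) return 'q';   // "some quadruped"
    return 'd';                           // "a dog": fully evaluated
}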

To summarize:
Entities produce emissions; emissions are detected by sensory systems in the form of sensations, which are then rationalized by evaluation modes and presented to the UI or AI for decision making. Sensations that can't be completely rationalized are provided as raw data to the UI in a form relevant to that Sensory System (e.g., if we hear something but don't know what it is, we might notify the map, either with a direction, a specific location, or a general location, and maybe the type and intensity of the sound as well; if we hear it over time, we may be able to filter the emissions to improve the specificity of the evaluations).
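As a toy end-to-end pass showing the shape of that pipeline (every type and number here is a stand-in):

Code: [Select]
#include <cstdio>

struct Emission   { float intensity; };    // produced by an entity
struct Sensation  { float intensity; };    // detected by a sensory system
struct Evaluation { const char* info; float confidence; };

Sensation sense(const Emission& e) { return Sensation{ e.intensity }; }

// The evaluation mode rationalizes the sensation, possibly only partially.
Evaluation evaluate(const Sensation& s) {
    if (s.intensity > 0.5f) return Evaluation{ "goblin", 0.9f };
    return Evaluation{ "something", 0.1f };
}

int main() {
    Evaluation ev = evaluate(sense(Emission{ 0.7f }));
    if (ev.confidence > 0.5f) std::printf("You see a %s.\n", ev.info);
    else                      std::printf("You hear %s nearby.\n", ev.info);  // raw-ish data to the UI
}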

On the most mundane level, this is trivially implemented with singletons. Brogue's evaluation mode is total: anything you can detect, you understand, but your Sensory Systems are limited to vision, clairvoyance, and telepathy. Your emission to the enemy takes the form of a heat map, while their sensory system is described by their scent attribute. Stealth and dark places reduce your emission of heat (not formally heat, but an abstraction that describes your overall emission). You have perfect vision apart from LOD in the form of light, so anything in your LOS/LOD you have perfect information about.
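That abstraction might be sketched like this (this is not Brogue's actual code; the numbers are purely illustrative):

Code: [Select]
// The player's overall emission ("heat"), scaled down by darkness and
// stealth; a monster detects the player when the scent deposited on a
// tile meets its scent attribute.
float playerEmission(float base, bool stealthy, float lightLevel) {
    float e = base * lightLevel;   // dark places reduce the emission
    if (stealthy) e *= 0.5f;       // stealth reduces it further
    return e;
}

bool monsterSmells(float tileScent, float scentAttribute) {
    return tileScent >= scentAttribute;   // per-monster sensitivity
}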

I imagine almost any roguelike could be specified using this abstraction, in a manner that is easy to implement, easy for an AI to use, and easy to communicate to the player.

guest509

  • Guest
Re: Modeling the game world- Sensory Systems
« Reply #1 on: April 23, 2013, 05:08:57 AM »
Holy crap dude!

My sensory system is a bit more, um, well here:

  if the_player is on_screen then move_toward_player.

 :)

requerent

  • Rogueliker
  • ***
  • Posts: 355
  • Karma: +0/-0
    • View Profile
Re: Modeling the game world- Sensory Systems
« Reply #2 on: April 23, 2013, 05:52:08 PM »
Quote from: guest509 on April 23, 2013, 05:08:57 AM
Holy crap dude!

My sensory system is a bit more, um, well here:

  if the_player is on_screen then move_toward_player.

 :)

Ah Jo! That's not a Sensory System, that's a heuristic!  :P

guest509

  • Guest
Re: Modeling the game world- Sensory Systems
« Reply #3 on: April 23, 2013, 09:39:15 PM »
Well man I had to actually google 'Heuristic', so that should tell you where I'm at. :-)

I can still make an okay game though.

requerent

  • Rogueliker
  • ***
  • Posts: 355
  • Karma: +0/-0
    • View Profile
Re: Modeling the game world- Sensory Systems
« Reply #4 on: April 23, 2013, 11:25:23 PM »
Quote from: guest509 on April 23, 2013, 09:39:15 PM
Well man I had to actually google 'Heuristic', so that should tell you where I'm at. :-)

I can still make an okay game though.

None of these 'words' matter-- it's just a way for us to intimidate ourselves out of actually making anything.

naughty

  • Rogueliker
  • ***
  • Posts: 59
  • Karma: +0/-0
    • View Profile
Re: Modeling the game world- Sensory Systems
« Reply #5 on: April 24, 2013, 08:08:04 AM »
This reminds me of an AI model called the Subsumption Architecture, which a colleague used many moons ago on an unreleased game called Dino Hunter (shame it got canned; it was the best-playing game I've ever worked on, mainly due to how good the AI was).

The idea was originally put forward by an MIT AI researcher called Rodney Brooks. "Cambrian Intelligence: The Early History of the New AI" is a good book he wrote on the subject.

Anyway, the central idea is that most AI models put cognition into a big box between perceptions and actuators. What he proposed was to break the big box into modules, each of which could use any of the gathered perceptual data and also send outputs to the actuators. The modules are totally separate from each other and in theory could run in parallel.

The modules end up sounding like instincts; in Dino Hunter we had FightOrFlight, ObstacleAvoidance, Hunting and Flocking modules, for example. All the modules had access to any perceptions they wanted, but also to all the actuators. The modules actually voted on the actuators: for example, the FightOrFlight module might vote to run away from the T-Rex behind you, while the ObstacleAvoidance module votes to go around the tree you're about to run into. Normally you can just lerp between the actuator votes.

There was a hierarchy to the modules as well. FightOrFlight and ObstacleAvoidance were very high-priority modules and could 'veto' or even temporarily shut off other modules. They represent very base instincts, so it makes sense that they can do that.
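A stripped-down sketch of that voting scheme (module names aside, the blending and veto details here are my guess at the shape, not the code we shipped):

Code: [Select]
#include <vector>

struct Vec2 { float x, y; };

// Each module votes for a movement direction with a weight; high-priority
// modules (FightOrFlight, ObstacleAvoidance) may also veto everything else.
struct Vote { Vec2 dir; float weight; bool veto; int priority; };

Vec2 blendVotes(const std::vector<Vote>& votes) {
    // Honour the highest-priority veto, if any.
    const Vote* vetoer = nullptr;
    for (const Vote& v : votes)
        if (v.veto && (!vetoer || v.priority > vetoer->priority)) vetoer = &v;
    if (vetoer) return vetoer->dir;

    // Otherwise "lerp between the actuator votes": a weighted average.
    Vec2 out{ 0.0f, 0.0f };
    float total = 0.0f;
    for (const Vote& v : votes) {
        out.x += v.dir.x * v.weight;
        out.y += v.dir.y * v.weight;
        total += v.weight;
    }
    if (total > 0.0f) { out.x /= total; out.y /= total; }
    return out;
}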

...
Is it important to distinguish Sensory Systems from Sensations? I think so, as we may want to utilize this abstraction layer for varying effects.
...

This is what we had to do. The main prey in the game were pigs and deer (not very accurate for a dinosaur game, but we were in pre-prod, and squealing pigs are just too funny). We had blind, deaf and 'unable to smell' actors, so we had to abstract what was perceived away from how it was perceived. We also had big dinosaurs that could eat the player.

Perceptions ended up being quite simple; it was something like:

Code: [Select]
struct Vec3 { float x, y, z; };  // Assumed: a simple 3-component vector type.

struct Perception {
    enum {
        THREAT,
        PREY,      // Grass or another dinosaur, depending on whether you were a herbivore or carnivore.
        ALLY,      // The prey animals could flock, so they needed to know where their friends are or were.
        WATER,
        OBSTACLE,
    } type;
    Vec3 direction;   // In local space to the perceiving actor.
    Vec3 velocity;
    float distance;   // Could be negative to indicate unknown distance.
    float value;      // A bit hacky, but used to represent relative threat or the nutritional value of food.
};

We had systems that processed all the sounds, lines of sight, and something like an influence map for smells, and these would create perceptions for the actors depending on their senses, e.g. deaf actors didn't get any perceptions from sounds.

We also had the concept of perceptions that could linger for a while, e.g. when a herd of pigs was eating they couldn't see each other (they were looking at the ground) but remembered where the others were for a while.
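In code that was roughly a timestamped memory per actor. A sketch along those lines, reusing the Perception struct above (the time-to-live value is made up):

Code: [Select]
#include <algorithm>
#include <vector>

struct Remembered {
    Perception perception;   // the Perception struct from above
    float age;               // seconds since it was last refreshed
};

struct PerceptionMemory {
    std::vector<Remembered> items;
    float ttl = 5.0f;        // how long a stale perception lingers

    void update(float dt) {
        for (Remembered& r : items) r.age += dt;
        items.erase(std::remove_if(items.begin(), items.end(),
                        [this](const Remembered& r) { return r.age > ttl; }),
                    items.end());
    }
};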

Now I'm all nostalgic for what could have been; that game was great.

guest509

  • Guest
Re: Modeling the game world- Sensory Systems
« Reply #6 on: April 24, 2013, 09:18:15 AM »
In KlingonRL all nonplayer ships have 1 of 2 states. See_Player = 0 or 1.

If the ship has weapons, then chase the player and try to bump attack.

If not, then run away, and of course avoid any obstacles while doing so.

So basically it's the same thing.  :)

requerent

  • Rogueliker
  • ***
  • Posts: 355
  • Karma: +0/-0
    • View Profile
Re: Modeling the game world- Sensory Systems
« Reply #7 on: April 24, 2013, 05:10:03 PM »
@Naughty, that's awesome! Exactly the sort of model I'm trying to get at. I was thinking that evaluation modes determine not only the type of information the environment reveals but also how much that information is trusted. We can then weight heuristic voting systems based upon those evaluations. Obviously a player just gets information, but I think being able to easily implement misinformation has some interesting value for enemy AI.


Lingering emissions are definitely important. I want entities to use their evaluation modes to map the relationship between the environment and the objects moving through it, so that things like tracking can emerge from how the environment is changed and evaluated. When I walk through an area, the degree to which I modify the surrounding space is also the degree to which another entity may notice unnatural changes. I think it's safe to abstract lingering emissions into the tile or area where they occurred. There are some tricky things to figure out, but weights and voting systems look like the best way to go.
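A minimal sketch of what I mean, with hypothetical names: moving entities stamp emissions into tiles, the map decays them each turn, and trackers read whatever residue remains:

Code: [Select]
#include <vector>

// Each tile accumulates emission intensity per channel; stronger residue
// means a fresher trail.
struct Tile { float scent = 0.0f; float disturbance = 0.0f; };

struct Map {
    int w, h;
    std::vector<Tile> tiles;

    Map(int width, int height) : w(width), h(height), tiles(width * height) {}

    Tile& at(int x, int y) { return tiles[y * w + x]; }

    void leaveTrail(int x, int y, float strength) {
        at(x, y).scent += strength;                 // the entity modifies the space
        at(x, y).disturbance += strength * 0.5f;    // e.g. crushed grass, footprints
    }

    void decay(float rate) {                        // call once per turn, rate in (0,1)
        for (Tile& t : tiles) { t.scent *= rate; t.disturbance *= rate; }
    }
};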

Eben

  • Rogueliker
  • ***
  • Posts: 339
  • Karma: +0/-0
  • Controversializer
    • View Profile
    • SquidPony!
Re: Modeling the game world- Sensory Systems
« Reply #8 on: April 25, 2013, 06:02:24 AM »
I'm all for overly complex systems, especially regarding AI!

However, will it make your game better, or even be noticeably different from random actions? These are the two things that most often kill the interestingness of such complex systems, so they're worth paying special attention to.

As far as the complexity goes, I've been considering a linear lighting system working off sensitivity distributions to various wavelengths; have you considered that as well? There are lots of (maybe) interesting things there, like the fact that humans can't tell the difference between mixed red + blue and pure purple, but something with different receptors could... The biggest hurdle for me is that even if you figure out a good way to simulate that, you still have to use a false-color system to present the difference back to a human who can't tell the difference, on hardware that can't display the difference...
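For what it's worth, the simulation half is cheap to sketch: sample the spectrum into a few bins and make each receptor's response the dot product of emission and sensitivity (bin count and names are arbitrary). Metamers fall out for free: two different spectra can produce identical responses for one receptor set but not for another:

Code: [Select]
#include <array>

constexpr int BINS = 8;                    // coarse wavelength bins
using Spectrum = std::array<float, BINS>;  // an emission or a sensitivity curve

// Response of one receptor type to an emission: a dot product. Mixed
// red + blue and pure purple can yield the same response for human-like
// receptors while differing for a creature with other sensitivity curves.
float response(const Spectrum& emission, const Spectrum& sensitivity) {
    float r = 0.0f;
    for (int i = 0; i < BINS; ++i) r += emission[i] * sensitivity[i];
    return r;
}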

naughty

  • Rogueliker
  • ***
  • Posts: 59
  • Karma: +0/-0
    • View Profile
Re: Modeling the game world- Sensory Systems
« Reply #9 on: April 25, 2013, 08:01:57 AM »
@requerent:
...
but I think being able to easily implement misinformation has some interesting value for enemy AI.

This is the huge advantage of separating perceptions from what is perceived. For example, when playing as a herbivorous dinosaur, we could make poisonous plants be perceived as nutritious or not, depending on how good the dinosaur's sense of smell was. It's tricky to show this to the player, though; we used crude vertex-based rim lighting to indicate how 'good' food was: red meant poisonous and yellow meant good food.

Once you get to things like hallucinations for the player (when you eat dodgy plants), it's a lot more complex. We got it working, but it wasn't very tidy.

The one idea we never properly tried (because we thought it would be too expensive and cause lots of issues) was to use the perceptions directly to render things. For example, if you had bad vision you'd want objects to slowly come into focus: rough silhouettes first, then more detail getting filled in. Rendering that was not a priority at the time and was considered a very steep challenge.

Consider something that can only perceive movement (like the T-Rex in the Jurassic Park films); rendering that seemed like both a cool concept and a very tricky one.

...
There are some tricky things to figure out, but weights and voting systems look like the best way to go.

The only caveat I would mention is that it can get very complicated if you have a lot of different modules. We needed to put in quite a bit of debugging visualisation and logging to properly track down bugs. However, the overall system just feels so intuitive, at least for the kind of game we were working on.

@Eben

However, will it make your game better, or even be noticeably different from random actions? These are the two things that most often kill the interestingness of such complex systems, so they're worth paying special attention to.

It's hard to answer this as a pure player for the game I worked on, because I knew the code. However, when you're a Velociraptor hunting down a herd of pigs, you could feel that they were panicking and making mistakes. Being able to think in terms of a set of potentially conflicting, concurrently running instincts just made it a lot easier to get interesting behaviours out.

You could do this with 'simpler' systems like HFSMs or Behaviour Trees, but you'd end up creating a lot of mixed or complicated states to cover for the lack of concurrency.