

Messages - requerent

Pages: 1 ... 6 7 [8] 9 10 ... 24
106
Programming / Re: pedantic object oriented question
« on: April 26, 2013, 10:39:18 PM »
Quote
It may not be intuitive to think that way in real-world terms, but it is much cleaner OO. You ask your world to move everything that needs to be moved, just like you ask your main game loop to update everything that needs updated.


In OOP, an object should only modify its own data members. However, the Critter will typically be composed within the Map (assuming the Map is your state-managing object). The Critter needs to tell the Map where it wants to move, the Map needs to tell the Critter where it can move, and the Critter needs to look at the Map to decide where to move.

No matter where you put the move method, there is an exchange of information that obscures which object is actually managing the move.


Quote
Honestly there is not a language in existence that has a proper scope system, in my opinion. Almost all the problems I have with programming are caused by where to keep data and the juggling required to get those data to the proper functions. It's not a technical limitation; it's a conceptual one. I know I could throw any old system together and force it to work. The trick is whether it is intuitive to think about or not.

Lua works well for this IMO.

An excellent approach is to structure the game so that its data is acted upon only by pure functions, without methods.
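For example, a minimal Lua sketch of that style (the state layout and function names here are just illustrative assumptions):
Code: [Select]
-- Game state is plain data: no methods, just tables.
local state = {
  critters = {
    { x = 1, y = 1, hp = 10 },
  },
}

-- A pure function: it never mutates its arguments, it returns new data.
local function move(critter, dx, dy)
  return { x = critter.x + dx, y = critter.y + dy, hp = critter.hp }
end

-- The caller decides what to do with the result.
state.critters[1] = move(state.critters[1], 1, 0)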

107
Quote
Also, when you talk about "windows" and "UI menus", do you mean the OS's windows/menus, or menus/popups drawn inside the game? If it's the latter, then what kind of "framework" would one use that would allow for easily providing the necessary render routines to render that stuff?

Okay--- This is where things can get tricky. Any application is designed to work with some rendering medium, which is windowed by a Window Manager.

A Window Manager, for an operating system, has implementation-specific features that govern how it works. If you use the operating system's API, you can create additional windows that are tied to your application but still have window decorations. .NET, GTK, Qt, and Java, for example, provide comprehensive UI frameworks for desktop integration of your window management. This lets us manage each window from the desktop. We don't care about that crap for games, but it's useful for applications.

Within a game, we may have a side-bar screen that shows player information, another screen that shows a minimap, and another that shows the player's position in the game world. All of these are separate screens or 'windows.' However, the focus is on the player's position screen. When we open our inventory, we open a new window, focus shifts, and now we're in 'inventory item selection'-mode, or something. Point is, we don't need a complex window managing system to make this look good in the game, we just need a concise one.

I don't have anything in mind if you're set on C++-- it really just depends on what APIs you're working with. I'm sure there are some good SDL/OpenGL ones out there, but I don't know about curses. Most curses programmers, afaik, just allot portions of the screen for specific data and may not bother with a window managing abstraction...

Now that I think about it further-- it may be better to do it yourself (sorry for going back on what I just said >_<). I mean-- all you have to do is create a way to manage which OBJECT in your application has input focus. This can be pretty simple for a roguelike. Press 'i' while on the game screen and the inventory screen pops up and input focus shifts. For an RL this shouldn't be too difficult to rationalize.

You can create a 'window' or 'screen' abstract base class, and have the input manager store a reference to whichever one is active. From there, it's like your standard finite state machine.
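A rough Lua sketch of that idea (the screen names and fields are assumptions, not from any particular library):
Code: [Select]
-- Each screen supplies its own input handler and render routine.
local game_screen, inventory_screen

game_screen = {
  handle_input = function(key)
    if key == "i" then return inventory_screen end  -- shift focus
    -- ... movement keys, etc.
    return game_screen
  end,
  render = function() --[[ draw map and entities ]] end,
}

inventory_screen = {
  handle_input = function(key)
    if key == "escape" then return game_screen end  -- back to the game
    return inventory_screen
  end,
  render = function() --[[ draw the item list ]] end,
}

-- The input manager only tracks whichever screen is active.
local active = game_screen
local function on_key(key)
  active = active.handle_input(key)  -- a tiny finite state machine
  active.render()
end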

108
We don't send input to entities, we just send input to Screens, Frames, Windows, whatever you want to call them. This gets into window management-- but basically each different window may have some specialized keybindings for that window. The input manager simply maps key input to an action for whatever window has focus. In the case of actual gameplay, we aren't necessarily sending input directly to entities, but through the game window, where the action is delegated to the appropriate entity.

Honestly, it's better to learn why this is important by NOT doing it the first couple times around. Window management is a PITA that you should avoid until necessary, as it will keep you from actually working on your game.

Quote
But how does the window transfer the input to the relevant entity? Doesn't it need to know about entities to do that? Should that knowledge/dependency exist?

And doesn't any game that's more than trivial need UI menus?


Yea- but the issue of UI menus, input and windows is basically a solved problem. There isn't a good reason to do it from scratch. Do you want to make a game, or do you want to make an application framework?


As far as windowing and stuff-- basically, you pop a window open, and the top-level window is what catches input. A 'window' is just an abstraction that marks where input focus lies (and because the window also provides feedback, it communicates with the renderer too).

Input ----> Window ---->  Logic ---> Renderer ----> [Window] <----- Input

So the window is an abstraction that facilitates the interfacing of input to logic, which then results in some kind of rendering feedback.

The input manager will catch specific keys first-- like 'esc' to immediately exit or something. Then it will pass the input down to the window, which may also use it-- then it will call a method of the game object that corresponds to that key-binding.
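As a rough Lua sketch of that dispatch order (the keybindings and the game methods here are made-up assumptions):
Code: [Select]
-- Global bindings are checked first, then the focused window's.
local global_bindings = {
  escape = function(game) game.running = false end,
}

local game_window = {
  bindings = {
    h = function(game) game:move_player(-1, 0) end,  -- assumes a move_player method
    l = function(game) game:move_player(1, 0) end,
  },
}

local focused = game_window  -- whichever window currently has focus

local function dispatch(key, game)
  local action = global_bindings[key] or focused.bindings[key]
  if action then action(game) end  -- call the method bound to that key
end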

I'm not explaining it well because I don't have too much experience programming these meta-application features. A framework should do all of this for you.

109
Programming / Re: Picking The Right Language/Game Engine For My Idea
« on: April 24, 2013, 07:26:58 PM »
EVE most definitely uses Python for scripting and native code for the heavy lifting. Newer games have begun to move away from Python to Lua-- e.g. Civ4 used Python for scripting but Civ5 uses Lua-- but both are still relevant.

110
Quote
However, then that means the render system deals with whole assets and not just graphical primitives, right? But how does that jibe with what the person on that gamedev.net site was saying?

Assets ARE data structures of graphical primitives. They are a set of instructions to the Render System as to how this particular asset is produced out of graphical primitives. The supplementary data provided by the graphics component tells us what transformations are applied to that data before/after its position is evaluated in real space*.

In other words, the Render System only deals with graphical primitives, but receives instructions in the form of assets and transformations. To be clear, a transformation is typically in the form of the traditional scale, translation, and rotation matrices (but there are others that we produce from a combination of these- like shearing and... stuff >_>).

* This can get a little confusing, because our components are supposed to be shared whenever possible. We don't want to put graphical transformations on a graphics component if that same information is used by logic (such as rotation and location). HOWEVER, sometimes we actually want the logical position and the mesh position to be different-- not often, but sometimes. This just gets into implementation details. The spirit of components remains the same.

Quote
And, I suppose, the RenderSystem also stores a copy of a pointer to the AssetLibrary so it can access it, right?

Yep! You CAN store the reference to the asset on the entity instead of the key for the library, but only processes related to Asset Loading should modify that reference. Most any specialized modification or transformation of assets for a particular entity is handled in its graphics components.

Quote
Doesn't the input manager then also have to know about all possible places it can send the input? Like it has to know about the pile of entities, so it can send input there. It needs to know about the windows on the display, so it can send input there. But isn't that making too many dependencies?

We don't send input to entities, we just send input to Screens, Frames, Windows, whatever you want to call them. This gets into window management-- but basically each different window may have some specialized keybindings for that window. The input manager simply maps key input to an action for whatever window has focus. In the case of actual gameplay, we aren't necessarily sending input directly to entities, but through the game window, where the action is delegated to the appropriate entity.

Honestly, it's better to learn why this is important by NOT doing it the first couple times around. Window management is a PITA that you should avoid until necessary, as it will keep you from actually working on your game.

Quote
So then event systems are fundamentally tied to the "inheritance" method of construction, where you subclass entities to get different kinds of entities?

Yes, event systems are all about letting decoupled classes do all of the work through a messaging system. Component systems aren't that different, except that all messaging is isolated to relevant components in relevant systems- so there is no classical inheritance scheme. For game logic, it all accomplishes the same thing; components just tend to give us more transparency and simplicity in specifying the functionality of our game.

Since classes do the logic in event systems, we can easily make design mistakes-- like, say, the application of a stun effect. Where should it go? Well, the instigator should send a message to another entity to be stunned. The recipient of the stun message then determines whether it should be stunned or not (or do something else). This represents a continuum of game logic that really doesn't need to be happening in two different places.
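For contrast, here's a hedged Lua sketch of how the same stun effect could live in a single system under a component approach (the component names are invented for illustration):
Code: [Select]
-- The whole stun interaction lives in one system, not split across entities.
local function stun_system(entities)
  for _, e in ipairs(entities) do
    local stun = e.components.stun_attempt
    if stun then
      -- Resistance check and effect application happen in one place.
      if not stun.target.components.stun_immune then
        stun.target.components.stunned = { turns = stun.turns }
      end
      e.components.stun_attempt = nil  -- consume the attempt
    end
  end
end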


Nonetheless, we aren't tied into stupid hierarchies with event systems, we just have to use their power intelligently: http://en.wikipedia.org/wiki/Composition_over_inheritance

Quote
Was looking at the links you gave -- I notice the method described there has an "update" function on the systems and components, which seems it's designed for a real-time game. How does this change in a turn-based game like a roguelike?

His game engine isn't necessarily real-time, it's tick-based (by default, real-time ticks). All you would do, in the Controller System, where input/AI is mapped to actions, is throw a 'waitForPlayerAction' in there and your game is now turn-based... well, almost. You also need to update each entity through every system in turn, instead of updating each system across all entities.
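A loose Lua sketch of that loop inversion, assuming a 'waitForPlayerAction' that blocks for input and a simple list of systems:
Code: [Select]
-- Real-time style: each system sweeps all entities every tick.
local function realtime_tick(systems, entities)
  for _, system in ipairs(systems) do
    for _, e in ipairs(entities) do system.update(e) end
  end
end

-- Turn-based style: each entity resolves a full turn before the next acts.
local function turn_based_tick(systems, entities, player)
  for _, e in ipairs(entities) do
    if e == player then
      e.action = waitForPlayerAction()  -- assumed to block until the player commits an action
    end
    for _, system in ipairs(systems) do system.update(e) end
  end
end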



Quote
The biggest mistake with components IMO is to confuse game things with code objects. That leads you to the question like you had before of 'where do I put quit-game()'? Just because you have an entity system where entities are 'game things' does not mean that your source code objects are limited to describing just entities, their managers, and the systems that act on them.

His source code uses an event system for component interactions. It's okay to have both for GAME LOGIC, but you need to be clear about what produces and handles events and what gets handled in systems. A global game message might be simpler to deal with as an event, but a global game message isn't typically going to affect entities that don't have relevant components.

But yea-- you need to make a distinction between Game Logic Events, Application Events, and possibly even System Events. You can structure an entire application as a component system, but data-oriented programming is designed for data-driven logic, like a game.

111
Programming / Re: Modeling the game world- Sensory Systems
« on: April 24, 2013, 05:10:03 PM »
@Naughty, that's awesome! Exactly the sort of model I'm trying to get at. I was thinking that evaluation modes determine not only the type of information the environment reveals but also how much that information is trusted. We can then weight heuristic voting systems based upon these evaluations. Obviously, a player just gets information-- but I think being able to easily implement misinformation adds some interesting value for enemy AI.


Lingering emissions are definitely important. I want entities to be able to map, with their evaluation modes, the relationship between the environment and the objects moving through it, so that things like tracking can emerge from how the environment is changed and evaluated. When I walk through an area, the degree to which I modify the surrounding space is also the degree to which another entity may notice unnatural changes. I think it's safe to abstract lingering emissions into the tile or area in which they occurred. There are some tricky things to figure out, but weights and voting systems look like the best way to go.

112
Programming / Re: Picking The Right Language/Game Engine For My Idea
« on: April 23, 2013, 11:26:12 PM »
Quote
I thought I would drop this here, but Lua and Love2D are good choices to work with as well.

 :)

LOL- I was just looking into that-- I really like the look of it. Very smooth, clean, and succinct- with all the power of Lua right there.

113
Programming / Re: Modeling the game world- Sensory Systems
« on: April 23, 2013, 11:25:23 PM »
Quote
Well man I had to actually google 'Heuristic', so that should tell you where I'm at. :-)

I can still make an okay game though.

None of these 'words' matter-- it's just a way for us to intimidate ourselves out of actually making anything.

114
Quote
Yet something has to get that lower-level data (in this case, ASCII representation, in that case, triangles/textures/etc.) from the higher-level entity/map objects and feed it into the render/graphics system: what should it be?


An asset library is just a data structure of references of models/textures/sounds that are available to use. The keys to access that data are stored on the graphics components of your entities. Nymphaea suggests a spritebatch, which is usually just an asset library all stored in a single asset image. Your GraphicsComponent will store the key of the asset and any additional information needed to render (location, facing, textures, animation sets, etc).

Quote
So then you'd suggest to store keys, not the actual graphics, in the GraphicsComponent? (Right now it stores an actual graphic (here just an ASCII character) + position to render it at.) What would have access to this graphics library? Would the RenderSystem (mentioned below) have access to it?

Your RenderWorld routine is technically in an acceptable place. Traditional game loops just iterate through entities calling entity.update() to determine the logic of the game and entity.render() to display it.

However, this may work against the design of a component system. You should try breaking all of your logic up into systems that work only with relevant components. Your engine is then just something that manages which systems are currently active.

Consider,
Code: [Select]
if(it->second->hasComponent("graphic"))
You might create a system that stores references to all relevant components. A RenderSystem, for example, may store a list of GraphicsComponents- and just iterate through those, as it doesn't need to know anything about the rest of the entity. In this way, RenderSystem can either be instanced inside of your WorldManager (if your WorldManager handles the game loop), or be instanced in the game loop directly (or you can even thread it so that rendering is done asynchronously).

Quote
So then this "RenderSystem" is something different from what that person on the post I linked to was talking about, which could only be passed low-level graphical data (lower-level than "models", though I wasn't sure what that referred to -- if it referred to meshes, then that'd seem like individual triangles or something), which in this case would seem to be just ASCII characters, or sprites for tile graphics?

But then something has to get the graphics components off the entity to feed them into the RenderSystem. Which means there'd need to be a routine, perhaps on the RenderSystem itself, that "registers" an entity with it by putting its GraphicsComponents on the list.


Nope, it isn't different.

In 3D games, the list of triangles for a mesh will be stored in specialized data structures (really just a list of vertices, normals, and UVs). We don't need to 'copy' this information to an object before rendering it. What if two entities had the same mesh? Why would you want to store multiple copies of the same mesh in memory? We can just store the reference to that mesh on an object (or really, just the key/index of that mesh in the asset list). It isn't quite that simple though- we also want to store transformation information, material data, and other things pertinent to that particular object. It's just that in the case of the actual asset, we never store it on an object. We store transformation information in the component and the mesh in the library.


  • AssetLibrary stores Data
  • GraphicsComponent stores references to AssetLibrary and other transformation information
  • RenderSystem stores a list of references to GraphicsComponents, iterates through and draws them to the screen accordingly.

It's awkward in a roguelike where there is only a single character to store, but you may want to do it anyway to create consistency among different racial types- then you can localize color definitions within the graphics component, or create another entry in your AssetLibrary for various colors/characters to be used in consistent ways.
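A minimal Lua sketch of that arrangement (all the names and fields here are illustrative assumptions):
Code: [Select]
-- AssetLibrary: owns the data, keyed by name.
local assets = {
  goblin = { glyph = "g", color = "green" },
  dragon = { glyph = "D", color = "red" },
}

-- GraphicsComponent: stores only the key plus per-entity transform info.
local function graphics_component(asset_key, x, y)
  return { asset = asset_key, x = x, y = y }
end

-- RenderSystem: keeps references to graphics components and draws them.
local render_list = {}

local function register(component)
  render_list[#render_list + 1] = component
end

local function render_all(draw)  -- draw(glyph, color, x, y) is the backend
  for _, c in ipairs(render_list) do
    local a = assets[c.asset]  -- one copy of the asset, many users
    draw(a.glyph, a.color, c.x, c.y)
  end
end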



Quote
Quote
There is a reference to the input manager in the player's “AI component”, which is needed because for the player entity to make decisions when its it's turn to act, it needs a command from the user. But it doesn't seem right it should “know about” the input system, does it?

This is where some troublesome entanglements can happen. Suppose the player is accessing UI menus that have nothing or little to do with game logic- how should that work? Should those UI commands also be routed through the player's entity instance? Certainly not, but that doesn't mean that we can't have a reference in the component. Components are just a way to keep your game logic discrete. The player component having access to the input manager doesn't really violate this in any way. Waiting for input is part of the player's game logic, but there should be an input handler on top of that. For example,

Quote
(remember: we don't want input until the player's “turn to act” comes around, i.e. when the EVT_ENTITY_ACTION event with the actingEntity variable equaling the player entity is due)?

What if we have real-time graphical effects? Should the player be prevented from accessing the menu or quitting the game? Probably not. Look into keybinding libraries. Instead of a 'waitForKeyPress' you'll have a 'waitForAction', or something, where the input handler sends an actionID to the player's entity for actuation.

Quote
Yes, this is a simple program that doesn't have such things. In that case, though, there are essentially two different types of "time" running -- there's "real time", which determines the running graphics, and then there's "game time", which determines the time lapsing in the game world (and the events represent things happening in the game world). Real time elapses continuously (well, at the frame rate of the graphics being drawn), while "game time" elapses in a turn-wise manner. So it'd make sense to separate those two somehow.

The concern about UI menus is something I was wondering about, too, since in a bigger game I'd have menus.

You may also want to take advantage of idle processing while the player is waiting to input data. For example-- In complex AI calculations we might be able to improve the experience of the game if we're always crunching AI algorithms even while waiting for the player. Don't get carried away though- just do what makes sense.

An inputManager on top of the game will use keybindings and 'screen focus' to determine where to send the input.


Quote
Quote
Where should the handler for game events go? Not all events (EVT_QUIT is an example – there'd be others in a real game) involve an entity acting. So putting it on the entities doesn't seem to work – or should one have handlers for those kind of events on the entities, and handlers for the other in the game? But would putting it on the entities violate the component system? If so, what should we do? Have something to translate events to messages?

I think that component systems and event systems don't mesh very well. The whole point of components is to isolate where the components interact with one another. You shouldn't need messages, events, or handlers-- it should all just happen in the systems relevant to those components. If your systems are arranged in a meaningful Finite State Machine, you can pass information between systems by updating components. This doesn't mean you can't use them together-- just that when you mix them, it can be hard to tell what should go where.

Technically, EVT_QUIT could be handled by any entity- the most trivial example could be by freeing memory. Regardless, if you're going down the event path, you may want to create, at the very least, an event handling abstract base class for every object in the game to use. You can override what you need for any given object, and forget the rest. You can also create a hierarchy of abstractions to more reasonably organize handlers. You don't want to start defining classes for different entity types, just for different game objects to handle different events. This will allow you to handle events and use components in a happier way.

Quote
So what would better mesh with events? Also, the entity would free its memory when its destructor is called. When quit is fired, it means the game is shutting down, so this has to lead to an exit of the game loop.

What would better mesh with components, insofar as getting "things that happen in the game world" to go?

An event can be handled by more than one object. Or is my nomenclature off >_>. Maybe not handled by- but react to? Whatever the case, when an event occurs, an onThatTypeOfEvent method should be called on every relevant object that has one.

We may want to do something special on a particular entity for an onQuit event. What that is, who knows- but we should be able to do it if we want to.


Well- component systems do their own work themselves. The idea is that you store relevant data in components, and specific systems update those components relative to one another. You don't need events, messaging, or any of that stuff- the systems immediately handle whatever messages/events would otherwise take place. In the case that they don't, we just go to another system.

For example-- our ControllerSystem polls the controller of an entity to figure out what heuristic should govern the next action. Then we might switch to a HeuristicAnalysisSystem that runs the heuristic algorithm to determine what action should take place.
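A loose Lua sketch of that hand-off between the two systems (the component and field names are assumptions):
Code: [Select]
-- ControllerSystem: decide which heuristic governs each entity's next action.
local function controller_system(entities)
  for _, e in ipairs(entities) do
    if e.components.controller then
      e.components.pending_heuristic = e.components.controller.pick()
    end
  end
end

-- HeuristicAnalysisSystem: run the chosen heuristic to produce an action.
local function heuristic_system(entities)
  for _, e in ipairs(entities) do
    local h = e.components.pending_heuristic
    if h then
      e.components.action = h(e)       -- e.g. "move_toward_player"
      e.components.pending_heuristic = nil
    end
  end
end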

This dude does a good job of breaking it down- I stumbled upon it recently. He has numerous articles on his blog that describe how it works, though I don't really like the way he did it (I flamed him for his implementation of state management >_<). I'd read them both for some insight into the what, how, and why of components. They're well-written and very clear.

http://www.richardlord.net/blog/what-is-an-entity-framework
http://www.richardlord.net/blog/why-use-an-entity-framework

Note that this isn't a 'pure' component system, just one that does a pretty good job of hybridizing classes and components in a way that is natural for OOP style.

Quote
Quote
Also, how would one make the part of the program that handles stuff like collisions/interactions between entities

All you have to do is create a collision event when a move takes place and something already occupies the space. You then call the respective onCollide handlers for those entities and determine what should happen. Should we attack? Pick up an object? Or bump senselessly into a wall? Perhaps we don't want the player to lose their turn if they bump into a wall-- etc. Entity doesn't need any information about the map-- it just acts as a way to store data for the player's avatar and for the player to interact with the game model.

Quote
Where is said onCollide handler? If it's in the entity object, then to make it do different things for different kinds of entity we need to make subclasses of Entity, which defeats the whole idea of the component system! In which case we might as well just scrap it and go for a traditional inheritance-based entity system.

And what spawns the collision event? It has to know about the movement the entity is going to make (which is determined by AI) and the positions of all entities on the map, and detect that a collision occurs.

And how would one do that wall-bump thing, anyway, when in the current setup the loop cycles through once the action event is handled, and another one is scheduled, which marks another turn?

That's how event systems work. I mean, I've never programmed one, but I've used many event-driven APIs, and that's how they're structured.

Event Object is produced and added to a queue. Event System then takes an event and evaluates it by calling whatever respective methods for that event exist on relevant entities (which may include more entities than those colliding).

You can handle collision either in a component way or an event way- or possibly both. The event way is to make your collisionEvent and then call onCollide variants in the entities.
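A bare-bones Lua sketch of that queue-and-dispatch flow (names are illustrative, not from any engine):
Code: [Select]
local event_queue = {}

-- Produced wherever a move lands on an occupied tile.
local function push_collision(mover, blocker)
  event_queue[#event_queue + 1] = { type = "collide", mover = mover, blocker = blocker }
end

-- The event system drains the queue and calls onCollide variants.
local function pump()
  while #event_queue > 0 do
    local ev = table.remove(event_queue, 1)
    if ev.type == "collide" then
      if ev.mover.onCollide then ev.mover.onCollide(ev.mover, ev.blocker) end
      if ev.blocker.onCollide then ev.blocker.onCollide(ev.blocker, ev.mover) end
    end
  end
end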


Did you get the idea to use event systems on 3d gaming forums? We like event systems for real-time game engines because they allow us to hide a lot of really complicated shit but still code custom behaviors into our entities directly and expect it to work. If YOU are making the engine, event systems are, IMO, a PITA.

So... In a component system, game logic takes place in a system. In an event system, your events are handled in systems but the logic is relative to the implementation of an entity.

115
Programming / Re: Modeling the game world- Sensory Systems
« on: April 23, 2013, 05:52:08 PM »
Quote
Holy crap dude!

My sensory system is a bit more, um, well here:

  if the_player is on_screen then move_toward_player.

 :)

Ah Jo! That's not a Sensory System, that's a heuristic!  :P

116
Quote
Yet something has to get that lower-level data (in this case, ASCII representation, in that case, triangles/textures/etc.) from the higher-level entity/map objects and feed it into the render/graphics system: what should it be?


An asset library is just a data structure of references of models/textures/sounds that are available to use. The keys to access that data are stored on the graphics components of your entities. Nymphaea suggests a spritebatch, which is usually just an asset library all stored in a single asset image. Your GraphicsComponent will store the key of the asset and any additional information needed to render (location, facing, textures, animation sets, etc).

Your RenderWorld routine is technically in an acceptable place. Traditional game loops just iterate through entities calling entity.update() to determine the logic of the game and entity.render() to display it.

However, this may work against the design of a component system. You should try breaking all of your logic up into systems that work only with relevant components. Your engine is then just something that manages which systems are currently active.

Consider,
Code: [Select]
if(it->second->hasComponent("graphic"))
You might create a system that stores references to all relevant components. A RenderSystem, for example, may store a list of GraphicsComponents- and just iterate through those, as it doesn't need to know anything about the rest of the entity. In this way, RenderSystem can either be instanced inside of your WorldManager (if your WorldManager handles the game loop), or be instanced in the game loop directly (or you can even thread it so that rendering is done asynchronously).

Quote
There is a reference to the input manager in the player's “AI component”, which is needed because for the player entity to make decisions when its it's turn to act, it needs a command from the user. But it doesn't seem right it should “know about” the input system, does it?

This is where some troublesome entanglements can happen. Suppose the player is accessing UI menus that have nothing or little to do with game logic- how should that work? Should those UI commands also be routed through the player's entity instance? Certainly not, but that doesn't mean that we can't have a reference in the component. Components are just a way to keep your game logic discrete. The player component having access to the input manager doesn't really violate this in any way. Waiting for input is part of the player's game logic, but there should be an input handler on top of that. For example,

Quote
(remember: we don't want input until the player's “turn to act” comes around, i.e. when the EVT_ENTITY_ACTION event with the actingEntity variable equaling the player entity is due)?

What if we have real-time graphical effects? Should the player be prevented from accessing the menu or quitting the game? Probably not. Look into keybinding libraries. Instead of a 'waitForKeyPress' you'll have a 'waitForAction', or something, where the input handler sends an actionID to the player's entity for actuation.


Quote
Where should the handler for game events go? Not all events (EVT_QUIT is an example – there'd be others in a real game) involve an entity acting. So putting it on the entities doesn't seem to work – or should one have handlers for those kind of events on the entities, and handlers for the other in the game? But would putting it on the entities violate the component system? If so, what should we do? Have something to translate events to messages?

I think that component systems and event systems don't mesh very well. The whole point of components is to isolate where the components interact with one another. You shouldn't need messages, events, or handlers-- it should all just happen in the systems relevant to those components. If your systems are arranged in a meaningful Finite State Machine, you can pass information between systems by updating components. This doesn't mean you can't use them together-- just that when you mix them, it can be hard to tell what should go where.

Technically, EVT_QUIT could be handled by any entity- the most trivial example could be by freeing memory. Regardless, if you're going down the event path, you may want to create, at the very least, an event handling abstract base class for every object in the game to use. You can override what you need for any given object, and forget the rest. You can also create a hierarchy of abstractions to more reasonably organize handlers. You don't want to start defining classes for different entity types, just for different game objects to handle different events. This will allow you to handle events and use components in a happier way.

Quote
Also, how would one make the part of the program that handles stuff like collisions/interactions between entities

All you have to do is create a collision event when a move takes place and something already occupies the space. You then call the respective onCollide handlers for those entities and determine what should happen. Should we attack? Pick up an object? Or bump senselessly into a wall? Perhaps we don't want the player to lose their turn if they bump into a wall-- etc. Entity doesn't need any information about the map-- it just acts as a way to store data for the player's avatar and for the player to interact with the game model.

Quote
And in a bigger game, where you can pick stuff up, what's the best way to distinguish a “pickupable” entity from a non-pickupable one?

EdibleItemComponent, EquippableItemComponent, PickupItemComponent-- etc. An inventory management system should figure out how an item can be used/gathered based upon the information in its components. You could have just an 'ItemComponent' with all of those details.
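For instance, a hedged Lua sketch of that check, using the component names made up above:
Code: [Select]
-- An item is "pickupable" simply because it has the component.
local sword = {
  components = {
    PickupItem = { weight = 4 },
    EquippableItem = { slot = "hand" },
  },
}

-- The inventory system keys off components, not item subclasses.
local function try_pickup(entity, item)
  if item.components.PickupItem then
    table.insert(entity.inventory, item)
    return true
  end
  return false  -- walls and scenery simply lack the component
end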

117
Programming / Re: Picking The Right Language/Game Engine For My Idea
« on: April 23, 2013, 01:22:11 PM »
Implementations of the Python language have lackluster performance. Some algorithms that take a few seconds in C can take minutes in Python. Python's expressiveness makes it ridiculously easy to whip up applications very quickly, but if you want any polish you'll need to write a C library and call it from Python so that your performance-sensitive code can run native. Libtcod already has a lot of the C roguelike work done http://doryen.eptalys.net/libtcod/ and has Python bindings.


118
Programming / Modeling the game world- Sensory Systems
« on: April 23, 2013, 04:52:55 AM »
I'm working on creating a simple but thorough framework for roguelikes so that I can prototype a few that I've had jumbling around in my head. It's a fairly ambitious project, so I'm working to abstract as many concepts as possible into a simple construction.

I typically design feedback first, with a central focus on UI. If there is no UI for a feature or it does not yield any feedback, then it really doesn't matter anyway. In this spirit, I've begun working on creating an abstraction of agent perception, which is used for modeling AI.



In this model, the agent is both a controlling intelligence and an entity in the environment. Percepts are the information that the entity can interpret, from which the agent evaluates and selects, via its actuators, how it will act on the environment. I'm trying to reverse engineer the logic of the real world into a sensible, easy-to-work-with abstraction. Within the agent, we can break this down even further into Sensory Systems and Sensations. The Sensory Systems describe what raw data, or Sensations, an agent can acquire from the environment, and to what degree.

The five human Sensory Systems:
Vision - Sense of sight
Taction - Sense of touch
Audition - Sense of sound
Olfaction - Sense of smell
Gustation - Sense of taste

These Sensory Systems detect raw input for the following Sensations (in humans- other animals have different senses for different sensations),
Photoreception - Brightness and Color of light. (Vision)
Chemoception - Chemicals. (Olfaction, Gustation)
Nociception - Pain (All)
Electroreception - Electrical signals (All, not trusted by instinctual brain)
Mechanoreception - Physical interaction, including sound (Audition, Taction)
Thermoreception - Temperature (Taction)
Proprioception - Kinesthetic sense (Taction)
Equilibrioception - Balance (Audition)
Magnetoreception - Magnetic fields (Vision, not trusted by instinctual brain)
Chronoception - Time and circadian rhythms (All via Zeitgebers, mainly Vision via daylight)

Note: It's interesting that our pleasure from food is derived from the intersection of chemoception from our olfactory and gustatory systems-- If both the taste and the smell converge, we know that it is safe to eat, but if they diverge, it may not be and we find it unpleasant.


Is it important to distinguish Sensory Systems from Sensations? I think so, as we may want to utilize this abstraction layer for varying effects. In a simple game, we may just have a single Sensory System and a single Sensation (which is most roguelikes)- but we should be able to add and map as many as we want without problems. We also want the sensing abilities of each entity to vary in interesting ways- creating a distinction will allow us to input emissions/Sensations into the model and allow entity Sensory Systems to gather the data.

We can create a mapping of Sensations to Sensory Systems- where the Sensory System describes an individual's ability to acquire Raw Data from the environment. For example, Photoreception has two basic properties- Brightness and Wavelength, which are typically defined via spectrum. Vision describes the range of brightness and wavelength that an entity can detect. We typically can't improve these Sensory Systems without artificial means, but we can improve our ability to evaluate these Sensations.

Ultimately, we need to process these sensations before delivering them to the UI or AI for actuation. It then seems useful to abstract how these sensations are evaluated. I've come up with three basic super-modes.

Cognition - Conscious analysis of data.
Intuition - Subconscious inference of data.
Instinction - Unconscious mapping of data to evaluation.

While the quality of input can't be improved (except by artificial means), our processing of that data can be trained. It may not be important to have more than one evaluation mode for a game, but they help to rationalize certain elements of the game. One possible application may involve the perception of Ghosts. A ghost may provide little raw data to analyze or react to, but we may be able to intuit it regardless. Not by how strong our senses are, but by how sensitive we are to processing subtle influences. A few examples to emphasize the distinction:
Sympathy - Cognitive. We consciously rationalize how another person's state would feel (sensation reasoned within our imagination).
Empathy - Intuitive. We feel what another person's state actually is (sensation mapped to emotion).
Fear Sense - We can innately detect, on a continuum, how afraid a person is through a mapping from our sensory input. Doesn't provide information, just automatic reactions (sensation mapped to reaction)- chill down the spine, or a surge of hormones.

These concepts overlap in application, but different evaluations and weights may provide information that provides incentive to behave in a particular way. Since the player's avatar is the aspect of the player Agent that is acquiring information from the game world, it's also partially the Agent's job to interpret that data. From a UI point of view, providing the player with raw sensation data makes them responsible for interpreting the meaning. While this can be very interesting, we typically want the Agent's abilities, stats, skills, qualities, etc to provide the information. It could be a lot of fun to play a character that is manically afraid of all things, especially if the player doesn't know this. IE. Instead of a peaceful townsfolk, you suspect that they want to dismember you in a ritual sacrifice to their bloodthirsty deity-- okay, that may be TMI, but the avatar could fail to properly interpret their friendliness and trick the player into slaughtering them all. Evaluation includes misinformation-- which can be a lot of fun.

Another more sense-related example may be understanding why a sensation exists. The Aurora Borealis is the result of particle radiation (primarily from the sun) sneaking by the earth's magnetosphere and interacting with the atmosphere. Suppose our Avatar sees a flash of light- how is the avatar to evaluate that information for the player to make reasonable decisions from? The player will always be able to guess, but a well-designed game will not provide too great of an advantage to an experienced player (we don't want features to become arbitrary at different player skill levels). Is it a magical spell? Divine judgement? A flash of magnesium? Bright Light has meaning relative to that Avatar.

Telepathy could be rationalized without the addition of new senses or sensations, but as a property within an intuitive evaluation mode. IE- suppose that Telepathy is derived from Magnetoreception. The fields emitted by cognitive beings (higher wavelengths, or something) may have subtle effects on the ambient magnetic fields and, with enough training, we might be able to develop an intuition about these subtle fluctuations- thereby inferring the presence of nearby entities. We may be able to develop a way to further evaluate this information to deduce what these entities are thinking. In many ways- cognition, intuition, and instinction just describe different facets of the same ideas.

Creating meaningful evaluation modes really just depends upon how senses, sensations, and other factors are all mapped together. I probably wouldn't ever try to implement a realistic sensory model- but I thought the abstraction might be useful to others. There are some simple graphical techniques we can use to convey this data. 'X's are unknown, and any further letter/hue/saturation increases specificity: 'q'uadruped, 'd'og, 'p'ig, etc. Evaluation modes have much more to do with which emission patterns correspond to which properties of creatures and objects-- that is, the evaluation modes are what tell us it's a humanoid or a goblin, friendly or hostile, dangerous or pathetic- etc.

To summarize:
Entities produce emissions; emissions are detected by sensory systems in the form of sensations, which are then rationalized by evaluation modes and presented to the UI or AI for decision making. Sensations that can't be completely rationalized are provided as raw data to the UI in a form that is relevant to that Sensory System (IE. if we hear something but don't know what it is, we might notify the map-- either with a direction, a specific location, or a general location-- maybe the type and intensity of the sound as well-- and if we hear it over time, we may be able to filter the emissions to improve the specificity of the evaluations).
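A toy Lua sketch of that pipeline, under heavy assumptions about the data shapes:
Code: [Select]
-- An entity emits raw signals into the world.
local emission = { kind = "sound", intensity = 0.6, x = 12, y = 4 }

-- A sensory system decides whether the signal becomes a sensation.
local function hear(listener, em)
  if em.kind == "sound" and em.intensity >= listener.hearing_threshold then
    return { sense = "audition", intensity = em.intensity, x = em.x, y = em.y }
  end
end

-- An evaluation mode turns the sensation into usable information.
local function evaluate(agent, sensation)
  if not sensation then return nil end
  if sensation.intensity > agent.cognition then
    return { what = "footsteps", x = sensation.x, y = sensation.y }      -- identified
  end
  return { what = "unknown noise", x = sensation.x, y = sensation.y }    -- raw data to UI/AI
end

local guard = { hearing_threshold = 0.3, cognition = 0.5 }
local info = evaluate(guard, hear(guard, emission))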

On the most mundane level, this is trivially implemented with singletons. Brogue's evaluation mode is total- anything you can detect you understand, but your Sensory Systems are limited to vision, clairvoyance, and telepathy. Your emission to the enemy is in the form of a heat map while their sensory system is described by their scent attribute. Stealth and dark places reduce your emission of heat (not formally heat, but an abstraction that describes your overall emission). You have perfect vision, apart from LOD in the form of light- so anything in your LOS/LOD you have perfect information about.

I imagine most any roguelike could be specified using this abstraction, in a manner that is easy to implement, easy for the AI to use, and easy to communicate to the player.

119
Programming / Re: info files
« on: April 23, 2013, 03:52:13 AM »
Lua data files are just table definitions, which are super easy to understand and require no parser.
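For example, a made-up monster definition file:
Code: [Select]
-- monsters.lua: the whole data file is just a table definition.
return {
  goblin = { glyph = "g", hp = 6,  attack = 2, speed = 10 },
  troll  = { glyph = "T", hp = 30, attack = 7, speed = 5  },
}
Then local monsters = dofile("monsters.lua") loads it-- the language itself is the parser.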

120
Programming / Re: Another Crash and Burn (But I'm cool with it!)
« on: April 22, 2013, 04:35:48 AM »
Nice. Thanks for the link.

Krice was saying something similar with Kaduria. It's hard to finish something if you don't know what the end is. Like trying to travel to the ends of the earth. Where is that exactly? Does a spheroid actually end?

It begins at index 0 and ends at length-1. Well, that's if salient points (in this case spline anchors) are enumerated. Otherwise, it's the parameter of the expression that defines the spheroid, beginning at 0 and ending at 1.
