Author Topic: Playing around with a new kind of program architecture: "component system"  (Read 38075 times)

mike3

Hi.

I just finished programming an attempt at a “component-based” roguelike system, for practice. It was done over the last 4 days. You can get it here:

http://www.mediafire.com/?61s2vlh9dqiz73v

The code is rough and isn't meant for production work, just for practice, and I've got some questions because there were a number of things I wasn't quite sure how to do “right”, especially with the new style of architecture I was trying here. It's a simple “curses”-based terminal program and the compile script is for Linux and similar systems, since I use Linux for most of my computing and programming work. It shouldn't be too hard to modify it to compile on Windows as a console app, provided you can supply the curses functionality with a suitable library (I haven't tried it on Windows, though, or with MS's compilers).

The architecture also uses an “event-driven system” for the main game loop.

Sorry about the restrictive license, but that's because it's rough, unfinished, and for practice and not really serious work. I hope you understand.

Now, the questions:

There is a “renderWorld” routine in the WorldManager, which doesn't seem right – this is related to rendering... where should it be? It needs the data from the world to render it (entities, map, etc.). I also heard at http://www.gamedev.net/topic/618973-creating-a-render-manager/#entry4905254 that a graphics system should only take in basic graphical objects and not more sophisticated objects like entities or “models”, etc. (the linked post says: “Likewise, a model has no place within the graphics module. (...) a general theme is that the graphics module simply handles the commands needed to render things, and provides wrappers for textures, vertex buffers (...)”). The RenderManager works in just that way: the primitive objects it receives (RenderObjects) store only simple graphical representations (here, grids of characters). Yet something has to get that lower-level data (in my case, an ASCII representation; in that post's case, triangles/textures/etc.) from the higher-level entity/map objects and feed it into the render/graphics system: what should it be?

There is a reference to the input manager in the player's “AI component”, which is needed because, for the player entity to make decisions when it's its turn to act, it needs a command from the user. But it doesn't seem right that it should “know about” the input system, does it? If not, then does that mean we should add more code to the event handler to check if the acting entity is the player entity, capture the input there, and then pass it on to the entity, thereby avoiding the reference to the input manager but adding more complexity/specialization in the event handling system? But if that's bad too, where should we put the calls to the input manager (remember: we don't want input until the player's “turn to act” comes around, i.e. when the EVT_ENTITY_ACTION event with the actingEntity variable equaling the player entity is due)? Already we have a problem in that the “quit” command isn't really something the entity does, but a signal to the game to stop!

Where should the handler for game events go? Not all events involve an entity acting (EVT_QUIT is an example – there'd be others in a real game). So putting it on the entities doesn't seem to work – or should one have handlers for those kinds of events on the entities, and handlers for the others in the game? But would putting it on the entities violate the component system? If so, what should we do? Have something to translate events to messages?

Also, how would one write the part of the program that handles things like collisions/interactions between entities, with tiles on the map, etc.? (Right now there is no collision handling; I only have an “ad hoc” stop in there (clearly marked!) to keep the player from running off the map altogether. Obviously that is a big no-no in a production program – it's just a placeholder until I can work out how to handle this.) It doesn't seem right to give the physics component of an entity full access to the map and all the entities on it (i.e. essentially the whole WorldManager), does it? If it isn't right, then what should I do? (That “ad hoc”-marked code is really just a placeholder because I'm not sure how to handle this dependency problem.)

And in a bigger game, where you can pick stuff up, what's the best way to distinguish a “pickupable” entity from a non-pickupable one?

I'd also like to hear a general review/comment/critique/etc. of the code – any pointers for improvement would be welcome.

Nymphaea

Far from an expert, and I haven't looked at the code, but I've been looking into this myself :P

For the rendering, what I've seen other games do, and what I plan to do, is to have a "SpriteBatch" class built into the engine, not a component; each component that needs to draw something is passed the current instance of SpriteBatch each tick. Then just have a "SpriteBatch.render()" command to actually draw the changes after everything has had its chance. For a curses-based game, you could keep an array of characters and an array of foreground/background colours in the SpriteBatch for simplicity.

So in your case, your components would have a "render(SpriteBatch sb)" method, which would be run at the end of each tick by the engine.
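Roughly what I have in mind, as an untested sketch (the class and method names are made up, and it returns the frame as a string instead of making the curses calls, so the idea stays self-contained):

```cpp
#include <string>
#include <vector>

// Illustrative sketch of the SpriteBatch idea -- all names are made up.
class SpriteBatch {
public:
    SpriteBatch(int w, int h) : width(w), height(h), glyphs(w * h, ' ') {}

    // Components call this during their render pass.
    void set(int x, int y, char glyph) {
        if (x >= 0 && x < width && y >= 0 && y < height)
            glyphs[y * width + x] = glyph;
    }

    // Called once by the engine after every component has had its chance.
    // Here it returns the frame as text; a real version would make the
    // curses calls (mvaddch/refresh) instead.
    std::string render() const {
        std::string out;
        for (int y = 0; y < height; ++y) {
            out.append(glyphs.begin() + y * width,
                       glyphs.begin() + (y + 1) * width);
            out.push_back('\n');
        }
        return out;
    }

private:
    int width, height;
    std::vector<char> glyphs;
};
```

A colour version would just carry a second buffer of colour pairs alongside the characters.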

For the AI, just have the player AI do nothing, and the game pause while waiting for input. Then just manually tell the AI "move(DIR_LEFT)" or whatever when the left key is pressed.

For events, you could just have a global event handler that uses IDs to tell what each message is for. Then entities call "EventHandler.getEvents(my_id)" to get a list of all events referencing their ID. -1 could be for game-specific events (quit) and 0+ can be entity events.
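Something like this is what I mean by an id-addressed handler (a rough sketch; the names and the exact Event fields are invented):

```cpp
#include <vector>

// Sketch of an id-addressed event queue. Convention from the post:
// id -1 is for game-level events (quit), ids 0 and up address entities.
const int GAME_ID = -1;

struct Event {
    int targetId;  // who the event is for
    int type;      // e.g. EVT_QUIT, EVT_MOVE, ...
};

class EventHandler {
public:
    void post(int targetId, int type) { queue.push_back({targetId, type}); }

    // An entity (or the game loop, with GAME_ID) pulls the events
    // addressed to it; matching events are removed from the queue.
    std::vector<Event> getEvents(int myId) {
        std::vector<Event> mine, rest;
        for (const Event& e : queue)
            (e.targetId == myId ? mine : rest).push_back(e);
        queue = rest;
        return mine;
    }

private:
    std::vector<Event> queue;
};
```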

Collisions... Well, you could use events for this too: if the entity wants to move, it gives the event to the map; the map checks whether the move is possible and, if it is, sends an event back saying the movement was successful, with the updated coordinates.
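The move-request handshake could look like this (again just a sketch, all names invented; the point is that the map answers, instead of the entity touching map data directly):

```cpp
#include <vector>

// Sketch of the "ask the map" movement handshake.
struct MoveRequest { int entityId, toX, toY; };
struct MoveResult  { bool ok; int x, y; };

class Map {
public:
    Map(int w, int h) : width(w), height(h), walls(w * h, false) {}
    void setWall(int x, int y) { walls[y * width + x] = true; }

    // The map, not the entity, decides whether the move happens.
    MoveResult handle(const MoveRequest& req) const {
        bool inside = req.toX >= 0 && req.toX < width &&
                      req.toY >= 0 && req.toY < height;
        if (inside && !walls[req.toY * width + req.toX])
            return {true, req.toX, req.toY};
        return {false, -1, -1};  // blocked: entity keeps its position
    }

private:
    int width, height;
    std::vector<bool> walls;
};
```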

requerent

Quote
Yet something has to get that lower-level data (in this case, ASCII representation, in that case, triangles/textures/etc.) from the higher-level entity/map objects and feed it into the render/graphics system: what should it be?


An asset library is just a data structure of references to models/textures/sounds that are available to use. The keys to access that data are stored on the graphics components of your entities. Nymphaea suggests a spritebatch, which is usually just an asset library all stored in a single asset image. Your GraphicsComponent will store the key of the asset and any additional information needed to render (location, facing, textures, animation sets, etc.).
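A bare-bones version of that split, scaled down to ASCII (names like AssetLibrary and Glyph are purely illustrative):

```cpp
#include <map>
#include <string>

// The library owns the actual "art"; components store only keys.
struct Glyph { char ch; int color; };

class AssetLibrary {
public:
    void add(const std::string& key, Glyph g) { assets[key] = g; }
    const Glyph& get(const std::string& key) const { return assets.at(key); }
private:
    std::map<std::string, Glyph> assets;
};

// The component holds the key plus per-entity render state, never the asset.
struct GraphicsComponent {
    std::string assetKey;  // e.g. "orc"
    int x, y;              // where to draw it
};
```

Two orcs share one Glyph in the library; each entity's component carries only the key plus its own position.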

Your RenderWorld routine is technically in an acceptable place. Traditional game loops just iterate through entities calling entity.update() to determine the logic of the game and entity.render() to display it.

However, this may work against the design of a component system. You should try breaking all of your logic up into systems that work only with relevant components. Your engine is then just something that manages which systems are currently active.

Consider,
Code:
if(it->second->hasComponent("graphic"))
You might create a system that stores references to all relevant components. A RenderSystem, for example, may store a list of GraphicsComponents- and just iterate through those, as it doesn't need to know anything about the rest of the entity. In this way, RenderSystem can either be instanced inside of your WorldManager (if your WorldManager handles the game loop), or be instanced in the game loop directly (or you can even thread it so that rendering is done asynchronously).
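As a sketch (names invented; it draws into a string rather than curses to stay self-contained):

```cpp
#include <string>
#include <vector>

// A RenderSystem keeps references to graphics components only, and
// knows nothing else about the entities they belong to.
struct GraphicsComponent { char glyph; int x, y; };

class RenderSystem {
public:
    // Whatever spawns entities registers their graphics components here.
    void addComponent(const GraphicsComponent* c) { components.push_back(c); }

    // Iterate the registered components and draw each into a text frame
    // (a stand-in for the real curses calls).
    std::string render(int w, int h) const {
        std::string frame(h * (w + 1), ' ');
        for (int y = 0; y < h; ++y) frame[y * (w + 1) + w] = '\n';
        for (const GraphicsComponent* c : components)
            if (c->x >= 0 && c->x < w && c->y >= 0 && c->y < h)
                frame[c->y * (w + 1) + c->x] = c->glyph;
        return frame;
    }

private:
    std::vector<const GraphicsComponent*> components;
};
```

Registration would happen when an entity with a graphics component is created, and a matching removal when it's destroyed.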

Quote
There is a reference to the input manager in the player's “AI component”, which is needed because, for the player entity to make decisions when it's its turn to act, it needs a command from the user. But it doesn't seem right that it should “know about” the input system, does it?

This is where some troublesome entanglements can happen. Suppose the player is accessing UI menus that have nothing or little to do with game logic- how should that work? Should those UI commands also be routed through the player's entity instance? Certainly not, but that doesn't mean that we can't have a reference in the component. Components are just a way to keep your game logic discrete. The player component having access to the input manager doesn't really violate this in any way. Waiting for input is part of the player's game logic, but there should be an input handler on top of that. For example,

Quote
(remember: we don't want input until the player's “turn to act” comes around, i.e. when the EVT_ENTITY_ACTION event with the actingEntity variable equaling the player entity is due)?

What if we have real-time graphical effects? Should the player be prevented from accessing the menu or quitting the game? Probably not. Look into keybinding libraries. Instead of a 'waitForKeyPress' you'll have a 'waitForAction', or something. Where the input handler will send an actionID to the player's entity for actuation.
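For instance, a tiny keybinding table (the names and action ids here are invented):

```cpp
#include <map>

// Raw keys map to abstract action ids; the input handler hands the
// player's entity an action, not a keypress.
enum ActionId { ACT_NONE, ACT_MOVE_LEFT, ACT_MOVE_RIGHT, ACT_QUIT };

class KeyBindings {
public:
    void bind(int key, ActionId action) { bindings[key] = action; }

    // Called by the input handler for each keypress; unbound keys
    // simply produce no action.
    ActionId actionFor(int key) const {
        auto it = bindings.find(key);
        return it == bindings.end() ? ACT_NONE : it->second;
    }

private:
    std::map<int, ActionId> bindings;
};
```

The handler translates the raw key and routes the resulting actionID wherever the current focus says - to the player entity, a menu, or the game loop, so ACT_QUIT never has to pretend to be something the entity does.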


Quote
Where should the handler for game events go? Not all events (EVT_QUIT is an example – there'd be others in a real game) involve an entity acting. So putting it on the entities doesn't seem to work – or should one have handlers for those kind of events on the entities, and handlers for the other in the game? But would putting it on the entities violate the component system? If so, what should we do? Have something to translate events to messages?

I think that component systems and event systems don't mesh very well. The whole point of components is to isolate where the components interact with one another. You shouldn't need messages, events, or handlers-- it should all just happen in the systems relevant to those components. If your systems are arranged in a meaningful Finite State Machine, you can pass information between systems by updating components. This doesn't mean you can't use them together-- just that when you mix them, it can be hard to tell what should go where.

Technically, EVT_QUIT could be handled by any entity- the most trivial example could be by freeing memory. Regardless, if you're going down the event path, you may want to create, at the very least, an event handling abstract base class for every object in the game to use. You can override what you need for any given object, and forget the rest. You can also create a hierarchy of abstractions to more reasonably organize handlers. You don't want to start defining classes for different entity types, just for different game objects to handle different events. This will allow you to handle events and use components in a happier way.

Quote
Also, how would one make the part of the program that handles stuff like collisions/interactions between entities

All you have to do is create a collision event when a move takes place and something already occupies the space. You then call the respective onCollide handlers for those entities and determine what should happen. Should we attack? Pick up an object? Or bump senselessly into a wall? Perhaps we don't want the player to lose their turn if they bump into a wall-- etc. Entity doesn't need any information about the map-- it just acts as a way to store data for the player's avatar and for the player to interact with the game model.
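Sketched out (illustrative names; the reaction is stored as a function on the entity rather than as a virtual method, so no Entity subclasses are needed):

```cpp
#include <functional>
#include <string>

struct Entity;

// Built by whatever system detects that one entity stepped onto another.
struct CollisionEvent { Entity* mover; Entity* blocker; };

struct Entity {
    std::string name;
    // Per-entity reaction assigned at creation time, not subclassed in.
    std::function<std::string(const CollisionEvent&)> onCollide;
};

// Dispatch: ask the blocking entity what happens.
std::string resolveCollision(Entity& mover, Entity& blocker) {
    CollisionEvent ev{&mover, &blocker};
    if (blocker.onCollide) return blocker.onCollide(ev);
    return "nothing happens";
}
```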

Quote
And in a bigger game, where you can pick stuff up, what's the best way to distinguish a “pickupable” entity from a non-pickupable one?

EdibleItemComponent, EquippableItemComponent, PickupItemComponent-- etc. An inventory management system should figure out how an item can be used/gathered based upon the information in its components. You could have just an 'ItemComponent' with all of those details.
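For example (a toy sketch with string-keyed components, mirroring the hasComponent style in your code):

```cpp
#include <map>
#include <string>

// "Pickupable" is just another component on the entity.
class Entity {
public:
    void addComponent(const std::string& type) { components[type] = true; }
    bool hasComponent(const std::string& type) const {
        return components.count(type) != 0;
    }
private:
    std::map<std::string, bool> components;
};

// The inventory system's test is a component lookup, not a type check.
bool canPickUp(const Entity& e) { return e.hasComponent("pickup"); }
```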

mike3

Quote
Yet something has to get that lower-level data (in my case, an ASCII representation; in that post's case, triangles/textures/etc.) from the higher-level entity/map objects and feed it into the render/graphics system: what should it be?


An asset library is just a data structure of references to models/textures/sounds that are available to use. The keys to access that data are stored on the graphics components of your entities. Nymphaea suggests a spritebatch, which is usually just an asset library all stored in a single asset image. Your GraphicsComponent will store the key of the asset and any additional information needed to render (location, facing, textures, animation sets, etc.).

So then you'd suggest to store keys, not the actual graphics, in the GraphicsComponent? (Right now it stores an actual graphic (here just an ASCII character) + position to render it at.) What would have access to this graphics library? Would the RenderSystem (mentioned below) have access to it?

Your RenderWorld routine is technically in an acceptable place. Traditional game loops just iterate through entities calling entity.update() to determine the logic of the game and entity.render() to display it.

However, this may work against the design of a component system. You should try breaking all of your logic up into systems that work only with relevant components. Your engine is then just something that manages which systems are currently active.

Consider,
Code:
if(it->second->hasComponent("graphic"))
You might create a system that stores references to all relevant components. A RenderSystem, for example, may store a list of GraphicsComponents- and just iterate through those, as it doesn't need to know anything about the rest of the entity. In this way, RenderSystem can either be instanced inside of your WorldManager (if your WorldManager handles the game loop), or be instanced in the game loop directly (or you can even thread it so that rendering is done asynchronously).

So then this "RenderSystem" is something different from what the person in the post I linked to was talking about, which could only be passed low-level graphical data (lower-level than "models", though I wasn't sure what that referred to -- if it meant meshes, that would seem to be individual triangles or something), which in this case would seem to be just ASCII characters, or sprites for tile graphics?

But then something has to get the graphics components off the entity to feed them into the RenderSystem. Which means there'd need to be a routine, perhaps on the RenderSystem itself, that "registers" an entity with it by putting its GraphicsComponents on the list.

Does this "RenderSystem" only work with GraphicsComponents -- as the map, for example, is not an entity with a GraphicsComponent, yet it still has to be rendered?

Also, if a new entity is created or an old one destroyed (monster killed, etc.) in the game world -- which is in the WorldManager -- then the RenderSystem needs to be updated to reflect that. Would WorldManager's "renderWorld" function feed entities into the RenderSystem, just as it feeds their graphics components to RenderManager now, but no longer having to know about the components within the entity?

Quote
There is a reference to the input manager in the player's “AI component”, which is needed because, for the player entity to make decisions when it's its turn to act, it needs a command from the user. But it doesn't seem right that it should “know about” the input system, does it?

This is where some troublesome entanglements can happen. Suppose the player is accessing UI menus that have nothing or little to do with game logic- how should that work? Should those UI commands also be routed through the player's entity instance? Certainly not, but that doesn't mean that we can't have a reference in the component. Components are just a way to keep your game logic discrete. The player component having access to the input manager doesn't really violate this in any way. Waiting for input is part of the player's game logic, but there should be an input handler on top of that. For example,

Quote
(remember: we don't want input until the player's “turn to act” comes around, i.e. when the EVT_ENTITY_ACTION event with the actingEntity variable equaling the player entity is due)?

What if we have real-time graphical effects? Should the player be prevented from accessing the menu or quitting the game? Probably not. Look into keybinding libraries. Instead of a 'waitForKeyPress' you'll have a 'waitForAction', or something. Where the input handler will send an actionID to the player's entity for actuation.

Yes, this is a simple program that doesn't have such things. In that case, though, there are essentially two different kinds of "time" running -- there's "real time", which drives the running graphics, and then there's "game time", which is the time elapsing in the game world (and the events represent things happening in the game world). Real time elapses continuously (well, at the frame rate of the graphics being drawn), while "game time" does not (it elapses the way the "gameTime" variable in the current program does). So it'd make sense to separate those two somehow.
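I.e. something like keeping two counters advanced by different parts of the loop (just a sketch of the separation I mean; the names are made up):

```cpp
// The render clock advances every frame; game time only advances
// when an entity actually acts (by that action's time cost).
struct Clocks {
    long frames = 0;    // "real time", measured in rendered frames
    long gameTime = 0;  // "game time", measured in turns/ticks

    void onFrame() { ++frames; }                   // once per render pass
    void onAction(int cost) { gameTime += cost; }  // once per game action
};
```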

The concern about UI menus is something I was wondering about, too, since in a bigger game I'd have menus. Would their rendering be handled by the same RenderSystem as everything else, or what? (which would mean the RenderSystem would have to accept more than just GraphicsComponents -- unless each UI window has a GraphicsComponent in it, but of a different type from that used in entities)

Quote
Where should the handler for game events go? Not all events (EVT_QUIT is an example – there'd be others in a real game) involve an entity acting. So putting it on the entities doesn't seem to work – or should one have handlers for those kind of events on the entities, and handlers for the other in the game? But would putting it on the entities violate the component system? If so, what should we do? Have something to translate events to messages?

I think that component systems and event systems don't mesh very well. The whole point of components is to isolate where the components interact with one another. You shouldn't need messages, events, or handlers-- it should all just happen in the systems relevant to those components. If your systems are arranged in a meaningful Finite State Machine, you can pass information between systems by updating components. This doesn't mean you can't use them together-- just that when you mix them, it can be hard to tell what should go where.

Technically, EVT_QUIT could be handled by any entity- the most trivial example could be by freeing memory. Regardless, if you're going down the event path, you may want to create, at the very least, an event handling abstract base class for every object in the game to use. You can override what you need for any given object, and forget the rest. You can also create a hierarchy of abstractions to more reasonably organize handlers. You don't want to start defining classes for different entity types, just for different game objects to handle different events. This will allow you to handle events and use components in a happier way.

So what would better mesh with events? Also, the entity would free its memory when its destructor is called. When quit is fired, it means the game is shutting down, so this has to lead to an exit of the game loop.

What would better mesh with components, insofar as getting "things that happen in the game world" to go?

Quote
Also, how would one make the part of the program that handles stuff like collisions/interactions between entities

All you have to do is create a collision event when a move takes place and something already occupies the space. You then call the respective onCollide handlers for those entities and determine what should happen. Should we attack? Pick up an object? Or bump senselessly into a wall? Perhaps we don't want the player to lose their turn if they bump into a wall-- etc. Entity doesn't need any information about the map-- it just acts as a way to store data for the player's avatar and for the player to interact with the game model.

Where is said onCollide handler? If it's in the entity object, then to make it do different things for different kinds of entity we need to make subclasses of Entity, which defeats the whole idea of the component system! In which case we might as well just scrap it and go for a traditional inheritance-based entity system.

And what spawns the collision event? It has to know about the movement the entity is going to make (which is determined by the AI) and the positions of all entities on the map in order to tell that a collision occurs.

And how would one do that wall-bump thing, anyway, when in the current setup the loop cycles through once the action event is handled, and another one is scheduled, which marks another turn?

Quote
And in a bigger game, where you can pick stuff up, what's the best way to distinguish a “pickupable” entity from a non-pickupable one?

EdibleItemComponent, EquippableItemComponent, PickupItemComponent-- etc. An inventory management system should figure out how an item can be used/gathered based upon the information in its components. You could have just an 'ItemComponent' with all of those details.

This makes sense.
« Last Edit: April 23, 2013, 09:56:14 PM by mike3 »

requerent

Quote
Yet something has to get that lower-level data (in my case, an ASCII representation; in that post's case, triangles/textures/etc.) from the higher-level entity/map objects and feed it into the render/graphics system: what should it be?


An asset library is just a data structure of references to models/textures/sounds that are available to use. The keys to access that data are stored on the graphics components of your entities. Nymphaea suggests a spritebatch, which is usually just an asset library all stored in a single asset image. Your GraphicsComponent will store the key of the asset and any additional information needed to render (location, facing, textures, animation sets, etc.).

So then you'd suggest to store keys, not the actual graphics, in the GraphicsComponent? (Right now it stores an actual graphic (here just an ASCII character) + position to render it at.) What would have access to this graphics library? Would the RenderSystem (mentioned below) have access to it?

Your RenderWorld routine is technically in an acceptable place. Traditional game loops just iterate through entities calling entity.update() to determine the logic of the game and entity.render() to display it.

However, this may work against the design of a component system. You should try breaking all of your logic up into systems that work only with relevant components. Your engine is then just something that manages which systems are currently active.

Consider,
Code:
if(it->second->hasComponent("graphic"))
You might create a system that stores references to all relevant components. A RenderSystem, for example, may store a list of GraphicsComponents- and just iterate through those, as it doesn't need to know anything about the rest of the entity. In this way, RenderSystem can either be instanced inside of your WorldManager (if your WorldManager handles the game loop), or be instanced in the game loop directly (or you can even thread it so that rendering is done asynchronously).

So then this "RenderSystem" is something different from what the person in the post I linked to was talking about, which could only be passed low-level graphical data (lower-level than "models", though I wasn't sure what that referred to -- if it meant meshes, that would seem to be individual triangles or something), which in this case would seem to be just ASCII characters, or sprites for tile graphics?

But then something has to get the graphics components off the entity to feed them into the RenderSystem. Which means there'd need to be a routine, perhaps on the RenderSystem itself, that "registers" an entity with it by putting its GraphicsComponents on the list.


Nope, it isn't different.

In 3D games, the list of triangles for a mesh will be stored in specialized data structures (really just a list of vertices, normals, and UVs). We don't need to 'copy' this information to an object before rendering it. What if two entities had the same mesh? Why would you want to store multiple copies of the same mesh in memory? We can just store the reference to that mesh on an object (or really, just the key/index of that mesh in the asset list). It isn't quite that simple though- we also want to store transformation information, material data, and other things pertinent to that particular object. It's just that in the case of the actual asset, we never store it on an object. We store transformation information in the component and the mesh in the library.


  • AssetLibrary stores Data
  • GraphicsComponent stores references to AssetLibrary and other transformation information
  • RenderSystem stores a list of references to GraphicsComponents, iterates through and draws them to the screen accordingly.

It's awkward in a roguelike where there is only a single character to store, but you may want to do it anyway for creating consistency among different racial types- then you can localize color definitions within the graphics component or create another entry into your AssetLibrary for various colors/characters to be used in consistent ways.



Quote
Quote
There is a reference to the input manager in the player's “AI component”, which is needed because, for the player entity to make decisions when it's its turn to act, it needs a command from the user. But it doesn't seem right that it should “know about” the input system, does it?

This is where some troublesome entanglements can happen. Suppose the player is accessing UI menus that have nothing or little to do with game logic- how should that work? Should those UI commands also be routed through the player's entity instance? Certainly not, but that doesn't mean that we can't have a reference in the component. Components are just a way to keep your game logic discrete. The player component having access to the input manager doesn't really violate this in any way. Waiting for input is part of the player's game logic, but there should be an input handler on top of that. For example,

Quote
(remember: we don't want input until the player's “turn to act” comes around, i.e. when the EVT_ENTITY_ACTION event with the actingEntity variable equaling the player entity is due)?

What if we have real-time graphical effects? Should the player be prevented from accessing the menu or quitting the game? Probably not. Look into keybinding libraries. Instead of a 'waitForKeyPress' you'll have a 'waitForAction', or something. Where the input handler will send an actionID to the player's entity for actuation.

Yes, this is a simple program that doesn't have such things. In that case, though, there's then essentially two different types of "time" running -- there's "real time" which determines the running graphics, and then there's "game time", which determines the time lapsing in the game world (and the events represent things happening in the game world). Real time elapses continuously (well, at the frame rate of the graphics being drawn), while "game time" elapses in a turn-wise manner. So it'd make sense to separate those two somehow.

The concern about UI menus is something I was wondering about, too, since in a bigger game I'd have menus.

You may also want to take advantage of idle processing while the player is waiting to input data. For example-- In complex AI calculations we might be able to improve the experience of the game if we're always crunching AI algorithms even while waiting for the player. Don't get carried away though- just do what makes sense.

An inputManager on top of the game will use keybindings and 'screen focus' to determine where to send the input.


Quote
Quote
Where should the handler for game events go? Not all events (EVT_QUIT is an example – there'd be others in a real game) involve an entity acting. So putting it on the entities doesn't seem to work – or should one have handlers for those kind of events on the entities, and handlers for the other in the game? But would putting it on the entities violate the component system? If so, what should we do? Have something to translate events to messages?

I think that component systems and event systems don't mesh very well. The whole point of components is to isolate where the components interact with one another. You shouldn't need messages, events, or handlers-- it should all just happen in the systems relevant to those components. If your systems are arranged in a meaningful Finite State Machine, you can pass information between systems by updating components. This doesn't mean you can't use them together-- just that when you mix them, it can be hard to tell what should go where.

Technically, EVT_QUIT could be handled by any entity- the most trivial example could be by freeing memory. Regardless, if you're going down the event path, you may want to create, at the very least, an event handling abstract base class for every object in the game to use. You can override what you need for any given object, and forget the rest. You can also create a hierarchy of abstractions to more reasonably organize handlers. You don't want to start defining classes for different entity types, just for different game objects to handle different events. This will allow you to handle events and use components in a happier way.

So what would better mesh with events? Also, the entity would free its memory when its destructor is called. When quit is fired, it means the game is shutting down, so this has to lead to an exit of the game loop.

What would better mesh with components, insofar as getting "things that happen in the game world" to go?

An event can be handled by more than one object. Or is my nomenclature off >_>. Maybe not "handled by"- but "reacted to"? Whatever the case, when an event occurs, the matching onThatTypeOfEvent method should be called on every relevant object.

We may want to do something special on a particular entity for an onQuit event. What that is, who knows- but we should be able to do it if we want to.


Well- component systems do their own work themselves. The idea is that you store relevant data in components, and a specific system updates the components relative to one another. You don't need events, messaging, or any of that stuff- the systems immediately handle whatever messages/events would otherwise take place. In the cases they don't cover, we just move on to another system.

For example-- our ControllerSystem polls the controller of an entity to figure out what heuristic should govern the next action. Then we might switch to a HeuristicAnalysisSystem that runs the heuristic algorithm to determine what action should take place.
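In toy form (everything here- the component names, the drift-right "heuristic"- is invented just to show the data flow between systems):

```cpp
#include <vector>

// One system writes an intended move into each entity's components,
// another applies it; no events pass between them -- they just read
// and write component data.
struct PositionComponent { int x, y; };
struct IntentComponent   { int dx, dy; };

struct Entity {
    PositionComponent pos;
    IntentComponent intent;
};

// Decides what each entity wants to do (here: everything drifts right).
void controllerSystem(std::vector<Entity>& es) {
    for (Entity& e : es) e.intent = {1, 0};
}

// Applies intents; a fuller version would consult the map first.
void movementSystem(std::vector<Entity>& es) {
    for (Entity& e : es) {
        e.pos.x += e.intent.dx;
        e.pos.y += e.intent.dy;
        e.intent = {0, 0};  // intent consumed
    }
}
```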

This dude does a good job of breaking it down- I stumbled upon it recently. He has numerous articles on his blog that describe how it works, though I don't really like the way he did it (I flamed him for his implementation of state management >_<). I'd read both of the posts below for some insight into the what, how, and why of components. They're well-written and very clear.

http://www.richardlord.net/blog/what-is-an-entity-framework
http://www.richardlord.net/blog/why-use-an-entity-framework

Note that this isn't a 'pure' component system, just one that does a pretty good job of hybridizing classes and components in a way that is natural for OOP style.

Quote
Quote
Also, how would one make the part of the program that handles stuff like collisions/interactions between entities

All you have to do is create a collision event when a move takes place and something already occupies the space. You then call the respective onCollide handlers for those entities and determine what should happen. Should we attack? Pick up an object? Or bump senselessly into a wall? Perhaps we don't want the player to lose their turn if they bump into a wall-- etc. Entity doesn't need any information about the map-- it just acts as a way to store data for the player's avatar and for the player to interact with the game model.

Where is said onCollide handler? If it's in the entity object, then to make it do different things for different kinds of entity we need to make subclasses of Entity, which defeats the whole idea of the component system! In which case we might as well just scrap it and go for a traditional inheritance-based entity system.

And what spawns the collision event? It has to know about the move the entity is going to make (which is determined by the AI), the positions of all entities on the map, and whether a collision actually occurs.

And how would one do that wall-bump thing, anyway, when in the current setup the loop cycles through once the action event is handled, and another one is scheduled, which marks another turn?

That's how event systems work. I mean, I've never programmed one, but I've used many event-driven APIs, and that's how they're structured.

Event Object is produced and added to a queue. Event System then takes an event and evaluates it by calling whatever respective methods for that event exist on relevant entities (which may include more entities than those colliding).

You can handle collision either in a component way or an event way- or possibly both. The event way is to make your collisionEvent and then call onCollide variants in the entities.
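The produce-queue-dispatch flow described above might look something like this C++ sketch (all names invented; a real version would carry more event data):

```cpp
#include <queue>
#include <string>
#include <vector>

struct CollisionEvent { int mover, blocker; };  // indices into the entity list

struct Entity {
    std::string name;
    std::vector<std::string> log;  // records what happened to this entity
    void onCollide(const CollisionEvent&) { log.push_back("collided"); }
};

class EventSystem {
    std::queue<CollisionEvent> pending;
public:
    void push(CollisionEvent e) { pending.push(e); }
    // Drain the queue, calling onCollide on every entity involved.
    void dispatch(std::vector<Entity>& entities) {
        while (!pending.empty()) {
            CollisionEvent e = pending.front();
            pending.pop();
            entities[e.mover].onCollide(e);
            entities[e.blocker].onCollide(e);
        }
    }
};
```

The movement code only has to push a CollisionEvent when a move lands on an occupied tile; the EventSystem decides who gets told about it.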


Did you get the idea to use event systems on 3d gaming forums? We like event systems for real-time game engines because they allow us to hide a lot of really complicated shit but still code custom behaviors into our entities directly and expect it to work. If YOU are making the engine, event systems are, IMO, a PITA.

So... In a component system, game logic takes place in a system. In an event system, your events are handled in systems but the logic is relative to the implementation of an entity.

mike3

  • Rogueliker
  • ***
  • Posts: 125
  • Karma: +0/-0
    • View Profile
    • Email
Nope, it isn't different.

In 3D games, the list of triangles for a mesh will be stored in specialized data structures (really just a list of vertices, normals, and UVs). We don't need to 'copy' this information to an object before rendering it. What if two entities had the same mesh? Why would you want to store multiple copies of the same mesh in memory? We can just store the reference to that mesh on an object (or really, just the key/index of that mesh in the asset list). It isn't quite that simple though- we also want to store transformation information, material data, and other things pertinent to that particular object. It's just that in the case of the actual asset, we never store it on an object. We store transformation information in the component and the mesh in the library.

However, then that means the render system deals with whole assets and not just graphical primitives, right? But how does that jibe with what the person on that gamedev.net site was saying?

  • AssetLibrary stores Data
  • GraphicsComponent stores references to AssetLibrary and other transformation information
  • RenderSystem stores a list of references to GraphicsComponents, iterates through and draws them to the screen accordingly.

It's awkward in a roguelike where there is only a single character to store, but you may want to do it anyway for creating consistency among different racial types- then you can localize color definitions within the graphics component or create another entry into your AssetLibrary for various colors/characters to be used in consistent ways.
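The three-layer split above could be sketched like this in C++ (hypothetical names; for a roguelike the "asset" is just a glyph, and entities share glyphs through a library key instead of each storing a copy):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

struct Glyph { char ch; int colorPair; };

// AssetLibrary stores the shared data, keyed by name.
class AssetLibrary {
    std::unordered_map<std::string, Glyph> glyphs;
public:
    void add(const std::string& key, Glyph g) { glyphs[key] = g; }
    const Glyph& get(const std::string& key) const { return glyphs.at(key); }
};

// GraphicsComponent stores per-entity data: a key into the library
// plus this entity's own transformation (here just a position).
struct GraphicsComponent { std::string assetKey; int x, y; };

// RenderSystem iterates the components and draws them. Instead of
// curses, "draw" into a row of chars so the sketch stays testable.
class RenderSystem {
public:
    void render(const AssetLibrary& lib,
                const std::vector<GraphicsComponent>& comps,
                std::vector<char>& row) {
        for (const auto& c : comps)
            row[c.x] = lib.get(c.assetKey).ch;
    }
};
```

Two orcs on screen cost one Glyph in the library plus two small components, which is the point of keeping the asset out of the entity.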

And, I suppose, the RenderSystem also stores a copy of a pointer to the AssetLibrary so it can access it, right?


You may also want to take advantage of idle processing while the player is waiting to input data. For example-- In complex AI calculations we might be able to improve the experience of the game if we're always crunching AI algorithms even while waiting for the player. Don't get carried away though- just do what makes sense.

An inputManager on top of the game will use keybindings and 'screen focus' to determine where to send the input.

Doesn't the input manager then also have to know about all possible places it can send the input? Like it has to know about the pile of entities, so it can send input there. It needs to know about the windows on the display, so it can send input there. But isn't that making too many dependencies?

An event can be handled by more than one object. Or is my nomenclature off >_>. Maybe not handled by- but reacted to? Whatever the case, when an event occurs, the matching onThatTypeOfEvent method should be called on every relevant object.

We may want to do something special on a particular entity for an onQuit event. What that is, who knows- but we should be able to do it if we want to.


Well- component systems do their own work themselves. The idea is that you store relevant data in components, and specific systems update the components relative to one another. You don't need events, messaging, or any of that stuff- the systems immediately handle whatever messages/events would have taken place. In the case that they don't, we just move on to another system.

For example-- our ControllerSystem polls the controller of an entity to figure out what heuristic should govern the next action. Then we might switch to a HeuristicAnalysisSystem that runs the heuristic algorithm to determine what action should take place.

This dude does a good job of breaking it down- I stumbled upon it recently. He has numerous articles on his blog that describe how it works, though I don't really like the way he did it (I flamed him for his implementation of state management >_<). I'd read them both for some insight on the what, how, and why of components. It's well-written and very clear.

http://www.richardlord.net/blog/what-is-an-entity-framework
http://www.richardlord.net/blog/why-use-an-entity-framework

Note that this isn't a 'pure' component system, just one that does a pretty good job of hybridizing classes and components in a way that is natural for OOP style.

Thanks for the links. I'll have a look at it.

That's how event systems work. I mean, I've never programmed one, but I've used many event-driven APIs, and that's how they're structured.

Event Object is produced and added to a queue. Event System then takes an event and evaluates it by calling whatever respective methods for that event exist on relevant entities (which may include more entities than those colliding).

You can handle collision either in a component way or an event way- or possibly both. The event way is to make your collisionEvent and then call onCollide variants in the entities.


Did you get the idea to use event systems on 3d gaming forums? We like event systems for real-time game engines because they allow us to hide a lot of really complicated shit but still code custom behaviors into our entities directly and expect it to work. If YOU are making the engine, event systems are, IMO, a PITA.

So... In a component system, game logic takes place in a system. In an event system, your events are handled in systems but the logic is relative to the implementation of an entity.

So then event systems are fundamentally tied to the "inheritance" method of construction, where you subclass entities to get different kinds of entities?

mike3

  • Rogueliker
  • ***
  • Posts: 125
  • Karma: +0/-0
    • View Profile
    • Email
Was looking at the links you gave -- I notice the method described there has an "update" function on the systems and components, which suggests it's designed for a real-time game. How does this change in a turn-based game like a roguelike?

george

  • Rogueliker
  • ***
  • Posts: 201
  • Karma: +1/-1
    • View Profile
    • Email
How you do updates in a turn-based game depends on your 'time' system. In the most basic I-go-you-go scheme (the player takes their turn, then you iterate through all the other things and they take their turns in list order), you call update on a thing when its turn comes up.
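That basic I-go-you-go loop is only a few lines; a C++ sketch with made-up names:

```cpp
#include <string>
#include <vector>

struct Actor {
    std::string name;
    int turnsTaken = 0;
    void update() { ++turnsTaken; }  // stand-in for "take your turn"
};

// One full game turn: the player acts first, then every other
// actor is updated in list order.
void runTurn(Actor& player, std::vector<Actor>& others) {
    player.update();
    for (Actor& a : others)
        a.update();
}
```

Fancier time systems (energy/speed-based schedulers) replace the simple list iteration with a priority queue, but the shape of the loop is the same.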

I have to disagree that events/messaging and components don't mesh that well. They serve different purposes.

The biggest mistake with components IMO is to confuse game things with code objects. That leads you to the question like you had before of 'where do I put quit-game()'? Just because you have an entity system where entities are 'game things' does not mean that your source code objects are limited to describing just entities, their managers, and the systems that act on them.
« Last Edit: April 24, 2013, 01:41:13 AM by george »

requerent

  • Rogueliker
  • ***
  • Posts: 355
  • Karma: +0/-0
    • View Profile
Quote
However, then that means the render system deals with whole assets and not just graphical primitives, right? But how does that jibe with what the person on that gamedev.net site was saying?

Assets ARE data structures of graphical primitives. They are a set of instructions to the Render System as to how this particular asset is produced out of graphical primitives. The supplementary data provided by the graphics component tells us what transformations are applied to that data before/after its position is evaluated in real space*.

In other words, the Render System only deals with graphical primitives, but receives instructions in the form of assets and transformations. To be clear, a transformation is typically in the form of the traditional scale, translation, and rotation matrices (but there are others that we produce from a combination of these- like shearing and... stuff >_>).

* This can get a little confusing, because our components are supposed to be shared whenever possible. We don't want to put graphical transformations on a graphics component if that same information is used by logic (such as rotation and location). HOWEVER, sometimes we actually want the logical position and the mesh position to be different-- not often, but sometimes. This just gets into implementation details. The spirit of components remains the same.

Quote
And, I suppose, the RenderSystem also stores a copy of a pointer to the AssetLibrary so it can access it, right?

Yep! You CAN store the reference to the asset on the entity instead of the key for the library, but only processes related to Asset Loading should modify that reference. Most any specialized modifications or transformations of assets for a particular special entity are handled in its various graphics components.

Quote
Doesn't the input manager then also have to know about all possible places it can send the input? Like it has to know about the pile of entities, so it can send input there. It needs to know about the windows on the display, so it can send input there. But isn't that making too many dependencies?

We don't send input to entities, we just send input to Screens, Frame, Windows, whatever you want to call them. This gets into window management-- but basically each different window may have some specialized keybindings for that window. The input manager simply maps key input to an action for whatever window has focus. In the case of actual gameplay, we aren't necessarily sending input directly to entities, but through the game window, where the action is delegated to the appropriate entity.

Honestly, it's better to learn why this is important by NOT doing it the first couple times around. Window management is a PITA that you should avoid until necessary, as it will keep you from actually working on your game.
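A hypothetical C++ sketch of that focus-based dispatch (names invented): each window carries its own keybindings, and the input manager only ever talks to whichever window has focus, never to entities directly.

```cpp
#include <functional>
#include <string>
#include <unordered_map>

// A window maps keys to actions; unbound keys are simply ignored.
class Window {
    std::unordered_map<char, std::function<void()>> bindings;
public:
    void bind(char key, std::function<void()> action) {
        bindings[key] = std::move(action);
    }
    bool handleKey(char key) {
        auto it = bindings.find(key);
        if (it == bindings.end()) return false;  // not bound here
        it->second();
        return true;
    }
};

// The input manager knows nothing about entities or the map,
// only about which window currently has focus.
class InputManager {
    Window* focused = nullptr;
public:
    void setFocus(Window* w) { focused = w; }
    bool dispatch(char key) { return focused && focused->handleKey(key); }
};
```

The game window's actions are what end up delegating to the player's entity, so the entity dependency lives in the binding, not in the input manager.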

Quote
So then event systems are fundamentally tied to the "inheritance" method of construction, where you subclass entities to get different kinds of entities?

Yes, event systems are all about letting disjunct classes do all of the work through a messaging system. Component systems aren't that different, except that all messaging is isolated to relevant components in relevant systems- so there is no classical inheritance scheme. For game logic, it's all accomplishing the same thing; components just tend to give us more transparency and simplicity in specifying the functionality of our game.

Since classes do the logic in event systems, we can easily make design mistakes-- like, say, the application of a stun effect. Where should it go? Well, the instigator should send a message to another entity to be stunned. The recipient of the stun message then determines whether it should be stunned or not (or do something else). This represents a continuum of game logic that really doesn't need to be happening in two different places.


Nonetheless, we aren't tied into stupid hierarchies with event systems; we just have to use their power intelligently: http://en.wikipedia.org/wiki/Composition_over_inheritance.

Quote
Was looking at the links you gave -- I notice the method described there has an "update" function on the systems and components, which seems it's designed for a real-time game. How does this change in a turn-based game like a roguelike?

His game engine isn't necessarily real-time, it's tick-based (by default, real-time ticks). All you would do, in the Controller System, where input/AI is mapped to actions, is throw a 'waitForPlayerAction' in there and your game is now turn-based... well, almost. You also need to update each entity per system, instead of each system per entity.
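The update-order point can be shown with a tiny C++ sketch (invented names): run each system over all entities (system-major), rather than running every system on one entity before moving to the next (entity-major), so each phase of the turn resolves for everyone at once.

```cpp
#include <vector>

struct Ent { int hp = 10; int x = 0; };

// Each system is a pass over all entities.
void movementSystem(std::vector<Ent>& es) { for (auto& e : es) e.x += 1; }
void poisonSystem(std::vector<Ent>& es)   { for (auto& e : es) e.hp -= 1; }

// One turn, system-major: all movement resolves before any poison
// ticks, so entities always observe each other in a consistent phase.
void runTurnSystemMajor(std::vector<Ent>& es) {
    movementSystem(es);
    poisonSystem(es);
}
```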



Quote
The biggest mistake with components IMO is to confuse game things with code objects. That leads you to the question like you had before of 'where do I put quit-game()'? Just because you have an entity system where entities are 'game things' does not mean that your source code objects are limited to describing just entities, their managers, and the systems that act on them.

His source code uses an event system for component interactions. It's okay to have both for GAME LOGIC, but you need to be clear about what produces and handles events and what gets handled in systems. A global game message might be simpler to deal with as an event, but a global game message isn't typically going to affect entities that don't have relevant components.

But yea-- you need to make a distinction between Game Logic Events, Application Events, and possibly even System Events. You can structure an entire application as a component system, but data-oriented programming (DOP) is designed for data-driven logic, like a game.

mike3

  • Rogueliker
  • ***
  • Posts: 125
  • Karma: +0/-0
    • View Profile
    • Email
We don't send input to entities, we just send input to Screens, Frame, Windows, whatever you want to call them. This gets into window management-- but basically each different window may have some specialized keybindings for that window. The input manager simply maps key input to an action for whatever window has focus. In the case of actual gameplay, we aren't necessarily sending input directly to entities, but through the game window, where the action is delegated to the appropriate entity.

Honestly, it's better to learn why this is important by NOT doing it the first couple times around. Window management is a PITA that you should avoid until necessary, as it will keep you from actually working on your game.

But how does the window transfer the input to the relevant entity? Doesn't it need to know about entities to do that? Should that knowledge/dependency exist?

And doesn't any game that's more than trivial need UI menus?
« Last Edit: April 24, 2013, 10:26:47 PM by mike3 »

requerent

  • Rogueliker
  • ***
  • Posts: 355
  • Karma: +0/-0
    • View Profile
We don't send input to entities, we just send input to Screens, Frame, Windows, whatever you want to call them. This gets into window management-- but basically each different window may have some specialized keybindings for that window. The input manager simply maps key input to an action for whatever window has focus. In the case of actual gameplay, we aren't necessarily sending input directly to entities, but through the game window, where the action is delegated to the appropriate entity.

Honestly, it's better to learn why this is important by NOT doing it the first couple times around. Window management is a PITA that you should avoid until necessary, as it will keep you from actually working on your game.

But how does the window transfer the input to the relevant entity? Doesn't it need to know about entities to do that? Should that knowledge/dependency exist?

And doesn't any game that's more than trivial need UI menus?


Yea- but the issue of UI menus, input and windows is basically a solved problem. There isn't a good reason to do it from scratch. Do you want to make a game, or do you want to make an application framework?


As far as windowing and stuff-- basically, you pop a window open, and the top-level window is what catches input. A 'window' is just an abstraction that marks out input focus (and because the window also provides feedback, it communicates with the renderer too).

Input ----> Window ---->  Logic ---> Renderer ----> [Window] <----- Input

So the window is an abstraction that facilitates the interfacing of input to logic, which then results in some kind of  rendering feedback.

The input manager will catch specific keys first-- like 'esc' to immediately exit or something. Then it will pass the input down to the window, which may also use it-- then it will call a method of the game object that corresponds to that key-binding.

I'm not explaining it well because I don't have too much experience programming these meta-application features. A framework should do all of this for you.

mike3

  • Rogueliker
  • ***
  • Posts: 125
  • Karma: +0/-0
    • View Profile
    • Email
We don't send input to entities, we just send input to Screens, Frame, Windows, whatever you want to call them. This gets into window management-- but basically each different window may have some specialized keybindings for that window. The input manager simply maps key input to an action for whatever window has focus. In the case of actual gameplay, we aren't necessarily sending input directly to entities, but through the game window, where the action is delegated to the appropriate entity.

Honestly, it's better to learn why this is important by NOT doing it the first couple times around. Window management is a PITA that you should avoid until necessary, as it will keep you from actually working on your game.

But how does the window transfer the input to the relevant entity? Doesn't it need to know about entities to do that? Should that knowledge/dependency exist?

And doesn't any game that's more than trivial need UI menus?


Yea- but the issue of UI menus, input and windows is basically a solved problem. There isn't a good reason to do it from scratch. Do you want to make a game, or do you want to make an application framework?


As far as windowing and stuff-- Basically, you pop a window open- the top level window is what catches input. A 'window' is just an abstraction to make a distinction between input focus (because the window also provides feedback, it will communicate with the renderer too).

Input ----> Window ---->  Logic ---> Renderer ----> [Window] <----- Input

So the window is an abstraction that facilitates the interfacing of input to logic, which then results in some kind of  rendering feedback.

The input manager will catch specific keys first-- like 'esc' to immediately exit or something. Then it will pass the input down to the window, which may also use it-- then it will call a method of the game object that corresponds to that key-binding.

I'm not explaining it well because I don't have too much experience programming these meta-application features. A framework should do all of this for you.

Ah, "call a method on the game object". That's the part I wasn't sure about.  Thanks for that.

mike3

  • Rogueliker
  • ***
  • Posts: 125
  • Karma: +0/-0
    • View Profile
    • Email
Also, when you talk about "windows" and "UI menus" do you mean the OS's windows/menus, or menus/popups drawn inside the game? As if it's the latter, then what kind of "framework" would one use that would allow for easily providing the necessary render routines to render that stuff?

george

  • Rogueliker
  • ***
  • Posts: 201
  • Karma: +1/-1
    • View Profile
    • Email
You're using pyglet, right? If I recall correctly there are some UI frameworks written for pyglet that you could use. Check the mailing list and maybe Pyweek. (sorry, confused you with someone else  :P)

requerent

  • Rogueliker
  • ***
  • Posts: 355
  • Karma: +0/-0
    • View Profile
Also, when you talk about "windows" and "UI menus" do you mean the OS's windows/menus, or menus/popups drawn inside the game? As if it's the latter, then what kind of "framework" would one use that would allow for easily providing the necessary render routines to render that stuff?

Okay--- This is where things can get tricky. Any application is designed to work with some rendering medium, which is windowed by a Window Manager.

A Window Manager, for an operating system, has implementation-specific features that govern how it works. If you use the API for an operating system, you can create additional windows tied to your application that have window decorations. .NET, GTK, Qt, and Java, for example, provide comprehensive UI frameworks for desktop integration of your window management. This allows us to manage each window from the desktop. We don't care about that crap for games, but it's useful for applications.

Within a game, we may have a side-bar screen that shows player information, another screen that shows a minimap, and another that shows the player's position in the game world. All of these are separate screens or 'windows.' However, the focus is on the player's position screen. When we open our inventory, we open a new window, focus shifts, and now we're in 'inventory item selection'-mode, or something. Point is, we don't need a complex window managing system to make this look good in the game, we just need a concise one- typically something simple.

I don't have anything in mind if you're set on C++-- it really just depends on what APIs you're working with. I'm sure there are some good SDL/OpenGL ones out there, but I don't know about curses. Most curses programmers, afaik, just allot portions of the screen for specific data and may not bother with a window managing abstraction...

Now that I think about it further-- it may be better to do it yourself (sorry for going back on what I just said >_<). I mean-- all you have to do is create a way to manage which OBJECT in your application has input focus. This can be pretty simple for a roguelike. Press 'i' while on the game screen and the inventory screen pops up and input focus shifts. For an RL this shouldn't be too difficult to rationalize.

You can create a 'window' or 'screen' abstract base class, and have the input manager store a reference to whichever one is active. From there, it's like your standard finite state machine.
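A minimal C++ sketch of that screen-as-state-machine idea, with hypothetical names: each screen's key handler returns the screen that should have focus next, so shifting focus is just reassigning one pointer.

```cpp
#include <string>

class Screen {
public:
    virtual ~Screen() = default;
    // Returns the screen that should have focus after this keypress.
    virtual Screen* handleKey(char key) = 0;
};

class InventoryScreen : public Screen {
public:
    Screen* previous = nullptr;
    Screen* handleKey(char key) override {
        if (key == 'q') return previous;  // close inventory, restore focus
        return this;
    }
};

class GameScreen : public Screen {
public:
    InventoryScreen* inventory = nullptr;
    Screen* handleKey(char key) override {
        if (key == 'i') {                 // 'i' opens inventory, shifts focus
            inventory->previous = this;
            return inventory;
        }
        return this;
    }
};
```

The input manager's loop then reduces to `active = active->handleKey(ch);`, which is the whole "finite state machine" in one line.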