Author Topic: Nausicaa Engine

Anvilfolk

Nausicaa Engine
« on: July 26, 2013, 01:42:32 PM »
There seem to be some really competent devs around here, so I thought I'd use this as a place to get feedback, bounce ideas and generally talk about game engine development.

I'm currently developing an engine that I hope will be useful for most types of 2d games. I'll be using it for a few proofs of concept games, and for NausicaaRL, a roguelike based on Hayao Miyazaki's masterpiece.

I'm using Python 3.3 and PySFML 1.3, based on SFML2. The github repository is at
https://github.com/Anvilfolk/NausicaaRL

The main concepts I am adhering to while programming this are:
- Separation of logic and rendering
- Event-driven communication
- Entity system implementation
- Process management rather than an "update(); render(); input()" loop

To be honest, I have read about so many different concepts that my mix'n'match might not make a ton of sense at the moment. Either way, here are some base modules and explanations.
  • Entities.py: contains Entities as aggregations of Components, and Components as data only. Systems are also here, and they track which entities have which components so that they can act on their data (a rough sketch of these pieces, and of the event dispatch, follows this list). There are two concerns here:
    • Many entity systems put a RenderableComponent inside an Entity, which I dislike since it means the game logic is carrying visual information. It would then be hacky to make a dedicated server, which should not need any rendering information whatsoever. Therefore, there will be no renderable components. Rather, whenever a component is created, an Event will be fired, and the visual representation of the game, if any, will be responsible for figuring out how to display it.
    • Entity systems are typically used for real-time games. In a roguelike, you essentially have an event-driven system. I am thinking that the engine will have systems that can run in real time (they process all relevant entities every "frame"), and systems that simply work as event listeners to handle different things.
    There is also a World class that will contain the entities and manage the systems, components, etc.
  • Processes.py: Process and ProcessManager. A process is something that needs to execute over several frames, like animations, particle systems, AI, etc. It's fairly simple stuff: a process starts, runs, and can be paused, aborted, or end on its own.
  • Events.py: there will be a singleton EventManager for ease of access, but many important classes, like a World, a Game, etc will subclass EventManager to allow local, more efficient communication. Events are currently dispatched according to their class. I am hoping that having lots of more localised EventManagers will make this simple system fast and robust enough, as there won't be lots of listeners that get irrelevant events.
  • Game.py: manages GameStates. I am currently unsure whether this should be tied to the graphical representation or not. I feel a dedicated server will simply host the game itself, and not other states. This would essentially be a scene manager, for the main menu, game, score menu, etc. As such, I am thinking that the HumanView, the graphical representation of the game, will contain a GameManager, and each GameState will contain a GUIPane on which contents will be rendered.
  • HumanView.py: Views are intended to be anything that receives events and presents a view of the game to some consumer. In this case the HumanView presents it to the user, so it manages the Window, GUI, etc. A RemoteView will likely receive all important events in the game world and send them to remote clients; it does not need to worry about visuals at all.
  • nGUI.py: the GUI system, oh god, the GUI System. How much I hate this. It's so annoying to implement. Everything is either an NGUIBase or an NGUIPane (which extends NGUIBase). Panes may have more content. Everything has several properties that can be checked, like whether the mouse is within its bounds (independently of whether there is another element in front), and whether it has mouse focus, i.e. whether the mouse is on top of this element and nothing "above it" is capturing it. NGUIBasicButtons, and most buttons in general, will also have a "primed" state, where the mouse has been clicked but not released. Upon losing focus, they also lose primeness. Everything is managed through its own EventSystem... I should perhaps integrate it, though right now the system is fairly intuitive. You can add listeners to every GUI component, and if certain methods exist, they will be called when appropriate (onMouseFocus, onMouseDefocus, onMouseDown, onMouseUp, etc.).
  • TextManager, ResourceManager, Utility: can be used to load resources such as text, data, fonts and textures to be rendered.
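To make the list above a bit more concrete, here is a rough sketch of the shape I have in mind for the Entities/Components/Systems split and the class-based event dispatch. The component names and the exact method signatures are illustrative only, not the code that is actually in the repository:
Code: [Select]
# Rough sketch only -- component names and the EventManager interface are
# illustrative, not the actual NausicaaRL code.

class Component:
    """Components hold data only; no game logic lives here."""
    pass

class Position(Component):
    def __init__(self, x, y):
        self.x, self.y = x, y

class Velocity(Component):
    def __init__(self, dx, dy):
        self.dx, self.dy = dx, dy

class Entity:
    """An entity is just an id plus a bag of components."""
    def __init__(self, eid):
        self.eid = eid
        self.components = {}          # component type -> component instance

    def add(self, component):
        self.components[type(component)] = component

class EventManager:
    """Listeners register per event class; dispatch looks up by type."""
    def __init__(self):
        self.listeners = {}           # event class -> [callables]

    def register(self, event_class, listener):
        self.listeners.setdefault(event_class, []).append(listener)

    def post(self, event):
        for listener in self.listeners.get(type(event), []):
            listener(event)

class ComponentAddedEvent:
    def __init__(self, entity, component):
        self.entity, self.component = entity, component

class MovementSystem:
    """A real-time system: processes every tracked entity each frame."""
    def __init__(self, events):
        self.entities = set()
        events.register(ComponentAddedEvent, self.on_component_added)

    def on_component_added(self, event):
        # Track entities that have both components this system cares about.
        comps = event.entity.components
        if Position in comps and Velocity in comps:
            self.entities.add(event.entity)

    def update(self, dt):
        for entity in self.entities:
            pos = entity.components[Position]
            vel = entity.components[Velocity]
            pos.x += vel.dx * dt
            pos.y += vel.dy * dt
The renderer would register for ComponentAddedEvent in exactly the same way, which is how the "no renderable components" idea is supposed to work: the logic never knows who is listening.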

There are lots of tidbits that still need to be tested and implemented all across the board, in particular in the rendering, GUI and input parts. I feel it's getting closer to being usable as a game system, and I'm hoping to put together a very basic game soon to make sure the engine actually makes things easier.

I am still relatively unclear on how to handle input, both mouse and keyboard, however.  My current system is that whenever a key gets pressed, it is converted into a GameEvent that contains all types of actions it may represent. For instance, pressing "e" can mean either "open equipment screen" or "select item (e)", or lots of other things. The problem is that this interpretation is global, so the GameEvent with the open equipment screen and select item (e) gets propagated everywhere, which doesn't make a lot of sense. I am thinking that each GUI element should have its own way to convert keypresses.
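For what it's worth, this is roughly what I mean by letting each GUI element do its own conversion; the keymap contents and method names here are hypothetical:
Code: [Select]
# Hypothetical sketch of per-element key translation -- the keymap contents
# and method names are made up for illustration.

class GUIElement:
    def __init__(self):
        self.keymap = {}              # key -> factory producing a game event

    def bind_key(self, key, event_factory):
        self.keymap[key] = event_factory

    def handle_key(self, key):
        """Return a game event if this element understands the key, else None
        so the keypress keeps propagating to the element behind it."""
        factory = self.keymap.get(key)
        return factory() if factory else None

# The main game panel turns "e" into an open-equipment event...
# game_panel.bind_key("e", OpenEquipmentEvent)
# ...while an inventory screen on top of it turns the same key into a selection.
# inventory_screen.bind_key("e", lambda: SelectItemEvent(slot="e"))
Whichever element sits on top and has focus gets first crack at the key; if it returns None, the keypress keeps falling through.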

But what about mouse stuff? Do I make every sprite a "button" and add listeners to everything for selection? What about drag'n'drop? What about drag'n'drop between two GUI elements? These things still have me a little baffled.

Anyway, I just hope this will spark some interesting discussion like the one in the rendering question thread :)

In particular, I am not entirely convinced that an entity system for a game as complex as a roguelike will keep components as decoupled as they were meant to be... it might actually make things harder, as you try to send off events and counter-events (that might or might not happen depending on what components exist in an entity) instead of just calling a function! Besides, it seems like we're always going to be explicitly asking whether components exist, which kind of defeats the point :\
« Last Edit: July 26, 2013, 01:44:42 PM by Anvilfolk »
"Get it hot! Hit it harder!!!"
 - The tutor warcry

One of They Who Are Too Busy

Trystan

Re: Nausicaa Engine
« Reply #1 on: July 26, 2013, 11:49:49 PM »
Neat stuff. I'm not sure how well the entity and component system will work out, since roguelikes tend to have systems that all mess with each other. Going totally anti-encapsulation and treating creatures as just a bunch of public flags and counters is the only way I've been able to make something that doesn't feel like I'm trying to work around the system, but I applaud the effort. It will be cool to see how it works out. I assume you'll use this on any future 7DRLs?

Quote
I am still relatively unclear on how to handle input, both mouse and keyboard, however. My current system is that whenever a key gets pressed, it is converted into a GameEvent that contains all types of actions it may represent. For instance, pressing "e" can mean either "open equipment screen" or "select item (e)", or lots of other things. The problem is that this interpretation is global, so the GameEvent with the open equipment screen and select item (e) gets propagated everywhere, which doesn't make a lot of sense. I am thinking that each GUI element should have its own way to convert keypresses.

But what about mouse stuff? Do I make every sprite a "button" and add listeners to everything for selection? What about drag'n'drop? What about drag'n'drop between two GUI elements? These things still have me a little baffled.

What I've found useful in my projects is that each screen can register callback functions like in your system or specify what to translate the event into (which you could do with a callback that just publishes a new event but I like having this simple convenience). There's also a single RL object that affects all screens in the game. My events aren't really full classes - just a string and optional data like in jQuery.

So in my main function I tell the framework to create another game event for certain keypresses:
Code: [Select]
rl.bind('a', 'left');
rl.bind('d', 'right');
rl.bind('w', 'up');
rl.bind('s', 'down');
rl.bind('h', 'left');
rl.bind('j', 'down');
rl.bind('k', 'up');
rl.bind('l', 'right');

And the screen on the screenstack will get both events: the keypress and the game event. Each screen can do whatever makes sense or ignore them.
Code: [Select]
bind('down left', function():void { callback(player, -1, 1); exit(); } );
bind('down right', function():void { callback(player, 1, 1); exit(); } );
bind('enter', function():void { switchTo(new PlayScreen()); } );
bind('escape', exit );
bind('draw', draw );
In this example the 'enter' key would switch to the PlayScreen, but only if this is the current screen. So the game tells the framework what the keys mean.

The ActionScript KeyboardEvent is passed as extra data to the callbacks too. If the callback takes a parameter then it can look at the low level details of the event (was control pressed, is it key down or key up, etc) and handle that. Most of the time that's not needed so it's easy for the most common case and possible for the more complex ones.

MouseEvents will work the same way once I implement them.

I've found this to be flexible and extensible while also being easy to use and understand.
« Last Edit: July 27, 2013, 01:37:42 AM by Trystan »

Trystan

Re: Nausicaa Engine
« Reply #2 on: July 27, 2013, 12:23:33 AM »
Quote
What about drag'n'drop? What about drag'n'drop between two GUI elements? These things still have me a little baffled.

For drag and drop, perhaps you can add DragNDropStart, DragNDropContinue, and DragNDropEnd events? They'd be based either on the events from PyGame (I don't know if PyGame already creates those) or on watching the raw mouse events to create the DragNDrop events yourself. They could contain the start and current mouse positions.

So, for example, the view would get a DragNDropEnd event saying that something was dragged from 5,5 to 25,10 and it would know how to recognize that the user was dragging an item from their inventory to the ground (or whatever). The view may have subviews that help or it may do all the logic itself.

Now that I think about it - the sequence for DragNDrop events would be the same as for a selection tool. Click, Drag, and Release to select a region or Click, Drag, and Release to drag and drop. Maybe the drag and drop events are game specific events instead of simple input events. Interesting.
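Something along these lines, maybe (the event names are just the ones suggested above, and how you feed the raw mouse events in depends on your backend, PySFML, PyGame or otherwise):
Code: [Select]
# Sketch of turning raw mouse events into DragNDrop* events.
# Event names follow the suggestion above; feeding in the raw events is up
# to whatever backend you use.

class DragNDropStart:
    def __init__(self, start):
        self.start = start

class DragNDropContinue:
    def __init__(self, start, current):
        self.start, self.current = start, current

class DragNDropEnd:
    def __init__(self, start, end):
        self.start, self.end = start, end

class DragTracker:
    def __init__(self, post_event):
        self.post_event = post_event  # callback into your event system
        self.start = None
        self.dragging = False

    def mouse_down(self, pos):
        self.start = pos

    def mouse_move(self, pos):
        if self.start is None:
            return
        if not self.dragging:
            self.dragging = True
            self.post_event(DragNDropStart(self.start))
        self.post_event(DragNDropContinue(self.start, pos))

    def mouse_up(self, pos):
        if self.dragging:
            self.post_event(DragNDropEnd(self.start, pos))
        self.start = None
        self.dragging = False
That way a plain click never produces drag events, and the view that catches DragNDropEnd gets both positions and can decide whether 5,5 to 25,10 means "inventory to ground".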

Anvilfolk

Re: Nausicaa Engine
« Reply #3 on: July 27, 2013, 08:12:13 AM »
Thanks for the input (hah, no pun intended)! That actually makes a lot of sense. I think as far as my HumanView and GUI are concerned, input is still going to be input events - not game events. So I'll be clicking places, opening screens, and the main screen will eventually capture (e), open the inventory screen and so forth. What the inventory screen is going to do is simply launch an EquipEvent, which an EquipmentSystem will consume.

I think this could probably work!

I'm still not entirely sure I agree that drag'n'drop are game events. I feel they are still GUI events, but the problem is that upon a mouse-down, you need to check whether it's actually a mouseDown or a mouseDrag event... so I might need to wait, say, 0.25s before I send the mouse-down event. If in the meantime the button has not been released, then I start a mouseDrag event?
"Get it hot! Hit it harder!!!"
 - The tutor warcry

One of They Who Are Too Busy

requerent

Re: Nausicaa Engine
« Reply #4 on: July 27, 2013, 04:53:59 PM »
Is there a reason why you've implemented components and entities as classes? It doesn't seem pythonic.

Event systems at the GAME level can easily run counter to entity-based component systems. Each system is essentially a pure function that, given a game state, acts upon that state in some particular way. Application Events are wonderful, but Game level events with components really shouldn't be necessary. The whole idea of components is that only relevant elements of entities interact with one another. At what point should events ever broadcast between systems? They really shouldn't. You can modify the state of an entity so that it will be acted upon in a particular way by a subsequent system, but there is no event objectification here. You should be sending messages to other systems through the game state as a change of the game state. There are plenty of hybrid approaches, but if you're going close to pure (meaning your entity has no properties apart from components), you might want to delay the development of game events, as you will likely find that they aren't necessary. If it's simpler for you, of course, go for it. Do what feels right.


Quote
Thanks for the input (hah, no pun intended)! That actually makes a lot of sense. I think as far as my HumanView and GUI is going to be concerned, input is still going to be input events - not game events. So I'll be clicking places, opening screens, and the main screen will eventually capture (e), open the inventory screen and so forth. What the inventory screen is going to do is simply launch an EquipEvent, which an EquipmentSystem will consume.

I'm still not entirely sure I agree that drag'n'drop are game events. I feel they are still GUI events, but the problem is that upon a mouse-down, you need to check whether it's actually mouseDown or a mouseDrag event... so I might need to wait, say, 0.25s until I send the mouse-down event. If in the meantime the button has not been released, then I start a mouseDrag event?

Don't put a delay in for dragging. If the mouse is down when the mouse moves, it's a drag, otherwise it isn't. In 99% of all cases where you could either drag or click, there will only be one meaningful form of input-- as long as your GUI is designed well, you shouldn't need to obfuscate the difference.

No input should ever be a 'game' event. The 'game' is sent input through the GUI. The GUI is the I/O manager. Its whole function is to handle input events and provide feedback. When your UI catches an input bound to a game action, then you can do a game event.

Anvilfolk

Re: Nausicaa Engine
« Reply #5 on: July 27, 2013, 05:55:57 PM »
Thanks!

This is exactly the kind of discussion I was hoping to get. What you say makes perfect sense, though I'm thinking of a corner case between game events and gameplay events. I'm going to go through the execution flow of equipping an item, which should help me get a clearer idea of how this is going to work, and whether events make sense here.

  • Press "e" key for equipment.
  • Go through the GUI elements in postfix ordering. None of the sidepanels (health bar, message box, etc) capture this, but the game display GUIPanel does (This panel will also capture arrow keys and convert them into movement commands).
  • The game display GUIPanel accesses the HumanView and creates a new equipment screen, based on the Hero's inventory and equipment components.
  • The GUI would then send an EquipEvent with all required data, and an EquippingSystem would receive it and do its magic. The message box GUI element would also receive this event and write out a message saying you are equipping said item.

How would you implement this without events? I feel you'd need to have the GUI equipment screen explicitly keep track of the equipment system and of the message box, or have game logic within the GUI. On the other hand, what happens if you try to equip something that's too heavy? You send the EquipEvent, but then that's supposed to fail... should the EquippingSystem abort the event? Should there be EquipSuccess or EquipFail events? On EquipSuccess, it might make sense for an EquipBonusSystem to update armour values, etc., in some other component?
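Just to pin the question down, here is roughly what the event-based version could look like. EquipEvent, EquipSuccessEvent, EquipFailEvent and the Equipment component are hypothetical names, and the weight check is only there as an example of a failure condition:
Code: [Select]
# Hypothetical sketch of the event-based version. All class names here are
# made up to illustrate the question; this is not engine code.

class Equipment:
    """Component: data only."""
    def __init__(self, max_weight):
        self.max_weight = max_weight
        self.equipped = []

class EquipEvent:
    def __init__(self, actor, item):
        self.actor, self.item = actor, item

class EquipSuccessEvent(EquipEvent): pass
class EquipFailEvent(EquipEvent): pass

class EquippingSystem:
    def __init__(self, events):
        self.events = events
        events.register(EquipEvent, self.on_equip)

    def on_equip(self, event):
        equipment = event.actor.components[Equipment]
        if event.item.weight > equipment.max_weight:
            self.events.post(EquipFailEvent(event.actor, event.item))
        else:
            equipment.equipped.append(event.item)
            self.events.post(EquipSuccessEvent(event.actor, event.item))

# The message box listens for EquipSuccessEvent/EquipFailEvent and prints a
# message; an EquipBonusSystem could listen for EquipSuccessEvent and update
# armour values in some other component.
So the message box and an EquipBonusSystem would just listen for the success/fail events, but it does mean two extra event classes per action, which is exactly what makes me wonder whether I'm overcomplicating this.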

I'm also aware that I might be trying to fit a cube into a round hole here, and that roguelikes don't make a lot of sense in this case...


Also, what do you mean about using classes for components/entities being unpythonic? How else would you do it?


Right now I'm struggling with the usual temptation to make everything ultra-customisable through XML files, which is bogging me down. I'm going to forget about that for a little while and hardcode a few things, and once I realise how stuff is actually going to run, I'll try to make it customisable :)
"Get it hot! Hit it harder!!!"
 - The tutor warcry

One of They Who Are Too Busy

requerent

Re: Nausicaa Engine
« Reply #6 on: July 27, 2013, 09:11:37 PM »
  • Press "e" key for equipment.
  • Go through the GUI elements in postfix ordering. None of the sidepanels (health bar, message box, etc) capture this, but the game display GUIPanel does (This panel will also capture arrow keys and convert them into movement commands).
  • The game display GUIPanel accesses the HumanView and creates a new equipment screen, based on the Hero's inventory and equipment components.
  • The GUI would then send an EquipEvent with all required data, and an EquippingSystem would receive it and do its magic. The message box GUI element would also receive this event and write out a message saying you are equipping said item.

How would you implement this without events? I feel you'd need to have the GUI equipment screen explicitly keep track of the equipment system and of the message box, or have game logic within the GUI. On the other hand, what happens if you try to equip something that's too heavy? You send the EquipEvent, but then that's supposed to fail... should the EquippingSystem abort the event? Should there be EquipSuccess or EquipFail events? On EquipSuccess, it might make sense for an EquipBonusSystem to update armour values, etc., in some other component?

  • The ControllerSystem tells the Application that it is ready to receive an action via human input.
  • Player inputs command, which is first captured at the application level.
  • If it maps to a potential game action, we open a UI screen that prompts the user to fill out the parameters for that action (or not, if there are no parameters).
  • Once the action has been properly prepared (or for each subsequent selection), we query the gamestate to determine if the action (or current selection) is valid (the UI may pre-empt validity tests, such as graying out options).
  • The query will reply with information regarding the action's validity and create a UI event, to be caught and handled however/wherever.
  • The validated action is then returned to the ControllerSystem, which then modifies a corresponding component to get acted upon in a relevant way by some other system.


The UI is sending a direct and explicit message to the game- there is no event taking place. All the UI is doing, relative to game logic, is mapping some sequence of input to an entity's action. There is no event created when you tell the entity to perform some action- it's a direct request (message) to change the game state. All the UI is used for is filling out the function parameters, so to speak. Now, it's appropriate to keep the UI interchangeable, so the game should produce UI events that can be caught and handled in whatever way they want, but that's a one-way relationship.

Quote
Also, what do you mean about using classes for components/entities being unpythonic? How else would you do it?

Data structures. The entire game state could just be a data structure whereby each entity is a key and each component is another data structure. You don't need types or functions in either entities or components so you shouldn't use classes. Your game state is just a bunch of nested dicts and your systems are just pure functions acting on those dicts.

A game is just a database with processes and a UI.
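To be concrete, something like this is all it takes (purely illustrative; the component names and the action layout are whatever your game needs):
Code: [Select]
# Purely illustrative: the whole game state as nested dicts, systems as
# pure-ish functions that take the state and modify it.

game_state = {
    "tick": 0,
    "seed": 12345,
    "entities": {
        1: {"position": {"x": 5, "y": 5},
            "velocity": {"dx": 1, "dy": 0}},
        2: {"position": {"x": 2, "y": 8},
            "action":   {"kind": "equip", "item": "sword"}},
    },
}

def movement_system(state):
    """Acts on every entity that has both position and velocity."""
    for components in state["entities"].values():
        if "position" in components and "velocity" in components:
            components["position"]["x"] += components["velocity"]["dx"]
            components["position"]["y"] += components["velocity"]["dy"]

def equip_system(state):
    """Consumes pending 'action' components instead of receiving events."""
    for components in state["entities"].values():
        action = components.pop("action", None)
        if action and action["kind"] == "equip":
            components.setdefault("equipment", {"items": []})
            components["equipment"]["items"].append(action["item"])

# One game step is just running the systems over the state in order.
for system in (equip_system, movement_system):
    system(game_state)
The UI's only job is to fill in that "action" dict for the controlled entity; no game-level event objects are needed.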

Anvilfolk

Re: Nausicaa Engine
« Reply #7 on: July 27, 2013, 11:14:23 PM »
Where does the query code and modification code reside, and how does the UI have access to it? It's definitely logic code. I assume we can put it in a System, even if it's not the "usual" kind that runs every step. Does the application/GUI layer know about the System being used? I guess what I'm saying is that I somewhat dislike that the application layer is so aware of the game's internals :)

I wonder if there is some nice halfway solution, where the application layer is a bit more decoupled from the game logic itself, but you keep this querying, which is definitely less involved than a bajillion events going all over the place for the smallest action...



Ooooh, using a plain dict would actually be really lightweight and cool! Do you have any particular opinion on the relative merits of Entity Systems over just Component-based Entities (which allow game logic within components/entities)? I feel those would be better adapted to games that aren't real-time. At least the class-based design might accommodate both?

I think I'm trying to be a bit too general :(



Also, if you don't mind me asking - where and how did you learn all of these things? You seem to have very clear ideas on most of these issues, which is great :)
"Get it hot! Hit it harder!!!"
 - The tutor warcry

One of They Who Are Too Busy

requerent

Re: Nausicaa Engine
« Reply #8 on: July 28, 2013, 08:08:02 PM »
Good question, maybe something like this? Just wrote this up on a whim.


Code: [Select]
======== Application Layer ========

Manager
- Logically manages UI elements relative to a given schema.
- Useful for:
  * Managing the state of the application
  * Acting as the initial propagator or consumer of input
  * Propagating UI events
  * Functioning as a window manager

Scheme
- The type of UI to render and input to receive; a logical I/O context.
- Useful for:
  * Logically unifying platform backends (Mobile, Web, PC, etc.)
  * Typically separate schemes for input and output
  * Capturing platform-specific input or propagating it to the application
  * Determining how panes are rendered and interacted with

Platform
- The actual I/O context; a platform-specific backend.
- Useful for:
  * Deploying on multiple platforms (DirectX, SDL, Console, iOS, etc.)
  * Abstracting differences between platforms
  * Simplifying porting of applications

Pane
- A GUI element that communicates feedback to the user.
- Useful for:
  * Managing discrete I/O interactions
  * Abstracting different types of UI with content
  * Capturing UI events


======== Game Layer ========

Model
- A pure data representation of the game; a game state.
- Useful for:
  * Searching the state tree for AI
  * Serialization for saves and replays
  * Replication for networking

System
- A stateless and pure function that evaluates the game state, modifying it agnostically in some way.
- Useful for:
  * Isolation of logical interactions
  * Test-driven development
  * Maintaining fidelity amongst interactions

Entity
- A key for a list of components stored in a database as represented by a game state.
- Useful for:
  * Simplification of interactions
  * No limiting inheritance hierarchy
  * No confusion as to how entities should communicate
  * A configuration of components IS the "gameplay entity"; the 'key' doesn't matter!
  * Simplified state management

Component
- A set of isolated data corresponding to some System(s), aggregated as an entity.
- Useful for:
  * Isolating salient interactive elements
  * See 'System'

Ancillary
- An object used by system(s) to communicate with the application layer.
- Useful for:
  * Propagating events to the UI for arbitrary feedback
  * Querying the game state to evaluate the validity of a mapped action
  * Managing special feedback mechanisms, i.e. Cameras (which are part of the UI)
  * Managing special effects that may update per frame, rather than with game logic
  * Managing external libs, such as an OOP physics engine (a system writes changes to the state)
  * May not be stateless, but game logic shouldn't depend on its state
  * Also used by the application to serialize the state
  * Managing networking and other forms of communication

The idea is pretty straightforward. Logically isolate everything as much as possible such that it is trivial to replace, remove, or reconfigure any aspect of the program.

The "Game" is just a set of stateless systems that act upon and modify the Model, or a game state. Game states are easily hashed, serialized, replicated and otherwise insanely useful for data management (such as networking, saving, replays, etc). The Model, of course, is just a database containing entities which are just keys for sets of components that represent a salient actor within the game. The Model will also contain some other details, like the current tick and the seed.

Some of our logic may depend on foreign libraries that don't immediately seem to fit into our wonderful world of stateless systems and pure data game. For these, a system may defer logic to some foreign process. For example, say we're using Bullet. Bullet is very OOP, isn't easy to serialize, and isn't stateless. We can use an ancillary object to manage the physics simulation, but we're still writing data to/from this ancillary via a system. Particle effect management, sounds, cameras and some other logically arbitrary but essential feedback features that have logical components can also be managed by ancillaries. Ancillary comes from the latin ancilla, which means slave girl. These bitches also send UI events to our application so that it knows when to draw things. The UI can use an ancillary to query the game state to determine whether an action is valid or not. These gals really just manage how the logic of the game communicates with the UI. The nature of the component-based system will necessarily promote the isolation of ancillary features into separate and unique objects- so it's kinda convenient.

Now, how does an ancillary actually query the game to determine if an action is valid? If the relevant system we are querying is broken up into subroutines, we can easily use some of those routines to check it out. We can even duplicate the game state, run it through the system with the action appended and check to see what sort of UI event that system throws to another ancillary. It really just depends-- but it should probably be a chunk of specialized logic to test the validity. You may not even 'need' a querying technique. For example-- if you walk north but there is a wall and you don't want the player to lose a turn, the system just bubbles a UI Event (via an ancillary) and returns to the system responsible for catching the action. If you equip something in a place that isn't valid, some checks should fail and the system can throw a UI event. The main issue here is that we want to know pre-emptively if it will fail so that we can more directly communicate to the player what they're doing wrong; however, if somehow a command gets through that isn't valid, the System shouldn't allow it. The ancillary cannot be performing game logic; it's purely a UI greaser.
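A rough sketch of what I mean by a querying ancillary, continuing the dict-style state from before (all the names are arbitrary):
Code: [Select]
# Rough sketch of a querying ancillary -- names are arbitrary. The ancillary
# never changes the game state; it only answers the UI and forwards UI events.

import copy

class QueryAncillary:
    def __init__(self, state, ui_event_sink):
        self.state = state
        self.ui_event_sink = ui_event_sink   # callable the UI listens through

    def is_action_valid(self, entity_id, action):
        """Pre-empt obvious failures so the UI can gray options out.
        Runs the same check the real system uses, on a copy of the state."""
        trial = copy.deepcopy(self.state)
        trial["entities"][entity_id]["action"] = action
        return equip_check(trial, entity_id)

    def notify(self, ui_event):
        # Systems push feedback ("you bump into a wall") through here.
        self.ui_event_sink(ui_event)

def equip_check(state, entity_id):
    """Validity subroutine that the equip system itself would also call."""
    components = state["entities"][entity_id]
    action = components.get("action")
    if not action or action["kind"] != "equip":
        return False
    return action["item"] in components.get("inventory", {}).get("items", [])
The key point is that equip_check is the same subroutine the real equip system runs; the ancillary just calls it on a trial copy so the UI can gray things out ahead of time.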


So now we get to the actual application. First we have the platform we're running on. The Platform is represented by a set of interfaces filled out by native code. SDL, for example, will do the actual drawing, but we don't want to make calls to SDL because we want the freedom to drop, say, DirectX or even WebGL in as a backend. Easy, just fill out the interface and link for distribution on that platform. The logic of our UI, Application, and of our Game are not at all influenced by the platform.

Schema are logically distinct UI systems. Say our graphical interface is tiled like a traditional Console. That is a logical object, not a graphical or platform dependent one. The Logical console consumes drawing requests and sends them to some platform for rendering via an interface. You'll typically only have one input scheme, since the input needs to be rationalized by the platform code first, and all the input scheme will do is provide a mapping from platform to application actions (or rather, communicate platform 'intent' to the application). The output scheme could vary between desktop and mobile (which needs to be optimized for a small screen), but if you design it with mobile in mind, the differences could be really marginal.
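In Python terms the platform/scheme split could look as simple as this (only a sketch; the class and method names are arbitrary):
Code: [Select]
# Sketch of the platform/scheme split (illustrative only): the scheme is a
# logical console; the platform is whichever backend fills out the interface.

from abc import ABC, abstractmethod

class Platform(ABC):
    """Filled out per backend (SDL, DirectX, WebGL, plain terminal, ...)."""
    @abstractmethod
    def draw_glyph(self, x, y, glyph, colour): ...
    @abstractmethod
    def present(self): ...

class ConsoleScheme:
    """Logical tiled console; knows nothing about the backend."""
    def __init__(self, platform):
        self.platform = platform
        self.requests = []

    def put(self, x, y, glyph, colour="white"):
        self.requests.append((x, y, glyph, colour))

    def flush(self):
        for x, y, glyph, colour in self.requests:
            self.platform.draw_glyph(x, y, glyph, colour)
        self.platform.present()
        self.requests.clear()
Dropping in a different backend is then just another subclass of Platform; the console scheme and everything above it never change.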

From here we have a Manager to deal with Panes, focus, and determining how and where to send input. Panes contain formatting information that is drawn to the screen in a manner that is consistent with the logical output scheme. You probably want any pane capable of reacting to any UI event. For example, a camera pane may flash red when your avatar is struck by an enemy, just as the health bar ticks down and the message log updates. These can all use the same UI event to update, just interpreted in different ways.

Panes serve two purposes relative to the game logic: helping the user format input in a way that an ancillary can use to validate it, and evaluating UI events sent from ancillaries to update the screen.


And that's more or less how I'd structure a video game application. The only events that really occur are UI events, because everything else is communicated directly to where it needs to go. Input is passed from platform, to application, to logic, but I don't know if I would call that an event exactly. You could do input events also, but typically only the focus pane or the manager will be catching it, so you can just let the manager defer the input command.

Quote
Where does the query code and modification code reside, and how does the UI have access to it? It's definitely logic code. I assume we can put it in a System, even if it's not the "usual" kind that runs every step. Does the application/GUI layer know about the System being used? I guess what I'm saying is that I somewhat dislike that the application layer is so aware of the game's internals

Yea, the application doesn't need to know anything about the game. Input is always a logic activity, but since input is meaningless without feedback and may target the application first, the application needs to decide what to do. Don't get overzealous with systems though; their purpose is to modify game logic, nothing more. If something sits between the UI and the game state, like a query, then create a managed and isolated area for that to take place (an ancillary).

It's also not inappropriate to manage Systems as an FSM instead of iterating over each of them for every game action. In this way, each system is a sub-routine that can be switched to if the prior system necessitates it. For example, the ticking system that progresses the state of the simulation forward may defer to subroutines to handle gas and fire propagation, and other things like that.
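A tiny sketch of what driving systems as an FSM could look like, with each system naming its successor (illustrative only):
Code: [Select]
# Sketch of driving systems as a small FSM: each system returns the name of
# the next system to run (or None when the action is fully resolved).

def tick_system(state):
    state["tick"] += 1
    # Defer to the propagation subroutine only when something is burning.
    return "propagation" if state.get("fires") else None

def propagation_system(state):
    # ... spread gas and fire here ...
    return None

SYSTEMS = {"tick": tick_system, "propagation": propagation_system}

def run_game_step(state, start="tick"):
    current = start
    while current is not None:
        current = SYSTEMS[current](state)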

Also, you can make your entire application use a component based entity system, but I haven't considered a way to rationalize that. In a way, Panes are like entities and Widgets are like components, but the states of these objects aren't of logical importance, so it's kind of pointless. Use OOP to run your application, and CES to manage your game logic.

Quote
I wonder if there is some nice halfway solution, where the application layer is a bit more decoupled from the game logic itself, but you keep this querying, which is definitely less involved than a bajillion events going all over the place for the smallest action...

KISS. Hopefully you can see how an Ancillary object could figure this out in an isolated manner.

Quote
Ooooh, using a plain dict would actually be really lightweight and cool! Do you have any particular opinion on the relative merits of Entity Systems over just Component-based Entities (which allow game logic within components/entities)? I feel those would be better adapted to games that aren't real-time. At least the class-based design might accommodate both?

All game engines (that matter) use hybrid systems as opposed to pure OOP or pure CES.

An entity is a base class with virtual event handlers for every possible game event.
Components can be applied to entities, which are then managed by a system.
The systems modify relevant components and produce events for entities to handle if they want.


While this is VERY easy to understand and use, it results in a HUGE amount of obfuscation. Why? Because it can be very confusing where game logic should actively reside, and it can be difficult to find where game logic is actually taking place. You also lose the ability to serialize and replicate without extra work, and you sacrifice the fidelity of game states, making it difficult to search the state space. If you're working in a group, it can be tedious to manage where logic should actually go.
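For contrast, the hybrid shape described above looks roughly like this (illustrative only; it's the style I'm arguing against, shown just to make the trade-off concrete):
Code: [Select]
# Illustrative sketch of the hybrid approach described above -- shown only to
# make the trade-off concrete, not as a recommendation.

class Entity:
    """Base class with virtual handlers for every possible game event."""
    def __init__(self):
        self.components = {}

    def on_damaged(self, amount): pass      # subclasses override what they care about
    def on_equipped(self, item): pass

class CombatSystem:
    def attack(self, attacker, defender, amount):
        health = defender.components.get("health")
        if health is not None:
            health["hp"] -= amount           # the system modifies the component...
            defender.on_damaged(amount)      # ...and produces an event for the entity

# Game logic now lives partly in systems and partly in entity subclasses,
# which is exactly where the obfuscation comes from.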

Quote
Also, if you don't mind me asking - where and how did you learn all of these things? You seem to have very clear ideas on most of these issues, which is great

Uhh... I guess when I first taught myself HaXe, it forced me to think about programming in a more modular way. HaXe is a language designed to translate its code into code for many platforms (C++, Java, JavaScript/HTML5, AS3, emscripten, C#/Mono, etc). While this is wonderful for logic, there are always platform-backend-specific issues that you have to keep in mind. I just began naturally thinking about how to make all of the elements of my programs interchangeable, so it would be easy to drop in different renderers or input schemas.

I've also worked with some of the bigger game engines, so I've had some opportunities, out of necessity in some cases, to extrapolate their overall design. They are functional, but I don't like the overall approach. Independently I began researching CES, but there aren't very many programming paradigms that allow you to utilize it as efficiently as possible- specifically, you're just pretending to be data-driven while actually in an OOP paradigm. A lot of tutorials and documentation talks about Components and Entities as classes, but I think this is wrong because now every single type of component needs a specialized hashing method, extra work to register them with systems, which now also must be objects, which makes it less obvious what your memory footprint will be-- it's just... like programming a GUI system, which every programmer should hate >_<.


Anyhow-- the question got me thinking about what I actually do, because I haven't really thought about it explicitly before. Other people will have their own approaches, just find what makes sense and works for you.

Trystan

Re: Nausicaa Engine
« Reply #9 on: July 29, 2013, 12:00:00 AM »
There's a lot of talk of systems and managers here. Some folks really like that but in my experience, when I try to start with that or work with someone who has a "systems first" mindset, a lot of time gets spent thinking and typing but not much seems to happen. Another downside is the code for the game/app/website/whatever ends up being 75% coordinating different systems and only 25% relevant to the domain. Compare everything from the early Java community to everything from the recent Ruby community.

My general advice is to solve the problem at hand then abstract later. It's almost always easier to extract and reuse simple and working code than it is to start with grand abstractions that potentially solve all problems and try to make that work. Of course part of the fun of programming roguelikes is trying new things.

I'd really like to see some roguelike code that follows the principles that requerent has been talking about. I think it would be quite informative to all of us. My most recent roguelike code can be seen at https://github.com/trystan/PugnaciousWizards2/tree/master/src. The "knave" folder has the frameworky stuff and the "src/screens/PlayScreen.as" is similar to your ExploringState class. Other than how I'm handling animations, I'm pretty happy with how my knave framework is turning out.

Krice

Re: Nausicaa Engine
« Reply #10 on: July 29, 2013, 07:09:32 AM »
The question is: has anyone ever made an engine that has been used in more than one game without any modifications? I think a generic framework is OK, but after that the creation of the game itself should be more important than the engine. We have seen attempts to create a roguelike engine...

Anvilfolk

Re: Nausicaa Engine
« Reply #11 on: July 29, 2013, 08:28:50 PM »
Again, some really nice input! Thanks a ton!

I don't have a good time at all programming the graphics stuff, so I doubt I will separate too much of it. Either way, PySFML is available for a few platforms, and that takes care of that for me :)

I think right now my ancillaries are the Processes, and perhaps the way I'm thinking of using some systems. All of the game-related logic will be in systems, working either as event handlers or as functional objects that might or might not get called every game tick, while Processes are going to be more general and application-oriented. They might handle such things as animations or checking quests (although that last one is a little bit more game-logicky...).



Yeah, Trystan, I've been reading and coding non-game stuff for quite a while, and it's starting to weigh on me. Though I really feel that this is going to help me structure every game from now on - not that I've ever completed one ;)

Krice: from my limited experience, you can really work on exposing many things. I think King Arthur's Gold is starting to do it, exposing most logic for most items available in the game, though I'm not entirely sure. At this moment, allowing players to define animations and unit data for a 2d game is relatively easy, though you still need to hardcode a lot of effects. Still, you can essentially make games with the same mechanics, but completely different themes.

Other than that, entity systems are nice because for many 2d games you really can reuse components and systems without having to worry. All of the physics-related components, for instance, can be reused. If you have a flexible enough system for drawing things to the screen, a lot of the graphics code also does not need to be redone.

But if you go from a roguelike to a 2d platformer, then sure, a lot of code is going to have to be rewritten.
"Get it hot! Hit it harder!!!"
 - The tutor warcry

One of They Who Are Too Busy

Anvilfolk

Re: Nausicaa Engine
« Reply #12 on: July 31, 2013, 09:57:00 PM »
A little bit of visual progress, finally!

This is using a subclassed GameManager called Planet 5521, which is the name for the crappy simple game I'm going to try to use as a showcase for this. GameStates are using a ProcessManager to which you attach processes. These define your main loop in each game state. In this case, I have a process which takes care of input (which propagates to the HumanView), and tells the HumanView to draw.
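Roughly, that loop looks like this (trimmed down; the real Process also handles pausing, aborting and so on as described in the first post, and the method names here are simplified):
Code: [Select]
# Trimmed-down sketch of a GameState driving a ProcessManager each frame.
# Method names are simplified compared to the real Processes.py.

class Process:
    def __init__(self):
        self.finished = False

    def update(self, dt):
        raise NotImplementedError

class ProcessManager:
    def __init__(self):
        self.processes = []

    def attach(self, process):
        self.processes.append(process)

    def update(self, dt):
        for process in self.processes:
            process.update(dt)
        self.processes = [p for p in self.processes if not p.finished]

class InputAndDrawProcess(Process):
    """The per-frame process used by the menu state: pump input, then draw."""
    def __init__(self, human_view):
        super().__init__()
        self.human_view = human_view

    def update(self, dt):
        self.human_view.handle_input()
        self.human_view.draw()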

GameStates do not necessarily know about graphics. I want to be able to use GameManagers and GameStates for non-visual state. The MainMenuState receives the HumanView and manipulates its content pane directly to add an NGUISprite for the background and two NGUIBasicButtons for the choices.

Next up is making the actual game state. I am thinking that all of the game's representation is actually going to be made out of GUI components. Units will be simple GUI sprites with some extra information. This should allow me to get all of the mouse events easily, which is pretty cool. I'm kind of worried that as the unit count gets big, traversing the whole GUI tree every time the mouse moves might get pretty terrible, but we'll see.

I'm still wondering how exactly I'm going to handle the separation of model and view. In particular, I'm going to have units walking around, and they might have different weapons. I need a way to specify where the weapon is going to be held for every frame of the moving animation, where it's going to fire from, where it's pointing, etc. Part of this is going in the model, part of this is going in the view itself, and I'm just not sure how to work this out. Needs a little more thought :)
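One split I'm toying with, purely hypothetical at this point: the model only knows "unit X has weapon Y and is facing direction D", while the view keeps per-frame attachment data alongside the animation, something like:
Code: [Select]
# Hypothetical view-side animation data: per-frame offsets for where the
# weapon is held and where its muzzle sits. The model never sees any of this.

WALK_EAST = {
    "frames": ["walk_e_0.png", "walk_e_1.png", "walk_e_2.png"],
    "weapon_anchor": [(14, 9), (15, 10), (14, 9)],   # pixel offset of the grip
    "muzzle_offset": [(22, 8), (23, 9), (22, 8)],    # where projectiles spawn
}

def weapon_draw_position(unit_pos, animation, frame_index):
    ax, ay = animation["weapon_anchor"][frame_index]
    return unit_pos[0] + ax, unit_pos[1] + ay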

Code on GitHub has been updated.
"Get it hot! Hit it harder!!!"
 - The tutor warcry

One of They Who Are Too Busy

Endorya

Re: Nausicaa Engine
« Reply #13 on: August 01, 2013, 11:45:55 PM »
I'm just going to say something about the area I feel comfortable in, as Python is not my beach.
If not done already, you should center the text buttons and title label on the screen, and use Avast Antivirus instead of AVG ;)

"You are never alone. Death is always near watching you."

Anvilfolk

Re: Nausicaa Engine
« Reply #14 on: August 02, 2013, 08:47:01 AM »
Hahahahah, that's totally programmer art, to be honest. The sad part is it probably won't get better since I'm hopeless. Though by making units and graphics fully accessible and moddable, I hope something neat might come out of it eventually :)
"Get it hot! Hit it harder!!!"
 - The tutor warcry

One of They Who Are Too Busy