Programming / Nausicaa Engine
« on: July 26, 2013, 01:42:32 PM »
There seem to be some really competent devs around here, so I thought I'd use this as a place to get feedback, bounce ideas and generally talk about game engine development.
I'm currently developing an engine that I hope will be useful for most types of 2d games. I'll be using it for a few proofs of concept games, and for NausicaaRL, a roguelike based on Hayao Miyazaki's masterpiece.
I'm using Python 3.3 and PySFML 1.3, based on SFML2. The github repository is at
https://github.com/Anvilfolk/NausicaaRL
The main concepts I am adhering to while programming this are:
- Separation of logic and rendering
- Event-driven communication
- Entity system implementation
- Process management rather than "update(); render(); input()" loop
To be honest, I have read about so many different concepts that my mix'n'match might not make a ton of sense at the moment. Either way, here are some base modules and explanations.
- Entities.py: contains Entities as aggregation of Components, and Components as data only. Systems are also here, and they track which entities have which components so that they can act on their data. There are two concerns here:
- Many entity systems put a RenderableComponent inside an Entity, which I dislike since it means the game logic includes visual information. It would be hacky to then make a dedicated server that did not need any rendering information whatsoever. Therefore, there will be no renderable components. Instead, whenever a component is created, an Event will be fired, and the visual representation of the game, if any, will be responsible for figuring out how to display it.
- Entity systems are typically used for real-time games. In a roguelike, you essentially have an event-driven system. I am thinking that the engine will have systems that can run in real time (they process all relevant entities every "frame"), and systems that simply work as event listeners to handle different things.
- Processes.py: Process and ProcessManager. A process is something that needs to execute over several frames, like animations, particle systems, AI, etc. It's fairly simple stuff: a process can start, run, pause, end, or be aborted.
- Events.py: there will be a singleton EventManager for ease of access, but many important classes (a World, a Game, etc.) will subclass EventManager to allow local, more efficient communication. Events are currently dispatched according to their class. I am hoping that having lots of more localised EventManagers will make this simple system fast and robust enough, since listeners won't receive lots of irrelevant events.
- Game.py: manages GameStates. I am currently unsure whether this should be tied to the graphical representation or not. I feel a dedicated server will simply host the game itself, and not other states. This would essentially be a scene manager, for the main menu, game, score menu, etc. As such, I am thinking that the HumanView, the graphical representation of the game, will contain a GameManager, and each GameState will contain a GUIPane on which contents will be rendered.
- HumanView.py: Views are intended to be anything that receives events and presents a view of the game to anything. In this case the HumanView presents it to the user, so it manages the Window, GUI, etc. A RemoteView will likely receive all important events in the game world and send them to remote clients. It does not need to worry about visuals at all.
- nGUI.py: the GUI system, oh god, the GUI system. How much I hate this. It's so annoying to implement. Everything is either an NGUIBase or an NGUIPane (which extends NGUIBase). Panes may contain more content. Everything has several properties that can be checked, like whether the mouse is within its bounds (independently of whether there is another element in front), and whether it has mouse focus, i.e. whether the mouse is on top of it and nothing "above it" is capturing it. NGUIBasicButtons, and most buttons in general, will also have a "primed" state, where the mouse has been clicked but not released. Upon losing focus, they also lose primedness. Everything is managed through its own EventSystem... I should perhaps integrate it, though right now the system is fairly intuitive. You can add listeners to every GUI component, and if certain methods exist, they will be called when appropriate (onMouseFocus, onMouseDefocus, onMouseDown, onMouseUp, etc.).
- TextManager, ResourceManager, Utility
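To make the entity-system and event ideas above concrete, here's a minimal sketch of how data-only components, an Entity as a component bag, class-based event dispatch, and the "fire an Event on component creation so the view can react" approach could fit together. All names here are illustrative, not taken from the repository:

```python
class Component:
    """Components are pure data; no behaviour lives here."""

class Position(Component):
    def __init__(self, x, y):
        self.x, self.y = x, y

class ComponentCreated:
    """Event fired when a component is added; a view can listen for it."""
    def __init__(self, entity, component):
        self.entity, self.component = entity, component

class EventManager:
    def __init__(self):
        self._listeners = {}  # event class -> list of callbacks

    def register(self, event_class, callback):
        self._listeners.setdefault(event_class, []).append(callback)

    def post(self, event):
        # Dispatch by the event's class, as described above.
        for callback in self._listeners.get(type(event), []):
            callback(event)

class Entity:
    """An aggregation of components, keyed by component class."""
    def __init__(self, events):
        self._components = {}
        self._events = events

    def add(self, component):
        self._components[type(component)] = component
        # The logic layer knows nothing about rendering; it just
        # announces that a component now exists.
        self._events.post(ComponentCreated(self, component))

    def get(self, component_class):
        return self._components.get(component_class)

# A (hypothetical) view registers interest; a headless server simply wouldn't.
events = EventManager()
created = []
events.register(ComponentCreated, lambda e: created.append(e.component))
hero = Entity(events)
hero.add(Position(3, 4))
```

The point of the sketch is that the rendering side subscribes to ComponentCreated and decides how to display things; a dedicated server just never registers a listener.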
There are lots of tidbits that still need to be tested and implemented across the board, particularly in the rendering, GUI and input parts. I feel it's getting closer to being usable as a game system, and I'm hoping to implement a very basic game soon to make sure the engine actually makes things easier.
I am still relatively unclear on how to handle input, both mouse and keyboard, however. My current system is that whenever a key gets pressed, it is converted into a GameEvent that contains all the actions it may represent. For instance, pressing "e" can mean either "open equipment screen" or "select item (e)", or lots of other things. The problem is that this interpretation is global, so the GameEvent carrying both "open equipment screen" and "select item (e)" gets propagated everywhere, which doesn't make a lot of sense. I am thinking that each GUI element should have its own way to convert keypresses.
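The per-element idea could look something like this: each GUI element owns its own keymap, and only the focused element translates the raw keypress, so there's no global GameEvent carrying every possible meaning. This is a sketch of the direction, with made-up names:

```python
class InputContext:
    """Hypothetical: each GUI element owns a keymap from raw key to action."""
    def __init__(self, keymap):
        self.keymap = keymap

    def translate(self, key):
        # Returns this element's interpretation, or None if the key
        # means nothing here (it could then bubble to a parent pane).
        return self.keymap.get(key)

# "e" means different things in different contexts:
map_view = InputContext({"e": "open_equipment", "i": "open_inventory"})
item_menu = InputContext({"e": "select_item_e"})

# Only the focused element interprets the keypress.
focused = item_menu
action = focused.translate("e")
```

Unhandled keys could bubble up through parent panes, which would also give a natural place for global bindings like "quit".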
But what about mouse stuff? Do I make every sprite a "button" and add listeners to everything for selection? What about drag'n'drop? What about drag'n'drop between two GUI elements? These things still have me a little baffled.
Anyway, I just hope this will spark some interesting discussion like the one in the rendering question thread.
In particular, I am not entirely convinced that an entity system for a game as complex as a roguelike will make components as decoupled as they were meant to be... it might actually make things harder, as you send off events and counter-events (that might or might not happen depending on what components exist in an entity) instead of just calling a function! Besides, it seems like we're always going to be explicitly asking whether components exist, which kind of defeats the point :\
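One common way to soften the "always asking whether components exist" problem (a general technique, not something from the engine above) is to have each System declare the component classes it requires and filter entities once at registration time, so handlers can then assume the components are there. A self-contained sketch with illustrative names:

```python
class Entity:
    def __init__(self):
        self._components = {}

    def add(self, component):
        self._components[type(component)] = component

    def get(self, component_class):
        return self._components.get(component_class)

class Health:
    def __init__(self, hp):
        self.hp = hp

class Position:
    def __init__(self, x, y):
        self.x, self.y = x, y

class System:
    required = ()  # component classes this system needs

    def __init__(self):
        self.entities = []

    def consider(self, entity):
        # The existence check happens here, once, instead of being
        # repeated inside every event handler or update step.
        if all(entity.get(c) is not None for c in self.required):
            self.entities.append(entity)

class CombatSystem(System):
    required = (Health, Position)

combat = CombatSystem()
rock = Entity(); rock.add(Position(0, 0))
orc = Entity(); orc.add(Position(1, 1)); orc.add(Health(10))
combat.consider(rock)
combat.consider(orc)
# Only the orc qualifies; combat handlers can now use Health and
# Position without checking for them each time.
```

It doesn't remove the checks entirely, but it concentrates them in one place rather than scattering hasattr-style queries through the game logic.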