Temple of The Roguelike Forums
Development => Programming => Topic started by: mike3 on July 14, 2013, 06:39:02 AM
-
Hi.
I was wondering about this: When drawing the level map in a roguelike, there seem to be two things one can do:
1. draw the entire map every frame
2. draw only what's "changed".
In the case of a simple text-based one, which is better? It would seem 2) would be faster, but also at the cost of added code complexity (you need to notify the rendering system of any and all changes to the map somehow). Does this matter on modern machines or is it just not worth the added complexity now? How do you do this in your games?
And what happens when you get up to graphics like with tiles and animated tiles? Or with "chase" (is that the right term?) scrolling where the scrolling follows the player as they move around (which'd seem to necessitate a full redraw anyways since everything on the screen moves)?
-
In general, only deal with performance when it becomes a problem and only add code complexity when absolutely necessary.
-
How do you do this in your games
I'm using SDL in software drawing mode, so it's useful to draw and update only the areas that have changed. I divide the screen into a few main update areas, like the game view, side stats, and message output. When something changes, the GUI gets a note about the affected area, and it's updated once per turn or flushed when needed. For me this simple technique came a bit late, since I was just happily drawing stuff without realizing that some stats get updated frequently and can trigger multiple updates of the same area, which is slow.
With SDL's software drawing it's still good not to update the entire screen per turn, because it is noticeably slower. It can be different with opengl or other hardware drawing though.
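The per-area update scheme described above can be sketched language-agnostically. Here is a hedged Python illustration -- the `DirtyRegions` class and the area names are inventions for this example, not SDL API:

```python
# Illustrative sketch of per-area dirty tracking (not actual SDL calls).
class DirtyRegions:
    """Collect named screen areas that changed this turn, then flush once.

    Marking the same area twice costs nothing extra, which avoids the
    "multiple updates on the same area" problem mentioned above."""

    def __init__(self):
        self._dirty = set()

    def mark(self, area):
        self._dirty.add(area)

    def flush(self, update_fn):
        """Call update_fn once per dirty area, then clear."""
        for area in sorted(self._dirty):
            update_fn(area)
        self._dirty.clear()

updated = []
regions = DirtyRegions()
regions.mark("side_stats")
regions.mark("side_stats")   # repeated change: coalesced, not redrawn twice
regions.mark("gameview")
regions.flush(updated.append)   # one update per area, at flush time
```

With SDL 1.2's software surfaces, `update_fn` would wrap a per-rectangle screen update; the point is only that repeated changes coalesce before anything touches the screen.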
-
It really depends on how heavy your rendering work becomes but it is good practice in general to render only what is needed. If you have real time animated tiles you will probably have to render the whole scene constantly. If your world only changes when the player ends a turn, you will probably render the whole world just once per turn.
In any case, I advise you to first see how performance plays out while rendering everything, and you should do this on a computer below average in terms of processing power. As Trystan advised, add code complexity only when really necessary.
-
In most games there are situations when there are a lot of changes and you are basically updating everything, so you need to handle that anyway. It's best to optimize the worst case scenarios, if you're at it.
-
Just pick something that supports hardware acceleration. All 3d games redraw the entire screen every single frame, so graphics cards are used to it - and we're talking millions of triangles. I'm pretty sure you won't quite get there with roguelikes :)
Krice: Perhaps check out SFML instead of SDL. It supports windowed hardware acceleration, and is also a little more high level and easy to use.
Overall, make sure you're working with the right tools and libraries, and it shouldn't be a problem!
-
Just pick something that supports hardware acceleration. All 3d games redraw the entire screen every single frame, so graphics cards are used to it - and we're talking millions of triangles. I'm pretty sure you won't quite get there with roguelikes
You might be surprised how fast poorly written rendering code will bog a card down. Also, how well you write your rendering logic will determine how portable your application is.
Krice: Perhaps check out SFML instead of SDL. It supports windowed hardware acceleration, and is also a little more high level and easy to use.
I think he's using software drawing mode on purpose, as SDL can use opengl for hardware acceleration just as well as SFML.
@OP,
I'm working on a cross-platform console emulator right now, and I make a distinction between the logical console and a canvas. The logical console is a container that stores a buffer of all the drawing cells and provides drawing methods. The canvas is an interface that represents the rendering context. The drawing methods of the console send instructions on only what has changed to the canvas, so only new drawing updates take place. The canvas also obviously takes care of resizing and analyzing font metrics to determine how to best fill the screen.
A separate asset manager can parse a file that contains a mapping of keys to assets that will serve as a library for the canvas. This way you can make it easy to modify the graphics with a simple set of instructions. Instead of passing characters, you pass keys through which the canvas looks up the appropriate glyph. You can then do animations logically by simply swapping out the appropriate glyph. You don't even need much foresight here- simply have a key for every action you implement. The animation won't get used unless it exists.
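The console/canvas hand-off described above might look something like this sketch (Python for brevity; all class and method names here are assumptions for illustration, not the actual emulator's API):

```python
# Hedged sketch of the console/canvas split: the console stores a cell
# buffer and forwards only cells that actually changed to the canvas.

class Canvas:
    """Stand-in rendering context: records the draw calls it receives."""
    def __init__(self):
        self.calls = []
    def draw_cell(self, x, y, glyph):
        self.calls.append((x, y, glyph))

class Console:
    """Logical container holding a buffer of all the drawing cells."""
    def __init__(self, w, h, canvas):
        self.w, self.h = w, h
        self.canvas = canvas
        self.cells = [[" "] * w for _ in range(h)]
    def put(self, x, y, glyph):
        # Only changed cells generate work for the canvas.
        if self.cells[y][x] != glyph:
            self.cells[y][x] = glyph
            self.canvas.draw_cell(x, y, glyph)

canvas = Canvas()
con = Console(80, 25, canvas)
con.put(1, 1, "@")
con.put(1, 1, "@")   # unchanged cell: no second draw call reaches the canvas
```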
-
You might be surprised how fast poorly written rendering code will bog a card down. Also, how well you write your rendering logic will determine how portable your application is.
This is very true, 3d action games were here way before hardware acceleration. Preparation for rendering is where costly mistakes are easy to make, by copying and/or constructing a lot of short-lived objects.
-
@OP,
I'm working on a cross-platform console emulator right now, and I make a distinction between the logical console and a canvas. The logical console is a container that stores a buffer of all the drawing cells and provides drawing methods. The canvas is an interface that represents the rendering context. The drawing methods of the console send instructions on only what has changed to the canvas, so only new drawing updates take place. The canvas also obviously takes care of resizing and analyzing font metrics to determine how to best fill the screen.
A separate asset manager can parse a file that contains a mapping of keys to assets that will serve as a library for the canvas. This way you can make it easy to modify the graphics with a simple set of instructions. Instead of passing characters, you pass keys through which the canvas looks up the appropriate glyph. You can then do animations logically by simply swapping out the appropriate glyph. You don't even need much foresight here- simply have a key for every action you implement. The animation won't get used unless it exists.
Hmm. However, when one is using the console, does one call the drawing methods for every tile in the world that is visible on the screen each render, or only those that have changed? As the latter means the game logic then needs to inform the rendering system every time something is done to the map. Is that an entanglement of game logic and rendering?
And if one wants to do "chase" scrolling (which I mentioned earlier), where the view always scrolls as the player moves, doesn't this require every character on the console to be changed anyway? So how much benefit is there from adding the extra code complexity (as opposed to a simple every-tile redraw loop)?
-
@OP,
I'm working on a cross-platform console emulator right now, and I make a distinction between the logical console and a canvas. The logical console is a container that stores a buffer of all the drawing cells and provides drawing methods. The canvas is an interface that represents the rendering context. The drawing methods of the console send instructions on only what has changed to the canvas, so only new drawing updates take place. The canvas also obviously takes care of resizing and analyzing font metrics to determine how to best fill the screen.
A separate asset manager can parse a file that contains a mapping of keys to assets that will serve as a library for the canvas. This way you can make it easy to modify the graphics with a simple set of instructions. Instead of passing characters, you pass keys through which the canvas looks up the appropriate glyph. You can then do animations logically by simply swapping out the appropriate glyph. You don't even need much foresight here- simply have a key for every action you implement. The animation won't get used unless it exists.
Hmm. However, when one is using the console, does one call the drawing methods for every tile in the world that is visible on the screen each render, or only those that have changed? As the latter means the game logic then needs to inform the rendering system every time something is done to the map. Is that an entanglement of game logic and rendering?
No. That's the responsibility of your canvas. The console represents the graphical state, whereas the canvas is the rendering context. For a Roguelike, you don't really need to use double-buffering, so you aren't redrawing the scene every frame-- instead, you just pass changes to the canvas and the canvas makes adjustments accordingly. Regardless, it's the canvas's responsibility to determine how to handle changes to its own state.
Correct- the Console should not be called from game logic. See below.
And if one wants to do "chase" scrolling (which I mentioned earlier), where the view always scrolls as the player moves, doesn't this require every character on the console to be changed anyway? So how much benefit is there from adding the extra code complexity (as opposed to a simple every-tile redraw loop)?
The Console is a logical object but it's part of the application logic, not the game logic. The other half of it is a window manager. The WM divvies up console regions to different panes and manages the focus pane (the one that gets input). Panes contain widgets and represent pop-up messages, the information log, any UI element, and the game Camera. The WM and its panes could be completely static and super simple, but the abstraction will be valuable regardless.
The Camera is a logical object that describes what drawing information will get sent to its containing pane. Basically, you have a map of the game. The camera describes what region of that map to draw-- in the case of a scrolling game, it will be centered on the player character (and most likely collide with the edge of the map). In this way, the pane can be moved around without affecting what the player sees. You could also have multiple panes with cameras focusing on different things. This is kind of the basics of MVC (model, view, controller).
Oh-- and you will have things that don't get updated on your screen every frame-- UI elements, some duplicate adjacent tiles, etc. It's good practice and should be trivial to implement.
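The chase-camera behavior described above (centered on the player character, and most likely colliding with the edge of the map) can be sketched as a small clamping function; the names are illustrative:

```python
# A possible 'chase' camera: center the view on the player, clamping
# the viewport at the map edges so it never shows out-of-map cells.

def camera_origin(player_x, player_y, view_w, view_h, map_w, map_h):
    """Top-left map coordinate of the visible region."""
    ox = min(max(player_x - view_w // 2, 0), max(map_w - view_w, 0))
    oy = min(max(player_y - view_h // 2, 0), max(map_h - view_h, 0))
    return ox, oy

# Centered in the open field:
print(camera_origin(50, 50, 21, 11, 100, 100))   # (40, 45)
# Colliding with the map edge near a corner:
print(camera_origin(2, 2, 21, 11, 100, 100))     # (0, 0)
```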
-
Hmm. However, when one is using the console, does one call the drawing methods for every tile in the world that is visible on the screen each render, or only those that have changed? As the latter means the game logic then needs to inform the rendering system every time something is done to the map. Is that an entanglement of game logic and rendering?
No. That's the responsibility of your canvas. The console represents the graphical state, whereas the canvas is the rendering context. For a Roguelike, you don't really need to use double-buffering, so you aren't redrawing the scene every frame-- instead, you just pass changes to the canvas and the canvas makes adjustments accordingly. Regardless, it's the canvas's responsibility to determine how to handle changes to its own state.
Correct- the Console should not be called from game logic. See below.
So what does the game logic do when it makes a change to the map? Also, isn't the canvas inside the console? Where are these drawing methods exposed to? What calls them? I thought that the canvas was behind the drawing methods, buried inside the console, and thought the drawing methods are then called by some outer render/update routine. So what calls the drawing methods and how?
And if one wants to do "chase" scrolling (which I mentioned earlier), where the view always scrolls as the player moves, doesn't this require every character on the console to be changed anyway? So how much benefit is there from adding the extra code complexity (as opposed to a simple every-tile redraw loop)?
The Console is a logical object but it's part of the application logic, not the game logic. The other half of it is a window manager. The WM divvies up console regions to different panes and manages the focus pane (the one that gets input). Panes contain widgets and represent pop-up messages, the information log, any UI element, and the game Camera. The WM and its panes could be completely static and super simple, but the abstraction will be valuable regardless.
The Camera is a logical object that describes what drawing information will get sent to its containing pane. Basically, you have a map of the game. The camera describes what region of that map to draw-- in the case of a scrolling game, it will be centered on the player character (and most likely collide with the edge of the map). In this way, the pane can be moved around without affecting what the player sees. You could also have multiple panes with cameras focusing on different things. This is kind of the basics of MVC (model, view, controller).
Oh-- and you will have things that don't get updated on your screen every frame-- UI elements, some duplicate adjacent tiles, etc. It's good practice and should be trivial to implement.
How is the pane-handling done with regards to the console object? How does the WM stand in relation to the console object? What provides the features to draw to the panes? What draws to them -- i.e. if there's a pane for the game area, what draws to this?
Also, does this console/canvas/WM thing also work when considering a tile-based game? What if you want to make it flexible so one can add both a tile and text version?
-
How is the pane-handling done with regards to the console object? How does the WM stand in relation to the console object? What provides the features to draw to the panes? What draws to them -- i.e. if there's a pane for the game area, what draws to this?
A WM arranges Panes on the screen in a logical manner and sets which pane has focus (gets input).
A pane is a managed area of the console. Each pane draws to an area of the console that is provided by the WM. Typically, a pane will contain widgets. A widget is a managed GUI feature-- a widget could be a line of text, a checkbox, a button, or a paragraph of scrollable text. A pane will likely contain many widgets, each with their own descriptions of how they should be drawn.
As mentioned before, a Camera draws to the pane for the game area. The current game area consists of a logical map of objects (your game grid). The Camera is a logical object that describes what area of the map to render to the pane. If the offset of the camera is consistent with the player's avatar, then it will be a 'chase'-camera (the camera could just be a location and then the pane just draws as much as it can). The camera doesn't really do anything but look at the state of the game (which is the map) and send information.
Now-- if you want to add fancy animations or effects and such, you can use some looping update calls to the map (like, to draw a colorful effect or something), which should cascade through your camera to tell the pane to update the console.
Also, does this console/canvas/WM thing also work when considering a tile-based game? What if you want to make it flexible so one can add both a tile and text version?
Anytime you want flexibility, you just make an abstraction. The difference between a tile and text version is nothing more than how assets are managed. The asset tied to a goblin, for example, isn't 'g,' but a unique-identifier. Then you can have a config file that lists mappings of unique-ids to assets. When the canvas gets text, it draws text, when it gets a unique-id, it looks up the appropriate asset, caches it, and draws that. It could be either a text or a tile- doesn't matter.
If the sprite/tile is animated, then the canvas will need a way to manage that on its end.
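The key-to-asset indirection described above might be sketched like this (the mappings and the function name are invented for illustration; a real config file would supply them):

```python
# Game code passes a unique id ("goblin"); the display layer resolves it
# against whichever mapping the build loaded. Text vs. tile versions
# differ only in the mapping, not in the call sites.

TEXT_ASSETS = {"goblin": "g", "player": "@", "wall": "#"}
TILE_ASSETS = {"goblin": "goblin.png", "player": "hero.png", "wall": "wall.png"}

def resolve(key, assets, fallback="?"):
    """Look up the drawable for a logical key."""
    return assets.get(key, fallback)

assert resolve("goblin", TEXT_ASSETS) == "g"
assert resolve("goblin", TILE_ASSETS) == "goblin.png"
```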
-
How is the pane-handling done with regards to the console object? How does the WM stand in relation to the console object? What provides the features to draw to the panes? What draws to them -- i.e. if there's a pane for the game area, what draws to this?
A WM arranges Panes on the screen in a logical manner and sets which pane has focus (gets input).
A pane is a managed area of the console. Each pane draws to an area of the console that is provided by the WM. Typically, a pane will contain widgets. A widget is a managed GUI feature-- a widget could be a line of text, a checkbox, a button, or a paragraph of scrollable text. A pane will likely contain many widgets, each with their own descriptions of how they should be drawn.
As mentioned before, a Camera draws to the pane for the game area. The current game area consists of a logical map of objects (your game grid). The Camera is a logical object that describes what area of the map to render to the pane. If the offset of the camera is consistent with the player's avatar, then it will be a 'chase'-camera (the camera could just be a location and then the pane just draws as much as it can). The camera doesn't really do anything but look at the state of the game (which is the map) and send information.
Now-- if you want to add fancy animations or effects and such, you can use some looping update calls to the map (like, to draw a colorful effect or something), which should cascade through your camera to tell the pane to update the console.
Also, does this console/canvas/WM thing also work when considering a tile-based game? What if you want to make it flexible so one can add both a tile and text version?
Anytime you want flexibility, you just make an abstraction. The difference between a tile and text version is nothing more than how assets are managed. The asset tied to a goblin, for example, isn't 'g,' but a unique-identifier. Then you can have a config file that lists mappings of unique-ids to assets. When the canvas gets text, it draws text, when it gets a unique-id, it looks up the appropriate asset, caches it, and draws that. It could be either a text or a tile- doesn't matter.
If the sprite/tile is animated, then the canvas will need a way to manage that on its end.
So then I think of something like this:
Canvas object:
has methods to "receive a change" (what does that mean?)
actually draws to the screen (??)
Console object:
has a Canvas object inside it.
has drawing methods
has methods to get input from the keyboard (yes? or no?)
has a method to designate an area as a pane
Pane:
Has drawing methods too(?)
Has event handlers to trap on input(?)
Widget:
Has drawing function
Has event handlers to trap input
Window manager:
Holds panes and associated UI widgets. Manages which one has "focus".
Takes in input events and passes them to the widget and pane with focus
Camera:
Has a position, map reference, and function to draw the map to its associated Pane. (how much of the map does this tell the lower facilities to draw?)
Is this about right? What about the spots where I'm still unclear (marked with "?"s)
-
Sorta.
Canvas is an interface that gets filled out by platform-dependent code. Essentially, you would put a wrapper for, e.g., Flash, OpenGL, or Curses here. My canvas interface really only has one method, which is to commit an array of drawing instructions. In this case, each instruction contains a glyph (either a character or a key), a position, a foreground color, and a background color (I have everything nullable so that we can pass only the information we want).
Console is a logical object that describes the state of the canvas. Drawing information sent to the console is cached and propagated to the canvas for actual rendering. While the Console/Canvas CAN be rationalized as a single interface, I personally prefer composition over inheritance here. The Console does NOT take input. It's purely an abstraction over the platform rendering library (curses, opengl, flash, html5, etc). We'll have another object that handles the input and mapping, but I don't think that should be bundled with the console (though it could be). You might say: "Wait a minute! Any platform rendering library is also going to have a way to get input-- shouldn't the console also handle that information?" It's too monolithic for my taste. I want the console to be a one-way object for writing data to the screen. I'd prefer to have some other object deal with the I/O, even if it uses the same reference to the platform that the console does.
The WM->Pane->Widget relationship could be thought of as a tree. The WM is a relative root node, a Pane is a sibling, and a widget is a leaf. The WM is really just a special Pane that caches which child pane is getting input. Any pane could contain any number of other panes, using the same logic as the WM. A widget is an actual drawing object. While a pane may have a background color (or even just a tint), it isn't sending very many drawing instructions (though it could have a title and scroll-bars)- just telling the widgets when they need to send theirs. The purpose of the Pane is to manage the local drawing context-- that is, the relative origin that its children are being drawn to. In this sense, each widget/pane has a local and global position, where a pane inherently describes a rendering depth.
The Camera should be pretty straightforward. Your logical map stores spatial information. The camera just starts from one corner and iterates through to the next grabbing the asset key of each entity in the map (an empty space or floor tile is also an entity).
Asset Manager - Is a mapping of glyph-codes (an array key/index) to assets. You can ensure clarity by using strings as keys but an enumeration or consts might be more to your liking. The Canvas will receive a drawing instruction in the form of a glyph-code and color information, it will then pass the glyph-code into the asset manager to fetch which asset it is supposed to use, and then draw accordingly. The Asset Manager will likely read a configuration file that describes this mapping. Strings are cool because you can use no-brainer concatenations to describe different animations if you want. You can even encode information into the key to allow the Asset Manager to parse for and composite a group of glyphs-- such as if you want to show equipment and stuff. That sort of thing should be an afterthought though- as long as you keep things modular it should be easy to implement later.
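One possible shape for the "array of drawing instructions" commit described above, with every field nullable so callers pass only what they want to change (the field names are assumptions for illustration, not the actual interface):

```python
# Hedged sketch of a drawing instruction with nullable fields, mirroring
# the "I have everything nullable" idea above.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Instruction:
    x: int
    y: int
    glyph: Optional[str] = None   # character or asset key; None = keep current
    fg: Optional[int] = None      # foreground color; None = keep current
    bg: Optional[int] = None      # background color; None = keep current

def commit(instructions: List[Instruction]):
    """A platform canvas would render these; here we just echo them."""
    return [(i.x, i.y, i.glyph, i.fg, i.bg) for i in instructions]

batch = [Instruction(0, 0, "X", 0xFFFFFF, 0x000000),
         Instruction(0, 1, fg=0xFF00FF)]   # recolor only, glyph untouched
```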
-
So if I get this right, then the Camera loops through every tile/entity within its field of view each frame, while the actual rendering, which is done by the Canvas, only invokes the actual business-end platform-dependent draw calls for whatever tiles have actually changed? Or is that change-tracking done by the Console?
How simple should the "drawing instructions" be? Just one glyph per instruction? Does this mean that when drawing text (for example), we need a whole heap of drawing instructions for each and every character in the text?
Now I have another question with regards to the input thing: some low-level libraries like Curses have the input as bound up with the display in interesting ways. In particular, in Curses, one gets input with the wgetch() function, which takes a window as parameter, which is a display concept. Yet in our code we keep display and input separate. Does this mean that when using Curses, we should simply not use the Curses window mechanism at all, and just use the getch() and other non-"w" functions (which use the default window, "stdscr")?
-
So if I get this right, then the Camera loops through every tile/entity within its field of view each frame, while the actual rendering, which is done by the Canvas, only invokes the actual business-end platform-dependent draw calls for whatever tiles have actually changed? Or is that change-tracking done by the Console?
Change tracking is done by the console. Otherwise, yes.
How simple should the "drawing instructions" be? Just one glyph per instruction? Does this mean that when drawing text (for example), we need a whole heap of drawing instructions for each and every character in the text?
The Console should offer drawing short-cuts. I have three right now: draw_text, fill_rect, and plot_char. They all do the same thing in that they gather a list of drawing instructions and send them to the canvas.
I also have stack drawing methods so that a user can put many commands between a begin_draw() and an end_draw() without updating the canvas until the end_draw().
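The begin_draw()/end_draw() batching described above might coalesce per cell like this hedged sketch (names are illustrative, and the real console presumably batches full color instructions, not bare glyphs):

```python
# Writes between begin_draw() and end_draw() are coalesced per cell, so a
# tile drawn on several times still produces only one canvas update.

class BatchingConsole:
    def __init__(self):
        self.pending = None
        self.flushed = []
    def begin_draw(self):
        self.pending = {}
    def plot_char(self, pos, glyph):
        if self.pending is not None:
            self.pending[pos] = glyph          # later writes win
        else:
            self.flushed.append((pos, glyph))  # immediate mode
    def end_draw(self):
        for pos, glyph in self.pending.items():
            self.flushed.append((pos, glyph))
        self.pending = None

con = BatchingConsole()
con.begin_draw()
con.plot_char((3, 3), "#")
con.plot_char((3, 3), "X")   # overwrites the "#" before it ever renders
con.end_draw()
```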
-
So if I get this right, then the Camera loops through every tile/entity within its field of view each frame, while the actual rendering, which is done by the Canvas, only invokes the actual business-end platform-dependent draw calls for whatever tiles have actually changed? Or is that change-tracking done by the Console?
Change tracking is done by the console. Otherwise, yes.
So does this mean the console must contain a buffer or something of the sort that stores the state of the screen so it knows what characters have been changed? What do you do when dealing with tiles, where the screen is now single pixels and can also have non-tile objects like text?
How simple should the "drawing instructions" be? Just one glyph per instruction? Does this mean that when drawing text (for example), we need a whole heap of drawing instructions for each and every character in the text?
The Console should offer drawing short-cuts. I have three right now: draw_text, fill_rect, and plot_char. They all do the same thing in that they gather a list of drawing instructions and send them to the canvas.
OK, but how simple should the drawing instructions the canvas uses be? Considering that a lot of low-level libraries like Curses and SDL do provide some features for text drawing -- wouldn't these be more efficient, or what (and on libraries without it, could be faked anyway)?
Also, another question about panes: the area where the game world is drawn is a pane, no? But don't panes contain widgets? What kind of "widgets" does it have that allow it to represent the game world?
-
So does this mean the console must contain a buffer or something of the sort that stores the state of the screen so it knows what characters have been changed? What do you do when dealing with tiles, where the screen is now single pixels and can also have non-tile objects like text?
Yes, but I'm talking specifically about a console emulator. The Console doesn't even know what the resolution of the application is, so it can't represent pixels (unless you increase the logical resolution of the console in such a way that the canvas can only render each tile as a pixel-- but that violates the entire point of emulating a console).
How is text not tiled? Each character is a tile just as any sprite would be.
OK, but how simple should the drawing instructions the canvas uses be? Considering that a lot of low-level libraries like Curses and SDL do provide some features for text drawing -- wouldn't these be more efficient, or what (and on libraries without it, could be faked anyway)?
You will use those to draw characters into tiles.
Also, another question about panes: the area where the game world is drawn is a pane, no? But don't panes contain widgets? What kind of "widgets" does it have that allow it to represent the game world?
You could say that the Camera is a widget.
Here, consider the following example.
https://dl.dropboxusercontent.com/u/10791198/Rebrogue.swf
And here are the drawing instructions for the above example. This is in HaXe, which is a language with a number of meta-programming features, so some of it may not make sense, but you can see the correlation between method calls and their effect in the above .swf.
var blueOnly = function(c:Int):Int {
    var t = c.toRGB();
    t[0] = t[1] = 0;
    return t.toHEX();
};
var gradient = function(x:Int, y:Int):Int return x*0x110011 + y*0x001111;
var random = function(_, _, _):Int return Math.floor(Math.random() * 0xffffff);

//colors may be an int or a function that takes up to 3 parameters: x, y, and the original color.
//this makes tinting and gradients a no-brainer
console.setDimension([16,6]);
console.clean();
console.begin_draw();
console.fill_rect([0,0], [16,6], random, random, "#");
console.fill_rect([0,0], [16,6], null, gradient); //null preserves the original information
console.plot_char([0,0], "X", 0xffffff, 0x0);
console.draw_text([0,1], "Blah blah blah", null);
console.fill_rect([0,0], [5,5], 0xff00ff, 0x0f0f0f);
console.fill_rect([12,3], [16,6], null, blueOnly);
console.fill_rect([12,0], [16,3], blueOnly);
console.end_draw();
The value of having a logical console is that we can perform additive drawing without sending more information to the renderer than we need to. Many of the tiles in the above example get drawn upon multiple times, but only one draw call is made for each modified tile (in end_draw). Some of these draws are a function of the original color or only modify colors without modifying text or vice versa. This provides support for transparency, tinting, lighting effects, and other cool things in a way that is concise and flexible.
This isn't getting into a WM, panes, or widgets as I haven't quite finished that up yet, but maybe seeing an example of how it could work will help you rationalize how it should look on the back-end.
Where would tiles fit in here?
In plot_char or fill_rect, if the string passed is length > 1, then the canvas can perform a library look-up to see if there is a corresponding sprite for that string.
-
"How is text not tiled? Each character is a tile just as any sprite would be."
Well, when I think of a game with graphical tiles (which is what I mean), I imagine a screen divided up by pixels. And graphical tiles and text characters need not be the same size, and may even overlap. This would seem to require pixel-level coordinates. So how then do you detect changes?
-
"How is text not tiled? Each character is a tile just as any sprite would be."
Well, when I think of a game with graphical tiles (which is what I mean), I imagine a screen divided up by pixels. And graphical tiles and text characters need not be the same size, and may even overlap. This would seem to require pixel-level coordinates. So how then do you detect changes?
I have been talking solely about console emulation. That means a discrete grid occupied by a single glyph- either a sprite (animated or not) or a text character. If you want to take full advantage of 2D rendering (and have floating text), then you're no longer emulating a console and just doing standard 2D graphics. Things like WMs and Panes still apply, and you may still want to emulate a console but have extra drawing effects, in which case you would just use your canvas to circumvent the console emulation.
There are a few techniques for optimizing 2D graphics, but if you're going all floaty text, then there are likely going to be additional graphical elements that could dirty your screen in unpredictable ways. In this case, you should blit and buffer, which your rendering library will probably do for you as long as you use it correctly.
-
Thanks again.
I have one more question, related to "panes": if "panes" get the input, does this mean that one would have a trap handler or something on the main game "pane" that runs the game logic (i.e. the logic routine is called from a "pane" trap handler) when a key is pressed and said pane has focus? This would seem to differ from the "game state object" approach mentioned in another thread of mine here, as here it seems all menus and whatnot can simply be implemented via trap handlers and focus-switching, dispensing with the need for a dedicated game state object mechanism.
-
The WM can be the input handler-- all you do is pass the input down to the focused pane and let it decide. A pane will have its own keymap for whatever actions it can perform.
edit: Well-- your input handler will convert input into 'keys', which the pane will map to an action. Basics of keymapping.
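That input path (handler converts raw input into keys, WM passes them to the focused pane, the pane's keymap picks the action) can be sketched as follows; all names are invented for illustration:

```python
# Minimal keymap dispatch: the WM forwards a logical key to whichever
# pane has focus, and that pane's own keymap decides the action.

class Pane:
    def __init__(self, keymap):
        self.keymap = keymap
    def handle(self, key):
        action = self.keymap.get(key)
        return action() if action else None   # unmapped keys are ignored

class WindowManager:
    def __init__(self):
        self.focused = None
    def dispatch(self, key):
        return self.focused.handle(key) if self.focused else None

game_pane = Pane({"h": lambda: "move_west", "j": lambda: "move_south"})
menu_pane = Pane({"j": lambda: "menu_down"})

wm = WindowManager()
wm.focused = game_pane
print(wm.dispatch("j"))      # move_south
wm.focused = menu_pane       # same key, different pane, different action
print(wm.dispatch("j"))      # menu_down
```

Note that the same physical key maps to different actions purely by switching focus, which is the point of keeping the keymap inside the pane.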
-
So am I right in my conclusion that it dispenses with the need for the "game state mechanism" for handling menus and so forth that I just mentioned?
Also, what about what I asked here about using Curses, with its "windows" and how they're tied up with the getting of input?:
http://forums.roguetemple.com/index.php?topic=3497.msg29553#msg29553
-
I would likely make the input handler a separate logical object. All it does is handle the platform specific code for receiving input. It would then send meaningful interpretations of that input to the WM, who would then pass it on to a specific pane, from whence that pane's mapping of that input to some action would take effect.
A WM IS a state machine: the current state is the focused pane that receives input... Ah -- I should clarify a little bit. A pane doesn't necessarily have to be nested within the logical boundaries of its parent pane -- for example, you may have a tiled pane open a floating pop-up menu. In this case, the menu would be registered with the WM instead of the parent pane, but the menu would still pass focus/data to its parent after its function has been served.
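One way to picture that "WM as state machine" idea is a focus stack: opening a pop-up pushes it on top, closing it hands focus (and a result) back to whoever opened it. A sketch, with illustrative names:

```python
# Sketch of a WM focus stack; the topmost pane receives input, and a
# closing pop-up passes its result back to its parent.
class Pane:
    def __init__(self, parent=None):
        self.parent = parent
        self.child_result = None

    def on_child_closed(self, result):
        self.child_result = result  # e.g. the menu item that was picked

class WindowManager:
    def __init__(self):
        self.stack = []             # focus stack of open panes

    def open(self, pane):
        self.stack.append(pane)

    @property
    def focused(self):
        return self.stack[-1]

    def close_top(self, result=None):
        popup = self.stack.pop()
        if popup.parent:            # hand the result back to the opener
            popup.parent.on_child_closed(result)
```

This is exactly the role a "game state object" mechanism usually plays, which is why the two approaches feel interchangeable.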
-
I would likely make the input handler a separate logical object. All it does is handle the platform specific code for receiving input. It would then send meaningful interpretations of that input to the WM, who would then pass it on to a specific pane, from whence that pane's mapping of that input to some action would take effect.
The thing I was wondering about, though, was that in Curses, the "window" mechanism seems like a natural one for implementing panes (especially with the Curses panel library), but Curses doesn't maintain that input/window separation; in fact, it joins the two together. So it seems that to keep logical separation, you can't use the Curses window mechanism.
A WM IS a state machine. the current state is the focused pane that receives input... Ah- I should clarify a little bit. A pane doesn't necessarily have to be nested within the logical boundaries of its parent pane-- for example, you may have a tiled pane open a floating pop-up menu. In this case, the menu would be registered with the WM instead of the parent pane, but the menu would still pass focus/data to its parent after its function has been served.
However, should a pane "crop" whatever is drawn inside it (as opposed to floating on top)? Doesn't this require additional functionality/computations in the underlying Console or 2D graphics system?
Anyway, this seems neater than my original game state object system; it looks to resolve some of the conundrums with that approach better. I like it and I think I might use it :)
-
The thing I was wondering about though was that in Curses, the "window" mechanism seems like a natural one for implementing panes (especially with the Curses panel library), but Curses doesn't maintain that input/window separation, in fact it joins the two together. So it seems like to keep logical separation, you can't use the Curses window mechanism.
Unfortunately, I'm not that familiar with curses. If you want to be able to swap out the renderer, I'm not sure you can rely on a platform-specific implementation of windows, unless it's possible to logically abstract the entire curses drawing interface. If you want to use curses specifically and don't have an interest in another drawing methodology, go for it. The I/O handling of panes is more conceptual -- if what curses does is intuitive and makes sense to you, go for it. I only prefer a top-level I/O mechanism so that I can modify raw input data if need be (like if we want to catch shift/ctrl at the application level instead of the pane level).
However, should a pane "crop" whatever is drawn inside it (as opposed to floating on top)? Doesn't this require additional functionality/computations in the underlying Console or 2D graphics system?
It's up to you. In most cases, you will have a clear understanding of how you want your UI to look, so you don't need any special logic. You could crop or not -- it just depends on what you want to do. The rules by which a pane renders its widgets depend on the pane. A pane is just the base container for widgets and should not set very hard restrictions on what can be done with them. In most cases a pane will represent a partition of the screen that you render specific objects to, but the way in which those objects are rendered might not limit their drawing instructions to within that pane. The pane is basically a local coordinate system for drawing groups of widgets.
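A sketch of that "local coordinate system with optional cropping" idea (Console/Pane here are illustrative stand-ins, not from any real library): widgets plot in pane-local coordinates, and the pane translates to console coordinates, optionally dropping anything outside its rectangle:

```python
# Sketch: a pane is an offset plus an optional clip rectangle.
class Console:
    def __init__(self):
        self.cells = {}             # (x, y) -> glyph

    def plot_char(self, x, y, ch):
        self.cells[(x, y)] = ch

class Pane:
    def __init__(self, console, x, y, w, h, clip=True):
        self.console = console
        self.x, self.y, self.w, self.h = x, y, w, h
        self.clip = clip            # crop drawing to the pane's rectangle?

    def plot_char(self, lx, ly, ch):
        # (lx, ly) is pane-local; translate to console coordinates.
        if self.clip and not (0 <= lx < self.w and 0 <= ly < self.h):
            return                  # cropped: outside the pane
        self.console.plot_char(self.x + lx, self.y + ly, ch)
```

Setting clip=False gives you the "floating on top" behavior: the pane is still a local origin, but its widgets may draw outside its bounds.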
-
I think I've got it figured out now. Thanks! Hopefully I'll be able to get this working.
-
I just wondered another thing: where would one put the various "higher-level" rendering functions, like to render boxes and other more complex shapes used to make widgets?
-
If we're still talking about a console emulator (that is, the ONLY BASIC drawing method we have is plot_char), some drawing utilities could be in the logical console. Every complex drawing method, in a console, is just an aggregation of plot_chars. So, to draw a widget, that widget will send the instructions necessary to draw itself.
This is oftentimes called a display list. You have a tree of drawable objects that contain other drawable objects that we traverse when drawing the scene (also called a scenegraph). Each node in the graph represents a coordinate (relative to the parent) and possibly drawing instructions (and transformations).
When we draw a widget, we've already scoped to a local point; we just need instructions to draw. We would probably make a library of common methods, like draw_circle and draw_rect, to make things simpler -- though, more importantly, we would try to make sure similar widgets use the same drawing instructions whenever possible, or make it so that a widget is a composition of components -- like scrollbars, textfields, etc. At this point, though, it's just semantics.
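The display-list traversal described above can be sketched in a few lines (Node and the draw callables are illustrative names): each node stores a parent-relative offset, and rendering walks the tree accumulating absolute positions.

```python
# Sketch of a scenegraph/display list: children draw relative to parents.
class Node:
    def __init__(self, x=0, y=0, draw=None):
        self.x, self.y = x, y       # offset relative to the parent node
        self.draw = draw            # callable(out, abs_x, abs_y), or None
        self.children = []

    def render(self, out, px=0, py=0):
        ax, ay = px + self.x, py + self.y   # accumulate offsets down the tree
        if self.draw:
            self.draw(out, ax, ay)
        for child in self.children:
            child.render(out, ax, ay)
```

Moving a pane then only means changing one node's offset; every widget under it follows automatically.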
-
So would this "tree of drawable objects" be a separate data structure from the pane/widget one (even though, say, panes contain widgets -- doesn't this already form such a "tree"?) or not?
And would those "circle" ,"rect", etc. drawing methods in that "library" be just "free floating" functions and not class members? If members, what would they be members of? The console, as you seem to hint in the first part of the message?
-
So would this "tree of drawable objects" be a separate data structure from the pane/widget one (even though, say, panes contain widgets -- doesn't this already form such a "tree"?) or not?
Yes, it's the same thing. 3D rendering works the same way, if you're curious. The position of an object is just an offset from its parent-- same idea for windows/panes/widgets etc. It's a little different with compositing, but that doesn't matter in this case.
And would those "circle" ,"rect", etc. drawing methods in that "library" be just "free floating" functions and not class members? If members, what would they be members of? The console, as you seem to hint in the first part of the message?
Doesn't matter really.
A good way to do it is to make it so that the Console accepts a 2D array of glyphs (a struct representing character, foreground, and background color) and a starting position. Then you could have a static class produce graphs to be passed into the Console. Alternatively, you could have these functions work directly with the console or be a part of the console -- it's up to you; whatever makes the most OOP sense. Since we're working with a logical console, it doesn't hurt to have advanced drawing operations there. However, proper OOP would put these methods in a helper object.
You could define a typedef for 2D glyph arrays called a Graph (or Shape or whatever) and use that as the parameter for a console method "draw_graph." Then you can produce as many drawing libraries as you want, so long as they have methods that produce graph objects.
E.g.:
draw_rect(console, rect)         // call drawing functions internally by passing the console
Library.draw_rect(console, rect) // for better organization, put the function in a static object (most recommended)
Console.draw_rect(rect)          // or, for simpler libraries, put it directly in the console (least recommended)
plot_rect(rect): graph           // pass rectangle information and return a graph object
Library.plot_rect(rect): graph   // from a static class
Console.draw_graph(graph)        // pass the graph object into a console drawing method
The top set is faster but gives you less flexibility, while the second set is a bit more memory-intensive but will allow you to work directly with the graph object. Since the console acts as a logical buffer, it really isn't necessary to have a graph object at all, but you might find it useful -- especially for caching certain drawing methods. E.g., a circle of a particular radius is easier to draw if you cache it as a graph after calculating it once.
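Here's a sketch of that second approach (names are illustrative; a "graph" is represented here as a plain list of (dx, dy, glyph) tuples rather than a 2D array): plot_* functions return a graph, which the console can stamp at any position and which can be cached and reused.

```python
# Sketch: plot_* builds a reusable "graph"; the console draws it anywhere.
def plot_rect(w, h, glyph='#'):
    """Return the outline of a w-by-h box as a graph (relative cells)."""
    cells = []
    for x in range(w):
        cells += [(x, 0, glyph), (x, h - 1, glyph)]      # top and bottom edges
    for y in range(1, h - 1):
        cells += [(0, y, glyph), (w - 1, y, glyph)]      # left and right edges
    return cells

class Console:
    def __init__(self):
        self.cells = {}             # (x, y) -> glyph

    def draw_graph(self, x, y, graph):
        for dx, dy, glyph in graph:
            self.cells[(x + dx, y + dy)] = glyph

# A box computed once can be drawn at many positions (the caching idea above):
box = plot_rect(3, 3)
```

The drawing library only needs to know how to produce graphs; it never touches the console directly.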
-
Thanks again.
-
I now have another question, related to window managers. Suppose a bunch of things happen during the turn logic (which would be triggered by keypresses going to the pane showing the game world) that require updating the screen -- e.g. displaying text messages, or an explosion appearing. Should one mix calls to the window manager's redraw function into the logic, or handle that separately, say by passing events to the window manager to be handled later in some main message/event loop where all input and redraws are centralized? If separate, then how does one preserve the ordering and sequentiality of those things? Note that just having an event queue with "redraw events" added to it wouldn't seem to work: the windows' states would keep changing, so by the time the queue is finally handled after the logic completes, the windows would have changed state several times and the redraws would only show their last state at the time control returns to the event loop.
I suppose this ties in with the "separating render and logic" concern I mentioned in another thread here, but am curious as to how it's done in the context of this window manager-based system.
-
I now have another question, related to window managers: If a bunch of things happen during the turn logic (which would be triggered by keypresses going to the pane showing the game world) that require updating the screen, e.g. displaying text messages, or an explosion appears, or something -- should one mix calls to the window manager's redraw function with the logic like that, or handle that separately, like by passing events to the window manager to be handled later in some main message/event loop where all getting of input and redraws are centralized? If separate, then how does one preserve the ordering and sequentiality of those things?
If you are not using real-time rendering, then a change to the game state should inform a window that it needs to redraw (nothing else will!). In real-time rendering, the window is either checking the game state each frame to see if it needs to redraw, or it waits to receive incoming events and redraws.
Each window just reads the state of the game and draws that information in a way that is pertinent to that particular window (status bars, maps, text notifications, etc.). You either need to tell a window to update or have the window always updating.
Note that just having an event queue with "redraw events" added to it wouldn't seem to work, since the windows' states would be changing, which means that by the time that queue is finally handled after the logic completes, the windows have changed state several times and the redraws would only redraw to their last state at the time control returns to the event loop.
You don't need an event queue. You simply tell a window when it is dirty and that it needs to redraw.
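That dirty-flag pattern is tiny in practice. A sketch with illustrative names: the game logic sets dirty on whichever windows it touched, and the frame loop redraws only those.

```python
# Sketch of dirty-flag redraw: logic marks windows dirty, the frame loop
# redraws only dirty ones. draw_count just instruments the example.
class Window:
    def __init__(self, name):
        self.name = name
        self.dirty = True           # draw once on startup
        self.draw_count = 0

    def draw(self, game_state):
        self.draw_count += 1        # real code would read game_state here

def render_frame(windows, game_state):
    for w in windows:
        if w.dirty:
            w.draw(game_state)
            w.dirty = False
```

No queue and no ordering problem: whenever a redraw happens, the window reads the current game state, which is by definition the latest one.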
I suppose this ties in with the "separating render and logic" concern I mentioned in another thread here, but am curious as to how it's done in the context of this window manager-based system.
Separation of rendering and logic has more to do with calling drawing instructions within the logic of the game. Telling a graphical object to redraw is fine. The purpose of keeping them separate is so that we can rewrite either one without affecting the other.
In real-time games, it's common to update the rendering at 60 Hz and update the logic a little faster, say, 100 Hz. The idea is that the logic is always updating slightly faster than rendering, so that you get a more fluid visual experience. If logic updated slower, objects would appear to move in a choppy manner. To optimize render and logical updates, their code needs to be in separate places. The logical side of the application acts upon the game state, modifying and updating it in various ways, contingent upon input. The rendering side reads the game state, in an agnostic way, and prints the output. This is a basic I/O relationship.
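A toy sketch of those decoupled rates, using accumulators over simulated time instead of a real clock so it's deterministic (the 240 Hz "main loop" rate is purely illustrative):

```python
# Sketch of decoupled logic/render rates via fixed-timestep accumulators.
LOGIC_DT = 1.0 / 100                # logic at 100 Hz
RENDER_DT = 1.0 / 60                # rendering at 60 Hz

def run(total_time):
    logic_steps = render_steps = 0
    logic_acc = render_acc = 0.0
    t, frame = 0.0, 1.0 / 240       # pretend the main loop spins at 240 Hz
    while t < total_time:
        t += frame
        logic_acc += frame
        render_acc += frame
        while logic_acc >= LOGIC_DT:    # catch logic up in fixed steps
            logic_steps += 1
            logic_acc -= LOGIC_DT
        if render_acc >= RENDER_DT:     # at most one render per main-loop tick
            render_steps += 1
            render_acc -= RENDER_DT
    return logic_steps, render_steps
```

Over one simulated second, logic ticks roughly 100 times and rendering roughly 60, which is the "logic slightly faster than rendering" relationship described above.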
In a roguelike, this is a complete non-issue, but I think the example may help to illustrate why you want to keep it separate. After all, for the text messages, we need to send explicit information to the text-message window (game log). That means calling something like "tmWindow.log(myEnum.combat,combatInfo,whatever)" for each game interaction that occurs. The window does the drawing, but the logic tells it what to draw. The main point is that the logic isn't deciding how that information is drawn.
-
Thanks. Now I understand the "separation" part better.