Evaluation for an isometric game

edited December 2014 in General discussions
Hi,
I don't know if this is the right forum for this kind of question, so please move the topic if not :)

I'm trying to evaluate the orx engine for creating a 2D isometric tile-based game. I'm an experienced C++ programmer and we're moving away from Unity (we would need to rewrite quite a bit of code for orx, but that isn't a problem).

I have looked through the documentation and I see the engine is data driven. But is it possible to use it without any .ini files and set everything up using only the API (creating viewports, animations, ...)?

Another question that is bothering me is how to handle draw order. I've searched the forum and found one topic discussing layering, but I don't know if it is what I mean. I have a map built from tiles (sprites; in Unity I'm using a custom mesh for the terrain). Then I have things such as vegetation, and finally the player. Because it is isometric, everything on the map (vegetation, building walls, player, ...) needs a defined order in which it is drawn/rendered: if the player's Y position is less than the tree's, the tree should be rendered first and the player second. This order needs to change dynamically for the player, because sometimes he is standing "in front of" the tree and sometimes "behind" it.
In Unity I solved this using order-in-layer, so each object has a specific order which was basically the Y position of the sprite.
In the topic I found on this forum, it was suggested that this could be partially solved using the Z position (I don't like this particular method). Another solution was using the orxRENDER_EVENT_OBJECT_START event, but I don't know what I should change then, maybe also the Z position? Or is there some other option?

I would be glad if someone could answer my question.

Thank You

Trigve

Comments

  • edited December 2014
    Trigve wrote:
    Hi,
    I don't know if this is the right forum for this kind of question, so please move the topic if not :)

    Hi Trigve and welcome here!
    Here is as good a place as anywhere to ask such questions, no worries. :)
    I'm trying to evaluate the orx engine for creating a 2D isometric tile-based game. I'm an experienced C++ programmer and we're moving away from Unity (we would need to rewrite quite a bit of code for orx, but that isn't a problem).

    Out of curiosity, do you mind me asking why you decided to move away from Unity?
    I have looked through the documentation and I see the engine is data driven. But is it possible to use it without any .ini files and set everything up using only the API (creating viewports, animations, ...)?

    Yes, almost everything (there's only one real exception, more on this later) can be achieved by code instead of config, but most of the time it'll make things more verbose or require more work. Actually, if you take a look at all the functions suffixed with *FromConfig, you'll see that they're mostly convenience wrappers that read data from the config repository and call the public API to process it.

    The only real exception is timelines. Right now timelines can only be expressed in config. But even in that case, you can modify the content of the config repository at runtime and then create a timeline as if the config data came from a .ini file.
    As a matter of fact, you can always set all your config data programmatically using the orxConfig_Set* API and still use all the *FromConfig() convenience wrappers. In this case you get the advantages of the config repository while still using code instead of .ini files.
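    For illustration (this sketch isn't from the original reply, and the section/key names "Hero", "HeroGraphic" and "hero.png" are placeholders), filling the config repository in code and then using a regular *FromConfig wrapper looks roughly like this:

    /* Sketch: set config values programmatically, then create the object
       as if the data had come from an .ini file */
    void CreateHeroFromCode()
    {
      orxVECTOR vPos;

      orxConfig_PushSection("HeroGraphic");
      orxConfig_SetString("Texture", "hero.png");
      orxConfig_PopSection();

      orxConfig_PushSection("Hero");
      orxConfig_SetString("Graphic", "HeroGraphic");
      orxConfig_SetVector("Position", orxVector_Set(&vPos, orx2F(100.0f), orx2F(100.0f), orx2F(-0.2f)));
      orxConfig_PopSection();

      orxObject_CreateFromConfig("Hero");
    }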

    If you don't like having external config/data files, there are a couple of advanced options based on using a custom init sequence (instead of orxExecute()) combined with hooking your own resource handling code.
    If you simply don't like the custom .ini file format, you can also do your own "serialization" with the help of the orxConfig_Set* API.
    Another question that is bothering me is how to handle draw order. I've searched the forum and found one topic discussing layering, but I don't know if it is what I mean. I have a map built from tiles (sprites; in Unity I'm using a custom mesh for the terrain). Then I have things such as vegetation, and finally the player. Because it is isometric, everything on the map (vegetation, building walls, player, ...) needs a defined order in which it is drawn/rendered: if the player's Y position is less than the tree's, the tree should be rendered first and the player second. This order needs to change dynamically for the player, because sometimes he is standing "in front of" the tree and sometimes "behind" it.
    In Unity I solved this using order-in-layer, so each object has a specific order which was basically the Y position of the sprite.
    In the topic I found on this forum, it was suggested that this could be partially solved using the Z position (I don't like this particular method). Another solution was using the orxRENDER_EVENT_OBJECT_START event, but I don't know what I should change then, maybe also the Z position? Or is there some other option?

    There's something similar to layers in orx as well: it's called groups. You can have as many of them as you want.
    Objects belong to a group (by default they all belong to a group named "default") and that group can be changed at runtime.
    A list of groups can be set on a camera as well (by default the list only contains the group "default").
    This is used to control the rendering order, as groups are rendered in the order they appear in this list. Within a single group, objects are then ordered by their Z coordinate (depth) relative to the camera and finally batched according to their material (texture, shader, smoothing, etc.).
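    As a quick illustration (not in the original reply; the group name "Sprites" is a placeholder and exact signatures may vary slightly between orx versions), changing an object's group and appending that group to a camera's render list at runtime looks roughly like this:

    /* Sketch: move an object to the "Sprites" group and append that group
       to a camera's render list (orxFALSE = add at the end of the list) */
    void UseSpritesGroup(orxOBJECT *_pstObject, orxCAMERA *_pstCamera)
    {
      orxObject_SetGroupID(_pstObject, orxString_GetID("Sprites"));
      orxCamera_AddGroupID(_pstCamera, orxString_GetID("Sprites"), orxFALSE);
    }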

    Camera, viewports and groups are the three entities defining render passes as well, but that's probably for another question later on. ;-)

    May I ask why you don't like the Z option? It's usually very handy to handle relative rendering order problems as well as actual depth effects, and in your case it sounds like you can simply tie the Y and Z values of object coordinates in your logic code.
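    As a minimal sketch of that "tie Y and Z" idea (MAP_HEIGHT is a made-up constant for the map height in world units, and the exact mapping is up to your game logic):

    #define MAP_HEIGHT orx2F(4096.0f) /* placeholder */

    /* Sketch: called by your logic whenever an object moves; bigger Y yields
       a smaller Z, so objects lower on the map are rendered on top */
    void UpdateDepth(orxOBJECT *_pstObject)
    {
      orxVECTOR vPos;

      orxObject_GetPosition(_pstObject, &vPos);
      vPos.fZ = orxFLOAT_1 - (vPos.fY / MAP_HEIGHT);
      orxObject_SetPosition(_pstObject, &vPos);
    }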

    When the orxRENDER_EVENT_OBJECT_START is fired, it's too late to change the rendering order for this frame: sorting happened before those events are sent. This particular event is mostly used when you need to do some processing or rendering tied to a specific object. For example if you were to change its content, render a mesh or a curve instead of a sprite, render it in 3D, change internal render contexts that are not exposed by orx such as modifying the depth buffer, stencil buffer, etc...
    If all you do is render 2D sprites (even with custom shaders and/or animations), you are unlikely to need this event at all.
    I would be glad if someone could answer my question.

    Usually there's always someone around to answer within a few hours or a day max.
    Lemme know if you have any other questions or if you need any more details on the answer I wrote today.

    Cheers,

    iarwain
  • edited December 2014
    Hi Trigve, this is just a brief answer, so I hope this helps.

    Yes you are right, most things are ordered using the Z position of an object like:
    [AlienObject]
    Graphic		= AlienGraphic
    Body		= BallBody
    Position	= (930, 1090, -0.1)
    

    -0.1 is the Z position.

    But in addition to all objects using Z, there is also the concept of object groups. So each object can be in groups like:
    [TreeObject]
    Graphic		= TreeGraphic
    Position	= (50, 50, -0.1)
    Group = Nature
    
    [Rock1Object]
    Graphic		= Rock1Graphic
    Position	= (150, 90, -0.2)
    Group = Nature
    
    [GrassObject]
    Graphic		= GrassGraphic
    Position	= (150, 90, -0.2)
    Group = Background
    
    [BackgroundObject]
    Graphic		= BackgroundGraphic
    Position	= (0, 0, -0.1)
    Group = Background
    
    [PlayerObject]
    Graphic		= PlayerGraphic
    Position	= (100, 100, -0.2)
    Group = Sprites
    
    [Camera]
    GroupList = default #Background #Nature #Sprites
    

    The Camera holds the group list, and this is the order in which the groups are rendered. Within each group, the order is determined by the Z position.

    If your player is within a certain zone of the tree (a body part?), you either move his Z position so he is behind the tree or change which group he is in.

    Iarwain will be able to give you more specific advice, but I hope this helps your research.
  • edited December 2014
    Heh, Iarwain, you beat me to it. :)

    There's also some additional discussion on groups here that might help: https://forum.orx-project.org/discussion/6864#Comment_6868
  • edited December 2014
    iarwain wrote:
    Hi Trigve and welcome here!
    Here is as good a place as anywhere to ask such questions, no worries. :)

    Thank you :)
    Out of curiosity, do you mind me asking why you decided to move away from Unity?

    Yes, of course (I'll be using "I" instead of "we" here because I'm the programmer and I'll be talking about programming matters).
    I'm using Unity with the 2DToolkit for our game. I chose Unity at first because I liked the fast prototyping it allowed. But when you move forward and start doing more advanced things, you hit places that slow you down.
    The first thing is the serialization of the data. Long story short, serialization works on a per-game-object (and per-component) basis, that is, you can't serialize a whole graph of instances (references are saved, but only for Unity built-in objects, and even then only in a limited manner through the built-in Unity serializer, so no C# serialization - though even this can be partially worked around). And because C# assemblies are reloaded a lot (whenever some code changes or a DLL is rebuilt, which happens all the time), you must handle the serialization somehow so you don't lose data - and then you need to build parallel hierarchies, one class for "pure" C# and another for the Unity side, and make a connection between them.
    The whole Unity system is based on components attached to game objects. It is a nice design, but only for simple things. The order of execution of the components is only partially defined (there's a dialog where you can move a particular script up/down - but sometimes you need to change the order dynamically, i.e. you may need to call some method in a custom order based on some condition - this can be partially solved with C# events). Therefore I was using components only for the things that are visualized in the scene (sprites, meshes, ...), all the logic was in "pure" C#, and it only kept a game object reference to the object. But this gets complicated, because in "pure" C# you cannot reliably serialize a reference to a game object (or any Unity-based object), so you need a hack to solve that too.
    Another thing is that I'm using .NET DLLs for all my code. But the 2DToolkit plugin isn't built as a DLL and can't be in its current state (long story short, it uses conditional compilation #if/#endif to mix editor/runtime code, which can't easily be separated, and then there is the problem of building the player when you mix editor/runtime Unity assemblies ...). So I need to work around this as well, writing some interfaces and helper classes, which also complicates a lot of things (this isn't so much Unity's fault, but the design it uses encourages coding it like this).
    But Unity also has positive sides: you can make custom editors for your terrain, for instance, custom inspectors, etc.
    Reading it back now, all this may sound too harsh, but I didn't want to go into detail and I don't know your Unity experience level. So if you want, I can go into detail on any part you want to discuss.

    Also, I didn't express myself clearly in my first post. We *want* to move away from Unity, but unless we find something that suits our needs (or is better in every respect), we won't (we already have a lot done; I'm willing to rewrite it, but I only have limited time - and if I find that some other engine is better than Unity in some respects but would require a lot of time to rewrite everything, then we will not move. At least not now).
    Yes, almost everything (there's only one real exception, more on this later) can be achieved by code instead of config, but most of the time it'll make things more verbose or require more work. Actually, if you take a look at all the functions suffixed with *FromConfig, you'll see that they're mostly convenience wrappers that read data from the config repository and call the public API to process it.
    Can I use multiple config files and load them on demand? As I see it I could, but what about the data that is already loaded? Will it be overwritten by the new content?
    The only real exception is timelines. Right now timelines can only be expressed in config. But even in that case, you can modify the content of the config repository at runtime and then create a timeline as if the config data came from a .ini file.
    As a matter of fact, you can always set all your config data programmatically using the orxConfig_Set* API and still use all the *FromConfig() convenience wrappers. In this case you get the advantages of the config repository while still using code instead of .ini files.

    If you don't like having external config/data files, there are a couple of advanced options based on using a custom init sequence (instead of orxExecute()) combined with hooking your own resource handling code.
    If you simply don't like the custom .ini file format, you can also do your own "serialization" with the help of the orxConfig_Set* API.
    I see, thanks.

    The one problem I'm having now is the location of the .ini files. I'm using VS for coding and have a "Debug" dir in the project directory where all executables are built. Then I have another dir, "Runtime", where I keep all the game data (and also the .ini file). I set "Runtime" as the working dir when debugging, but orx couldn't find the .ini; it could find it when I placed it directly next to the executable (in the "Debug" dir). Can I somehow set the working path for orx?
    There's something similar to layers in orx as well: it's called groups. You can have as many of them as you want.
    Objects belong to a group (by default they all belong to a group named "default") and that group can be changed at runtime.
    A list of groups can be set on a camera as well (by default the list only contains the group "default").
    This is used to control the rendering order, as groups are rendered in the order they appear in this list. Within a single group, objects are then ordered by their Z coordinate (depth) relative to the camera and finally batched according to their material (texture, shader, smoothing, etc.).

    Camera, viewports and groups are the three entities defining render passes as well, but that's probably for another question later on. ;-)

    May I ask why you don't like the Z option? It's usually very handy to handle relative rendering order problems as well as actual depth effects, and in your case it sounds like you can simply tie the Y and Z values of object coordinates in your logic code.

    I'll describe my problem in more detail here. Maybe my conclusion is wrong; if it is, please correct me.
    The first problem I needed to take into account when working with Unity is composite sprites. These are used for the soldier, which can have replaceable items such as a helmet, jacket, etc. Because it is an isometric game, I need a defined rendering order for every animation, for every direction, for every part of the body (because sometimes the arm is rendered first, sometimes last - and this can change even within a single animation). And I also need to order the sprite based on world position, as described in my first post. Because I'm using Unity free I didn't have render-to-texture, so I needed one major ordering for movement (that is, the soldier is in front of the tree) and a minor ordering for the sprite parts (the left hand is behind the torso). So I was using order-in-layer for major ordering and the Z position for minor ordering.
    But in theory I could use the Z position for major ordering and do sprite composition using "render to texture" (could I?). The problem (I think) is that the Z position could range from 0 to xxx, where xxx (if Z takes the value of the Y position) could be a large number, and then I would need to adjust the near/far planes of the camera either to accommodate the whole Z range (but what about Z-buffer precision then?) or to change them dynamically when moving "down" (assuming (0, 0) is the top-left corner of the map). Or am I missing something?
    When the orxRENDER_EVENT_OBJECT_START is fired, it's too late to change the rendering order for this frame: sorting happened before those events are sent. This particular event is mostly used when you need to do some processing or rendering tied to a specific object. For example if you were to change its content, render a mesh or a curve instead of a sprite, render it in 3D, change internal render contexts that are not exposed by orx such as modifying the depth buffer, stencil buffer, etc...
    If all you do is render 2D sprites (even with custom shaders and/or animations), you are unlikely to need this event at all.

    Another question I have: as I've mentioned, in Unity the map is represented as a flat mesh. The whole map is partitioned based on tile count. Then I have a shader that is applied to the rendered map mesh, where I can have splat textures etc. Could something like this be done in orx? That is, I generate the mesh procedurally and then use some shader on it?

    edit: I've found orxDisplay_DrawMesh(), which could be used for mesh drawing. But it looks like it supports only 1 UV coordinate and I'll need at least 4 of them. [strike]Looking through the sources (orxDisplay.c in the GLFW plugin), it looks like extending it to more UV coords shouldn't be a problem. Am I right?[/strike]
    edit2: Maybe I don't need a mesh at all and could do it using only multiple textures and shaders. But what about performance?

    And one more question :) In Unity I'm using raycasting to find the objects under the mouse cursor. Is there some raycast support, or do I need to handle it differently?

    Thank you
    Trigve
  • edited December 2014
    Thanks for the reply,
    I've described my problem in more detail in the previous post.
    But I don't think the trees and the soldier can be in different groups, as they need to be sorted dynamically based on their Y world position. Unless I made a new group for each Y position (or for each new row of tiles) :).

    Trigve
  • edited December 2014
    Trigve wrote:
    Reading it back now, all this may sound too harsh, but I didn't want to go into detail and I don't know your Unity experience level. So if you want, I can go into detail on any part you want to discuss.

    Thanks for all the details. I've never really used Unity myself, but I have a few friends who did in the past. Most of them have since changed their minds and gone with different options, so I was curious about the reasons behind your own decision.
    Also, I didn't express myself clearly in my first post. We *want* to move away from Unity, but unless we find something that suits our needs (or is better in every respect), we won't (we already have a lot done; I'm willing to rewrite it, but I only have limited time - and if I find that some other engine is better than Unity in some respects but would require a lot of time to rewrite everything, then we will not move. At least not now).

    Sounds reasonable to me. :)
    Can I use multiple config files and load them on demand? As I see it I could, but what about the data that is already loaded? Will it be overwritten by the new content?

    Yes, you can have as many config files on demand as you want. You can clear parts of the repository or update it with new content. During dev/debug phase you can also turn hotloading on and config content will be updated on the fly as soon as files are saved on disk.

    There's also a concept of inheritance, which you have probably already found, that provides lazy weak links and allows for interesting techniques.
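    For example (a hedged sketch; the file and section names below are placeholders), loading an extra file on demand and clearing a section both go through the config API:

    /* Sketch: merge another config file into the repository at runtime;
       keys that already exist are overridden by the newly loaded values */
    orxConfig_Load("Level2.ini");

    /* A section can also be cleared explicitly before reloading content */
    orxConfig_ClearSection("OldLevelObject");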
    The one problem I'm having now is the location of the .ini files. I'm using VS for coding and have a "Debug" dir in the project directory where all executables are built. Then I have another dir, "Runtime", where I keep all the game data (and also the .ini file). I set "Runtime" as the working dir when debugging, but orx couldn't find the .ini; it could find it when I placed it directly next to the executable (in the "Debug" dir). Can I somehow set the working path for orx?

    You can do so, but I'd do it slightly differently if I were you: I'd simply redirect loading from Debug to Runtime. Assuming your main .ini file is called MyGame.ini and is placed in /Runtime, I'd also create a MyGame.ini in /Debug that would only contain:
    @../Runtime/MyGame.ini@
    

    If it were me, I'd actually redirect both locations (Runtime and Debug) to a central Data folder, but it's really up to you and I know we all have different ways of organizing our workspace.

    Also keep in mind that, when releasing your game, there's a tool coming with orx that can flatten and/or encrypt a whole hierarchy of config files. It's a command line tool called orxCrypt.
    I'll describe my problem in more detail here. Maybe my conclusion is wrong; if it is, please correct me.
    The first problem I needed to take into account when working with Unity is composite sprites. These are used for the soldier, which can have replaceable items such as a helmet, jacket, etc. Because it is an isometric game, I need a defined rendering order for every animation, for every direction, for every part of the body (because sometimes the arm is rendered first, sometimes last - and this can change even within a single animation). And I also need to order the sprite based on world position, as described in my first post. Because I'm using Unity free I didn't have render-to-texture, so I needed one major ordering for movement (that is, the soldier is in front of the tree) and a minor ordering for the sprite parts (the left hand is behind the torso). So I was using order-in-layer for major ordering and the Z position for minor ordering.
    But in theory I could use the Z position for major ordering and do sprite composition using "render to texture" (could I?). The problem (I think) is that the Z position could range from 0 to xxx, where xxx (if Z takes the value of the Y position) could be a large number, and then I would need to adjust the near/far planes of the camera either to accommodate the whole Z range (but what about Z-buffer precision then?) or to change them dynamically when moving "down" (assuming (0, 0) is the top-left corner of the map). Or am I missing something?

    Firstly, regarding the render to texture feature, you can do it with orx and it's actually rather straightforward to do and almost doesn't require any line of code. You can also use MRT (Multiple Render Target) as well if the hardware supports it (so not with OpenGL ES < 3.x), if need be.
    Here's a simple example I put together a long time ago that shows how it can be done: http://orx-project.org/wiki/en/orx/tutorials/community/iarwain/compositing

    Note that orx didn't have the object group feature back then, so the filtering was done based on the camera frustum and the objects' Z coordinate. I should probably update that tutorial at some point to use object groups instead.

    Now, back to the sorting, if you know the upper limit of Z value, you can simply normalize everything between 0 and 1 and keep everything within that range. It's not a requirement, of course. You shouldn't have any precision problems unless you deal with very large numbers as the Z coordinate is stored on a 32-bit float value. If you don't want to normalize the Z value, you can of course modify the camera's frustum if you'd rather go down that road, but that sounds more trouble than what it's worth if you ask me.
    Another option would be to always recompute the Z value for objects that currently are going to be rendered (when listening to the global render start event, for example), but again, that's more trouble than what it's worth (and it wouldn't be as efficient anyway).

    For the normalization option, it can be done offline, upon object creation (orxOBJECT_EVENT_CREATE) or upon an object's update. How were you doing it precisely with Unity? Maybe I'm missing something, please let me know if that's the case.
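    For the creation-time variant, here's a hedged sketch (MAP_HEIGHT is a placeholder for the map height in world units, and event payload details can differ between orx versions):

    /* Sketch: derive Z from Y once, when an object is created */
    static orxSTATUS orxFASTCALL ObjectEventHandler(const orxEVENT *_pstEvent)
    {
      if(_pstEvent->eID == orxOBJECT_EVENT_CREATE)
      {
        orxOBJECT *pstObject = orxOBJECT(_pstEvent->hSender);
        orxVECTOR  vPos;

        orxObject_GetPosition(pstObject, &vPos);
        vPos.fZ = orxFLOAT_1 - (vPos.fY / MAP_HEIGHT); /* MAP_HEIGHT: placeholder */
        orxObject_SetPosition(pstObject, &vPos);
      }

      return orxSTATUS_SUCCESS;
    }

    /* Registered once at init time: */
    /* orxEvent_AddHandler(orxEVENT_TYPE_OBJECT, ObjectEventHandler); */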
    Another question I have: as I've mentioned, in Unity the map is represented as a flat mesh. The whole map is partitioned based on tile count. Then I have a shader that is applied to the rendered map mesh, where I can have splat textures etc. Could something like this be done in orx? That is, I generate the mesh procedurally and then use some shader on it?

    Yes, definitely, but you've already answered that question yourself. :) As you've also found, adding extra UV coordinates is rather straightforward. I'll add support for those in a generic way as soon as I'll add support for custom vertex shaders.
    Depending on which hardware you target, you'll probably also have to update orx's internal shaders to support additional UVs, but that's only a couple of lines of OpenGL code.
    The way I deal with this in my own current project (for normal/specular mapping) is by using a single UV coordinate + offsets (and scales, if needed) to calculate additional UV coordinates within the shader itself. It does add dependent texture lookups, but in my case they never were a performance issue.
    edit: I've found orxDisplay_DrawMesh(), which could be used for mesh drawing. But it looks like it supports only 1 UV coordinate and I'll need at least 4 of them. [strike]Looking through the sources (orxDisplay.c in the GLFW plugin), it looks like extending it to more UV coords shouldn't be a problem. Am I right?[/strike]
    edit2: Maybe I don't need a mesh at all and could do it using only multiple textures and shaders. But what about performance?

    It really depends on the details of your setup. In the past I've used both the mesh + shader approach and the simple full-screen quad + everything-in-shader approach. The choice was made on a per-case basis, and most of the time it's easy enough to set up both and measure the performance rather than speculate about it.
    If your map can easily be computed by a shader within a full-screen quad, I'd start with that option as it might be the simplest one to set up.
    And one more question :) In Unity I'm using raycasting to find the objects under the mouse cursor. Is there some raycast support, or do I need to handle it differently?

    Yes, if you mean picking at a given position, you should look at orxObject_Pick() and its box alternative. They both support group filtering as well.
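    For reference, a hedged sketch of picking whatever is under the mouse (the group name "Sprites" is a placeholder, and exact signatures, such as the group parameter, may differ slightly between orx versions):

    orxVECTOR vMouse, vWorld;

    /* Sketch: convert the screen-space mouse position to world space, then pick */
    orxMouse_GetPosition(&vMouse);
    if(orxRender_GetWorldPosition(&vMouse, &vWorld) != orxNULL)
    {
      orxOBJECT *pstPicked = orxObject_Pick(&vWorld, orxString_GetID("Sprites"));

      if(pstPicked != orxNULL)
      {
        /* React to the picked object here */
      }
    }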

    As long as we're talking about this, have you found the C++ wrapper I wrote called Scroll? It adds a few interesting features to orx itself (including a rather simple in-game level editor) and is only composed of a few header files (nothing pre-compiled). I use it for all my projects to easily bind C++ classes to config section as well as use convenience virtual method calls for most common events.
    Acksys started writing a couple of starter tutorials about it here: http://orx-project.org/wiki/en/orx/tutorials/community/acksys

    As always, don't hesitate if you have any questions on orx (or Scroll) or if any answer wasn't satisfying.

    Cheers,

    iarwain
  • edited December 2014
    iarwain wrote:
    Yes, you can have as many config files on demand as you want. You can clear parts of the repository or update it with new content. During dev/debug phase you can also turn hotloading on and config content will be updated on the fly as soon as files are saved on disk.

    There's also a concept of inheritance, which you have probably already found, that provides lazy weak links and allows for interesting techniques.
    Ok, will try it out.
    You can do so, but I'd do it slightly differently if I were you: I'd simply redirect loading from Debug to Runtime. Assuming your main .ini file is called MyGame.ini and is placed in /Runtime, I'd also create a MyGame.ini in /Debug that would only contain:
    @../Runtime/MyGame.ini@
    

    If it were me, I'd actually redirect both locations (Runtime and Debug) to a central Data folder, but it's really up to you and I know we all have different ways of organizing our workspace.

    Also keep in mind that, when releasing your game, there's a tool coming with orx that can flatten and/or encrypt a whole hierarchy of config files. It's a command line tool called orxCrypt.

    What do you mean by "flatten"? Maybe I don't get it, but if I put an .ini alongside the executable, I need to use relative paths there. So when I release the game, should I also change the .ini? Because it wouldn't be in the "Debug" folder anymore and all the paths in the .ini would be wrong, or wouldn't they?
    Firstly, regarding the render to texture feature, you can do it with orx and it's actually rather straightforward to do and almost doesn't require any line of code. You can also use MRT (Multiple Render Target) as well if the hardware supports it (so not with OpenGL ES < 3.x), if need be.
    Here's a simple example I put together a long time ago that shows how it can be done: http://orx-project.org/wiki/en/orx/tutorials/community/iarwain/compositing

    Yes I've already read the stuff about MRT.
    Note that orx didn't have the object group feature back then, so the filtering was done based on the camera frustum and the objects' Z coordinate. I should probably update that tutorial at some point to use object groups instead.

    Now, back to the sorting, if you know the upper limit of Z value, you can simply normalize everything between 0 and 1 and keep everything within that range. It's not a requirement, of course. You shouldn't have any precision problems unless you deal with very large numbers as the Z coordinate is stored on a 32-bit float value. If you don't want to normalize the Z value, you can of course modify the camera's frustum if you'd rather go down that road, but that sounds more trouble than what it's worth if you ask me.

    Even though the Z position is a 32-bit float, isn't the depth buffer only 16-bit (in most cases)? Anyway, even 16 bits might be enough, maybe.
    I don't know if I could normalize the Z position. Maybe I could, after loading the map and computing the max map height.
    I've also done dynamic frustum changes in Unity. I used 2 cameras, 1 for the terrain, with constant near/far planes, and another camera for the dynamic stuff, where I changed the frustum based on the Y position of the camera. It was working, but it wasn't a very clean solution.
    Another option would be to always recompute the Z value for objects that currently are going to be rendered (when listening to the global render start event, for example), but again, that's more trouble than what it's worth (and it wouldn't be as efficient anyway).

    For the normalization option, it can be done offline, upon object creation (orxOBJECT_EVENT_CREATE) or upon an object's update. How were you doing it precisely with Unity? Maybe I'm missing something, please let me know if that's the case.

    For major ordering I was just using a function that sets the position of the sprite (I don't use physics at all). So when the position of the sprite changes, I normalize the values (so they're always pixel aligned) and then set the order-in-layer based on the tile Y value (the first prototype used the Y position of the sprite, but this isn't necessary as we only need to sort based on the tile Y value - i.e. if tile (5,6) has a Y position of ~192, we don't need to use that position, we can just use the Y value, that is 6, because there can't be anything between 5 and 6). For minor ordering I watched for the animation frame changing, read the actual configuration of the given frame (which animation it is, which part of the body, etc.) and adjusted the Z position of the given part.
    Yes, definitely, but you've already answered that question yourself. :) As you've also found, adding extra UV coordinates is rather straightforward. I'll add support for those in a generic way as soon as I'll add support for custom vertex shaders.
    Depending on which hardware you target, you'll probably also have to update orx's internal shaders to support additional UVs, but that's only a couple of lines of OpenGL code.
    The way I deal with this in my own current project (for normal/specular mapping) is by using a single UV coordinate + offsets (and scales, if needed) to calculate additional UV coordinates within the shader itself. It does add dependent texture lookups, but in my case they never were a performance issue.

    I've looked at the OGL code, but I don't know if I understand it correctly, so I don't want to modify it right now to support more UVs.
    But I don't want to use the generic shader; I would use a custom shader only for the terrain. [strike]Couldn't I then pass custom vec3/vec2 values to the shader and use them there as UV coordinates?[/strike]
    edit: Looks like I will still need more coordinates, because this data needs to be per vertex.
    It really depends on the details of your setup. In the past I've used both mesh + shader and simple full screen quad + everything done in shader approach. The choice was made on a per case basis and most of the time it's easy enough to setup to try and check the performance rather than speculate about them.
    If your map can be easily calculated by a shader within a full screen quad, I'd start with that option as it might be the simplest one to setup.
    I've already tried a simple terrain, but only with 1 object per tile. The performance is OK. So maybe in the future I'll rewrite it using a mesh, once more than 1 UV is supported (and if performance becomes a concern).
    Yes, if you mean picking at a given position, you should look at orxObject_Pick() and its box alternative. They both support group filtering as well.
    Ahh, thanks. Will look at it.
    As long as we're talking about this, have you found the C++ wrapper I wrote called Scroll? It adds a few interesting features to orx itself (including a rather simple in-game level editor) and is only composed of a few header files (nothing pre-compiled). I use it for all my projects to easily bind C++ classes to config section as well as use convenience virtual method calls for most common events.
    Acksys started writing a couple of starter tutorials about it here: http://orx-project.org/wiki/en/orx/tutorials/community/acksys
    Yes, I've looked at it but for now I just want to use pure orx to know how the things are working.
    As always, don't hesitate if you have any questions on orx (or Scroll) or if any answer wasn't satisfying.
    Thanks for your kind help. I'll try the other things today to find out how it goes. Mainly those terrain shaders, and then I'll try a simple animation.

    Thank you
    Trigve
  • edited December 2014
    So I've tried to add support for a multi-texturing mesh. I've written a new function and added support for multi-texturing to the OGL plugin. But it isn't working :) The mesh is shown for a couple of frames and then it isn't shown anymore (I'm drawing it in the "update" function). The mesh vertices are used, I can see that while debugging. The textures also look wrong somehow :) Using orxDisplay_DrawMesh() everything works fine (without multi-texturing, of course).
    I don't know if I want to spend more time on this, because I want to test other things here such as animations etc. But I would need to know whether I can assume it will be implemented in the future (near future?), with or without my help.

    edit: OK, it is actually working, but multi-texturing isn't (or maybe it is working, but I haven't attached a custom shader and the last texture bound is used by the default shader). And because the mesh isn't an object in the scene, it is always drawn in the same place, which complicates things even further.

    edit2: Thinking about it more, it looks like what I really need (and what would also be better) is to be able to use multiple textures (with multiple texture coordinates) with an orxOBJECT. Then I could attach multiple sprites to 1 orxOBJECT and blend them in the shader.

    edit3: What I need is something like what's described in this post: https://forum.orx-project.org/discussion/5726#Comment_5728 . Blending between 2 or more textures, where another texture defines how the blending should be done (in the shader).
  • edited December 2014
    Trigve wrote:
    What do you mean by "flatten"? Maybe I don't get it, but if I put an .ini alongside the executable, I need to use relative paths there. So when I release the game, should I also change the .ini? Because it wouldn't be in the "Debug" folder anymore and all the paths in the .ini would be wrong, or wouldn't they?

    You can use a hierarchy of config files during dev phase, including them with the @file.ini@ syntax. For release, you can use orxCrypt to "flatten" all that hierarchy into a single config file, encrypted or not. Of course, this is optional, I was just pointing at this option.

    You do not need to use relative paths; you can declare all your paths to the resource module (either in the root config file or at runtime in code), and orx will go through those paths (they're called storages as they don't actually need to be folders per se) until the requested resource is found.
    That's also a very handy way of pointing to a different storage based on resolution, platform, aspect ratio, etc...

    The paths don't need to exist either, for example, in the latest game I've been working on, here's how I defined the resource storage:
    [Resource]
    Config = data.pak # ../data/config
    Texture = data.pak # ../data/texture
    Sound = data.pak # ../data/sound
    Word = data.pak # ../data/word
    

    I added a resource type handler that knows how to handle a .pak file (actually I'm simply using a zip file there) and that's what I use for release builds. If the resource isn't found there (or if data.pak doesn't exist, which is the case in dev mode), orx will use the next storage in list to find resources (ie ../data/config for config, etc), which are the paths I use in dev mode.

    The Word resource is custom, orx doesn't use it natively, that's just where I store my dictionaries (it's a word game). More about the resource module here on the orx-dev group: https://groups.google.com/forum/#!forum/orx-dev or with this mini-tutorial: https://bitbucket.org/iarwain/resource

    In any case, any storage or direct config include that does not exist will simply be ignored. For example, I always have this include in all my projects, after all other includes:
    @dev.ini@
    

    That allows me to override any property and add debug features (like extra viewports, debug inputs, cheats and debug text) in dev.ini and this file never exists in release builds.
    Even though the Z position is a 32-bit float, isn't the depth buffer only 16-bit (in most cases)? Anyway, even 16 bits might be enough, maybe.

    The norm is usually 24-bit Z buffer + 8-bit stencil. Some older Android devices being the exception there. That being said, orx does not use a depth buffer internally but one can be created optionally upon user request.

    One of the features on my todo list will be to provide an optional early-Z rejection which might improve GPU performance on some devices when complex shaders are used: https://bitbucket.org/orx/orx/issue/54/add-optional-z-prepass
    This feature would use a Z buffer.
    I don't know if I could normalize the Z position. Maybe I could, after loading the map and computing the max map height.
    I've also done dynamic frustum changes in Unity. I used 2 cameras, 1 for the terrain, with constant near/far planes, and another camera for the dynamic stuff, where I changed the frustum based on the Y position of the camera. It was working, but it wasn't a very clean solution.

    You could save this info when exporting your map too. But if you don't want to bother with this and don't have coordinate values over 100000, using -Y as Z should work just fine provided you define a big enough frustum for your camera.
    For major ordering I was just using a function that sets the position of the sprite (I don't use physics at all). So when the position of the sprite changes, I normalize the values (so they're always pixel aligned) and then set the order-in-layer based on the tile Y value (the first prototype used the Y position of the sprite, but this isn't necessary as we only need to sort based on the tile Y value - i.e. if tile (5,6) has a Y position of ~192, we don't need to use that position, we can just use the Y value, that is 6, because there can't be anything between 5 and 6). For minor ordering I watched for the animation frame changing, read the actual configuration of the given frame (which animation it is, which part of the body, etc.) and adjusted the Z position of the given part.

    Using render-to-texture should simplify this as you won't have to deal with minor ordering at all.
    I've looked at the OGL code, but I don't know if I understand it correctly, so I don't want to modify it right now to support more UVs.
    But I don't want to use the generic shader; I would use a custom shader only for the terrain. [strike]Couldn't I then pass custom vec3/vec2 values to the shader and use them there as UV coordinates?[/strike]
    edit: Looks like I will still need more coordinates, because this data needs to be per vertex.

    Yes, right now you can't add vertex data, only uniforms or reconstruct them per-pixel, which is more expensive of course (but it never had been a problem for me so far). Modifying orx to support additional vertex data should be rather straightforward for a specific case but would take more work in order to be generic (like it was done for uniforms in fragment shaders).
    I've already tried a simple terrain, but only with 1 object per tile. The performance is OK. So maybe in the future I'll rewrite it using a mesh, once more than 1 UV is supported (and if performance becomes a concern).

    Sounds good.
    Yes, I've looked at it but for now I just want to use pure orx to know how the things are working.

    Makes sense. =)
    Thanks for your kind help. I'll try the other things today to find out how it goes. Mainly those terrain shaders, and then I'll try a simple animation.

    My pleasure. Don't hesitate to ask whenever you have any doubts, I know that it's not always easy to find info on orx's internals itself and it's often easy to overlook features or details. =)

    I'll answer your other post later today as I have to go run an errand for now, sorry!

    Cheers,

    iarwain
  • edited December 2014
    Thanks for the reply; I'll need to take a longer look at the .ini files.
    Yes, right now you can't add vertex data, only uniforms or reconstruct them per-pixel, which is more expensive of course (but it never had been a problem for me so far).

    You mean waiting for the shader events and then passing the texture coordinates?
    Modifying orx to support additional vertex data should be rather straightforward for a specific case but would take more work in order to be generic (like it was done for uniforms in fragment shaders).

    I know, I know. I just need to know whether it would be possible. I could make a terrain without any splatting now, and once everything else is done I could handle it then. But it would be a shame to find out later that it somehow isn't possible (I'm not implying that it isn't :) ).


    Anyway, thanks for the answers:)
    I'm also sorry that I keep editing my posts, but I want to provide as much information as I can, since the question/answer turnaround is rather long. So every answer is valuable to me, as time is running short for me :)
  • edited December 2014
    I'm back! :)

    Trigve wrote:
    So I've tried to add support for a multi-texturing mesh. I've written a new function and added support for multi-texturing to the OGL plugin. But it isn't working :) The mesh is shown for a couple of frames and then it isn't shown anymore (I'm drawing it in the "update" function). The mesh vertices are used, I can see that while debugging. The textures also look wrong somehow :) Using orxDisplay_DrawMesh() everything works fine (without multi-texturing, of course).
    I don't know if I want to spend more time on this, because I want to test other things here such as animations etc. But I would need to know whether I can assume it will be implemented in the future (near future?), with or without my help.

    Sure, I can work on that when I return from vacation in January. My current focus is on turning the commands into a better scripting system and redesigning a sizable part of the animation system to make it more flexible, easier to use, and able to support skeletal animation (as well as user-defined data, with corresponding hooks).
    I can still look into multi-texturing as it should be a smaller task. That's actually how a few changes have happened in the past: MRT support and Unicode, for example, were implemented within a couple of weeks of user requests.
    edit: OK, it is actually working, but multi-texturing isn't (or maybe it is working, but I haven't attached a custom shader and the last texture bound is used by the default shader). And because the mesh isn't an object in the scene, it is always drawn in the same place, which complicates things even further.

    Yes, right now drawing a mesh is done in screen space however it's easy to bind it to a dummy object and get its transformation when listening to the orxRENDER_EVENT_OBJECT_START.
    That being said, there's also an issue about supporting mesh as first citizen instead of just quads.
    If that's very important for you, were you to choose using orx, I can bump it higher on my todo list (and I'm sure others will take advantage of this as well, including myself :)).
    edit2: Thinking about it more, it looks like what I really need (and what would also be better) is to be able to use multiple textures (with multiple texture coordinates) with an orxOBJECT. Then I could attach multiple sprites to 1 orxOBJECT and blend them in the shader.

    Mmh, that could be interesting, I need to think about it.
    However, even though there isn't currently any direct support for binding more than one graphic (sprite) to an object, it should be doable by using a custom shader. But again, the UVs would then need to be computed inside the fragment shader instead of being interpolated from vertex data (varying).
    I'll try to think about a way to support it natively in order to avoid having to do those calculations.
    edit3: What I need is something like what's described in this post: https://forum.orx-project.org/discussion/5726#Comment_5728 . Blending between 2 or more textures, where another texture defines how the blending should be done (in the shader).

    You should already be able to do this, including defining the additional textures on a per-object basis. The only issue is the UV computation we talked about earlier, and you'd need to do it yourself inside the fragment shader for now.

    On the side, I'll think more about how to add support for multi-graphics objects + have the UV passed as vertex data.
  • edited December 2014
    Trigve wrote:
    Thanks for the reply; I'll need to take a longer look at the .ini files.

    Yes, don't hesitate with any questions you may have regarding them. There's more than meets the eye at first. I still discover inventive new ways of using the whole system every now and then, even though I wrote it and have been using it for years. ;)
    You mean waiting for the shader events and then passing the texture coordinates?

    Well I usually compute the actual coordinates within the shader itself, based on an offset that can either be constant (with careful texture organization) or fed through the shader event.
    This last option breaks batching though, so performance might be affected with hundreds of objects. Another way to do it would be to have a given set of offsets and a single shader variation for each offset. If uniforms aren't modified at runtime, batching will occur naturally (based on the UseCustomParam value).
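    For the event-fed option, a hedged sketch (the "UVOffset" parameter name and the offset values are made up; payload field names may differ between orx versions):

    static orxSTATUS orxFASTCALL ShaderEventHandler(const orxEVENT *_pstEvent)
    {
      if(_pstEvent->eID == orxSHADER_EVENT_SET_PARAM)
      {
        orxSHADER_EVENT_PAYLOAD *pstPayload = (orxSHADER_EVENT_PAYLOAD *)_pstEvent->pstPayload;

        /* Feeding a per-object value here is what breaks batching, as noted above */
        if(!orxString_Compare(pstPayload->zParamName, "UVOffset"))
        {
          orxVector_Set(&pstPayload->vValue, orx2F(0.25f), orx2F(0.5f), orxFLOAT_0);
        }
      }

      return orxSTATUS_SUCCESS;
    }

    /* Registered once at init time: */
    /* orxEvent_AddHandler(orxEVENT_TYPE_SHADER, ShaderEventHandler); */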

    That being said, we released a game a couple of months ago where we do normal mapping inside a fragment shader with values sent through the events for 50-200 sprites every frame, and we were still able to reach a good framerate, so again, it's something to consider on a per-project basis.
    All this will be entirely solved the day I find a satisfying way of supporting custom vertex shaders and their associated vertex payload in a generic data-driven way.
    I know, I know. I just need to know whether it would be possible. I could make a terrain without any splatting now, and once everything else is done I could handle it then. But it would be a shame to find out later that it somehow isn't possible (I'm not implying that it isn't :) ).

    Yes, that shouldn't be a problem at all. I can't promise it for the coming weeks as I'm on vacation, but it should happen before you really need it. :)
    Anyway, thanks for the answers:)
    I'm also sorry that I keep editing my posts, but I want to provide as much information as I can, since the question/answer turnaround is rather long. So every answer is valuable to me, as time is running short for me :)

    No worries for the edits, it's too bad this forum doesn't support additional notifications for them though. :)

    Again, if anything isn't clear enough, please let us know!

    Cheers,

    iarwain
  • edited December 2014
    Yes, right now drawing a mesh is done in screen space however it's easy to bind it to a dummy object and get its transformation when listening to the orxRENDER_EVENT_OBJECT_START.
    That being said, there's also an issue about supporting mesh as first citizen instead of just quads.
    If that's very important for you, were you to choose using orx, I can bump it higher on my todo list (and I'm sure others will take advantage of this as well, including myself ).

    Based on your last reply, I'll do the terrain without splatting for now (just a single sprite) and try the other things. So the mesh isn't that important to me right now. But I was also wondering why meshes aren't first-class citizens here :) I would welcome the new mesh support, of course, but I don't think it should be a priority for me now. See also below.
    Mmh, that could be interesting, I need to think about it.
    However, even though there isn't currently any direct support for binding more than one graphic (sprite) to an object, it should be doable by using a custom shader. But again, the UVs would then need to be computed inside the fragment shader instead of being interpolated from vertex data (varying).
    I'll try to think about a way to support it natively in order to avoid having to do those calculations.
    ...
    On the side, I'll think more about how to add support for multi-graphics objects + have the UV passed as vertex data.

    I see. I would welcome this:)
    Well I usually compute the actual coordinates within the shader itself, based on an offset that can either be constant (with careful texture organization) or fed through the shader event.
    This last option breaks batching though, so performance might be affected with hundreds of objects. Another way to do it would be to have a given set of offsets and a single shader variation for each offset. If uniforms aren't modified at runtime, batching will occur naturally (based on the UseCustomParam value).

    That being said, we released a game a couple of months ago where we do normal mapping inside a fragment shader with values sent through the events for 50-200 sprites every frame, and we were still able to reach a good framerate, so again, it's something to consider on a per-project basis.
    All this will be entirely solved the day I find a satisfying way of supporting custom vertex shaders and their associated vertex payload in a generic data-driven way.
    ...
    Yes, that shouldn't be a problem at all. I can't promise it for the coming weeks as I'm on vacation, but it should happen before you really need it.

    The thing is that I'm using corner tiles (I couldn't find the paper right now) for my terrain. So I have 16 variations of each terrain type (grass, mud, sand, ...), and then 8 variations for the splats. And one tile is composed of the main texture, a secondary texture (in practice, the primary and secondary textures are the same huge texture, only the texture coordinates differ) and the alpha splat. I would then need to create a lot of shader combinations, if I'm not mistaken.
    But let's set this problem aside for now :) The conclusion is, I'll try the simple terrain first. Then, once the other things are working all right, if multi-graphics per object hasn't been implemented yet, we can discuss it further :)

    Thank you for your time.
  • edited December 2014
    Trigve wrote:
    Based on your last reply, I'll do the terrain without splatting for now (just a single sprite) and try the other things. So the mesh isn't that important to me right now. But I was also wondering why meshes aren't first-class citizens here :) I would welcome the new mesh support, of course, but I don't think it should be a priority for me now. See also below.

    It's just historical: orx was created in 2001/2002 and was targeting what is now considered very low-end hardware (GBA, NDS, ...). So it was a good ol' traditional 2D sprite/blit-based renderer at first and has evolved over the years.
    I'll probably do a big cleanup pass in the coming year and try to factorize all the display plugins as much as possible to keep the common OpenGL code in a single place, as it has grown quite a lot since the beginning. When that's done, it'll be much faster to add new render-related features on all platforms. Adding meshes shouldn't be very hard as all the ingredients have been added over the years; they simply need to be tied together. :)
    I see. I would welcome this:)

    As soon as custom vertex shaders are supported, this should be pretty easy to add. :)
    The thing is that I'm using corner tiles (I couldn't find the paper right now) for my terrain.

    I remember reading a paper about colored corners vs colored edges for Wang tiles a few years ago, is that the one you are referring to?
    http://graphics.cs.kuleuven.be/publications/LD06AWTCECC/

    I've never used it myself, but it looked interesting.

    As a side note, if you're doing procedural generation, have you read Sean Barrett's take on it? He called it Herringbone Tiles: http://nothings.org/gamedev/herringbone/
    So I have 16 variations of each terrain type (grass, mud, sand, ...), and then 8 variations for the splats. And one tile is composed of the main texture, a secondary texture (in practice, the primary and secondary textures are the same huge texture, only the texture coordinates differ) and the alpha splat. I would then need to create a lot of shader combinations, if I'm not mistaken.

    Yes, that would bring a lot of shader permutations (which is usually what happens in commercial games; for Child of Light, even with relatively simple 2D shaders we ended up with ~5000 permutations). I'm not saying it's the best approach for every case, but it's still something to keep in mind. ;)
    But let's set this problem aside for now :) The conclusion is, I'll try the simple terrain first. Then, once the other things are working all right, if multi-graphics per object hasn't been implemented yet, we can discuss it further :)

    Sounds very reasonable to me! :)
    Thank you for your time.

    My pleasure! :)
  • edited December 2014
    Ah yes, I forgot to add this: when the time comes for new feature requests, try to have a look at the "issue" list on bitbucket (https://bitbucket.org/orx/orx/issues?status=new&status=open) and if you don't already find what you'd like to see in the engine, you can come and discuss it first either on the forum, the orx-dev group or on IRC.
    Then you can enter a new issue and vote for it.

    The two-step process isn't mandatory, of course, but issues that have a long discussion behind them are harder to process when the time comes to implement them. ;)

    Also I'm thinking of doing weekly live support using twitch streaming, starting next year, as I think actual examples might be worth thousands of words and theories. :)
  • edited December 2014
    It's just historical: orx was created in 2001/2002 and was targeting what is now considered very low-end hardware (GBA, NDS, ...). So it was a good ol' traditional 2D sprite/blit-based renderer at first and has evolved over the years.
    I'll probably do a big cleanup pass in the coming year and try to factorize all the display plugins as much as possible to keep the common OpenGL code in a single place, as it has grown quite a lot since the beginning. When that's done, it'll be much faster to add new render-related features on all platforms. Adding meshes shouldn't be very hard as all the ingredients have been added over the years; they simply need to be tied together. :)

    Looking forward to it :)

    I remember reading a paper about colored corners vs colored edges for Wang tiles a few years ago, is that the one you are referring to?
    http://graphics.cs.kuleuven.be/publications/LD06AWTCECC/

    Yes this is it.
    I've never used it myself, but it looked interesting.

    As a side note, if you're doing procedural generation, have you read Sean Barrett's take on it? He called it Herringbone Tiles: http://nothings.org/gamedev/herringbone/

    We're not doing procedural terrains, but I was thinking about generating a semi-procedural one for testing (you know, it's faster to generate something that will only be used for testing than to make it by hand). I'll look at the link, thanks.
    Yes, that would bring a lot of shader permutations (which is usually what happens in commercial games; for Child of Light, even with relatively simple 2D shaders we ended up with ~5000 permutations). I'm not saying it's the best approach for every case, but it's still something to keep in mind. ;)
    It isn't a problem if you generate the shaders somehow, but writing that many shaders by hand would be a lot of work for me, I think :)
    Ah yes, I forgot to add this: when the time comes for new feature requests, try to have a look at the "issue" list on bitbucket (bitbucket.org/orx/orx/issues?status=new&status=open) and if you don't already find what you'd like to see in the engine, you can come and discuss it first either on the forum, the orx-dev group or on IRC.
    Then you can enter a new issue and vote for it.

    The two-step process isn't mandatory, of course, but issues that have a long discussion behind them are harder to process when the time comes to implement them.

    Also I'm thinking of doing weekly live support using twitch streaming, starting next year, as I think actual examples might be worth thousands of words and theories.

    edit: Forgot to say that I really like the live profiler :)

    Thank you, will look at it.

    By the way, merry christmas :)