I've successfully built and tested this thing. It was created by Enis Bayramoglu (enobayram), who isn't active at the moment. It looks pretty cute, if anyone is interested. All Java code.
I've actually regularly lurked the forums all this time but I'm also glad to be interacting with you again.
Sausage has shown a great deal of love for the animation editor, and came with a nice set of suggestions. I'm trying to implement them as I find the time (and the energy). The toughest (and the coolest) one is to use the orx config module itself to parse the .ini files of existing projects, so that people can use the editor as a drop-in tool. We've been discussing various ways, but one common obstacle is whether there are precompiled binaries for all platforms and all bit sizes (32 and 64). In particular, I couldn't be sure whether there are Win64 binaries. I know Win64 can run 32-bit executables, but a 64-bit Java virtual machine can't call 32-bit DLLs.
BTW, can orx config write back to the source .ini files? Also, can I query where exactly a configuration value comes from? I mean, say I have a config section X that inherits from section Y and receives the field F from there. Can orx config tell me that F of X comes from Y?
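To make the question concrete, here's the kind of setup I mean, in orx's config syntax (section and key names made up):

```ini
; Y defines the value
[Y]
F = 45

; X inherits from Y and doesn't define F itself, so reading F with X
; selected yields 45 -- can orx tell me it actually came from Y?
[X@Y]
```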
Well I'm glad to learn that you were never far then!
I saw your commits related to Sausage's suggestions but haven't had the opportunity to really look into it yet (probably not before coming back from vacation).
There are no precompiled 64-bit binaries, but that's simply because I haven't installed a Visual Studio capable of producing them yet; there shouldn't be any problem compiling them (after all, it works on Linux and OSX). I can look into it when I come back, as I'll probably install the VS 2013 community edition and add all the appropriate binaries, including 64-bit ones.
Orx can write back to the originating .ini file (it's a parameter in the orxConfig_Save() function).
However note that you'll lose any special indentation and comments.
Orx currently can tell you if a value is inherited or not (orxConfig_IsInheritedValue()), but it won't tell you the actual source section. I can easily add such an accessor if you need it (the info is available, it's just not exposed).
It'd be great if the Win64 orx.dll were available and up-to-date online. Then the editor could just download the right file based on the currently running installation. I'm planning to access the functionality in the orx binary through Java Native Access, so that I shouldn't have to compile any native binaries (for all platforms) myself.
Orx can write back to the originating .ini file (it's a parameter in the orxConfig_Save() function).
However note that you'll lose any special indentation and comments.
Will I lose them on the modified line or the entire file?
I can easily add such an accessor if you need it (the info is available, it's just not exposed).
That'd be great if it's not too much trouble. I think that information should be exposed to the user of the editor somehow.
I'll try to have the win64 binaries up in a week or so.
As for the .ini file, you don't lose the content, just the comments and indentation; and yes, it's the whole file (the whole file is overwritten). If your file doesn't contain any manual modifications, that shouldn't really matter.
I'll add the accessor next week, when back from vacation. Can you open an issue for that on bitbucket and assign it to me?
As you probably already know, the accessor was added last week.
I added Win64 binaries (and builds to the build machines) yesterday, using the VS2013 setup.
There's no permanent binary online, but if you tell me where you'd like them to be, I can make sure the nightly builds are sent there. The current Win64 nightly build can be fetched here: http://sourceforge.net/projects/orx/files/orx/nightly/orx-dev-vs2013-64-nightly-2015-01-12.zip/download (the link only works for 24h, till the next nightly pass is done).
Hi, first of all, thanks a lot for adding the accessor so fast. It'll be very useful once I get back to the animation editor.
As for the 64-bit binaries, I've started to think that downloading the appropriate orx binary based on the user's environment is probably not a good idea. First of all, the config handling of the editor will be deeply coupled to the orx version that I compile it against, so downloading at runtime has no functional benefit such as the ability to choose an orx version. Another reason is that it'll probably be much easier for me to simply pack all 6 binaries (3 platforms x 2 bitnesses) into the editor .jar file.
So, since the binaries will be packed into the .jar at compile time, there isn't much need for automation. I'll simply document in the build steps that one should download all 6 precompiled binaries and extract them somewhere the build script can collect them from and add them to the .jar.
In conclusion, the way they're currently organized on SourceForge is perfectly fine.
BTW, just out of curiosity, is there any reason you're distributing the development bundles separately as vs2012, vs2013 etc.? Since orx is a C library, I'd expect the binaries from different compilers on the same platform to be compatible with each other. Am I missing something?
As for the multiple versions, you'd expect such compatibility but that wasn't always the case, especially between vs2005 and vs2008.
I haven't checked more recent versions but I got in the habit of doing this and it simplifies the building/packaging process as well.
As you know, these days I'm attempting to call the orxConfig functions from Java, so that I can use them to parse the .ini files of existing projects. I'm trying to do that through a tool called JNAerator, which parses your C headers and emits pure Java files that can call into your C binary, without requiring you to compile any extra native binaries for the gluing (unlike JNI). In theory, the emitted .java files are platform-independent, and they should be able to call the functions from binaries compiled for the currently running platform. In the specific case of orx, though, I'm worried that this might not work, since orx uses billions of compile-time switches, which in effect make the headers themselves dependent on the platform.
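For anyone unfamiliar with the approach: JNA is to Java roughly what ctypes is to Python; you load the shared library at runtime and declare the C signatures on the managed side, with no glue code to compile. A toy sketch of the same mechanism in Python's ctypes, using the standard C math library as a stand-in for the orx binary:

```python
import ctypes
import ctypes.util

# Load a shared library at runtime -- libm here, standing in for
# liborx.so / orx.dll / liborx.dylib.
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)

# Declare the C signature on the managed side, which is essentially
# what the JNAerator-generated Java interfaces do for orx's headers.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```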
Anyway, it seems that I've managed to call the orxConfig functions from Java on my development machine (Linux x64), and I've prepared an orxjnatest.jar file that contains the orx binaries for all the desktop platforms, as well as a simple Java class that tests the use of the relevant orxConfig functions.
I've uploaded the orxjnatest jar to this link, along with a test.ini file that the test tries to read from. I'd be glad if you could run it on your platform and see if it works. If it does, it will create pop-ups that say "testval read as float is: 45.0", "testval read as S32 is: 45", and likewise for U32, S64, U64 and String.
Thanks for testing it iarwain! I'm glad it worked on OSX. I've also managed to try it on Win 8 64-bit, and it works there as well. However, just as I was afraid, it failed miserably on Linux 32 bit. That's probably due to the fact that orx changes the function signatures based on platform dependent compile-time switches.
I've not given up on JNAerator yet though, since it works so well when it does. I'll try to preprocess the orx headers, manually defining key symbols to mimic a 32-bit Linux environment, and then run JNAerator on the result, obtaining a 32-bit-compatible interface into the orx binary. Then I'll make sure the right interface gets used at runtime. This will probably be tricky, since the Java signatures will change this time. Still, it sounds like a nice challenge.
I'll try it on Win7 64-bit tonight as well, but I don't expect any surprise.
Regarding the compile flags, I can modify orxDecl.h so that you could manually override the calling conventions, hence removing some of the differences between platforms. You'd need to compile orx yourself with your own defines for orxFASTCALL, orxSTDCALL and orxCDECL, but that shouldn't be a problem.
Regarding the compile flags, I can modify orxDecl.h so that you could manually override the calling conventions, hence removing some of the differences between platforms. You'd need to compile orx yourself with your own defines for orxFASTCALL, orxSTDCALL and orxCDECL, but that shouldn't be a problem.
Thanks for the offer, but that would defeat the goal of not having to compile any native binaries. You know how native binaries complicate the build system for a cross-platform project. I'll try to run JNAerator on the orx headers, mimicking a Win32 environment through predefined symbols, since that's the most demanding platform. I'll then try to combine the generated 32-bit and 64-bit java interfaces under a more general interface. Is orxDecl.h the only place that influences the function signatures and the calling conventions?
By the way, I have a tangentially related question. The last time I checked the downloads page (probably well over a year ago) I don't remember seeing the Linux binaries. When I saw them this time, I was pleasantly surprised. May I ask what steps you've taken to make sure that the binaries work across distributions? Or do they actually work across distributions? Did they just work for me because I have a pretty standard distribution (i.e. Ubuntu 14.04)?
I think that if you use ABI-compatible binaries it would work on most Linux distributions (provided, of course, you have all the dependencies installed).
Thanks for the answer, but I wonder how you make sure that they are ABI compatible: the Linux64 orx binary is just 4.2MB, and it's probably relying on a lot of shared objects being on the load path. Some of these OS dependencies may or may not be ABI compatible across distros, and some of them might only guarantee a certain subset of their API to be ABI compatible. You can, for instance, specially prepare your binaries to avoid symbols introduced in newer versions of glibc, so that they can run on very old systems. I'm curious about the steps iarwain thought would be enough to support a reasonable range of distros.
Mmh, weird, the linux libraries should have been there, even a year ago. They might have been released a day or two later than other platforms (as the whole process took me 6h of work), but in the end everything should have been available.
I do not do anything at orx's level, really. However, when releasing Little Cells, in addition to a small script that would create a desktop entry and select the correct architecture between x86 & x64, I'd package the extra dependencies as well:
- libstdc++
- libsndfile
- libopenal
- libgcc_s
I've tried to get the JNAerator-based Java-Orx interface working for a while, but I could never get it to work properly on 32-bit systems. So I've decided to either (1) compile orx with emscripten to JavaScript and run the result inside Java's standard JavaScript interpreter or (2) use SWIG to generate interface code. After seeing that knolan's orxEditor also needs similar bindings (for Python), I've thought maybe it's best to go with SWIG instead and generate bindings for any language we like.
If we decide to do it this way, I'd need to get the SWIG generated C/C++ sources compiled for orx's targets (only the desktop ones initially). Are you still willing to use the build server for this purpose?
If so, how would you like to proceed? In the end, we'll have a SWIG interface definition file and a script to generate all the sources for the bindings. The generation needs to be done once, but the sources need to be compiled for each platform.
For example, for Python, SWIG will generate:
* orx_python_bindings.cxx
* orx.py
Then we need to compile orx_python_bindings.cxx for each platform, and package and distribute the binaries along with orx.py.
I've tried to get the JNAerator-based Java-Orx interface working for a while, but I could never get it to work properly on 32-bit systems. So I've decided to either (1) compile orx with emscripten to JavaScript and run the result inside Java's standard JavaScript interpreter or (2) use SWIG to generate interface code. After seeing that knolan's orxEditor also needs similar bindings (for Python), I've thought maybe it's best to go with SWIG instead and generate bindings for any language we like.
If we decide to do it this way, I'd need to get the SWIG generated C/C++ sources compiled for orx's targets (only the desktop ones initially). Are you still willing to use the build server for this purpose?
Of course!
If so, how would you like to proceed? In the end, we'll have a SWIG interface definition file and a script to generate all the sources for the bindings. The generation needs to be done once, but the sources need to be compiled for each platform.
For example, for Python, SWIG will generate:
* orx_python_bindings.cxx
* orx.py
Then we need to compile orx_python_bindings.cxx for each platform, and package and distribute the binaries along with orx.py.
That's an excellent question. I'd love to package and distribute wrappers, and not only for Python, but I haven't given much thought to package naming or to where the SWIG scripts should live in the hierarchy. Maybe under the code/build folder? I have 0 experience with SWIG, so I don't know if there's any requirement at this level.
I'll be happy to modify the buildbot script once we have something working.
Wow, SWIG has been as cooperative as ever! I've already completed the config bindings for Java and Python. The interface is not very natural to the target languages ATM, as in you need to write C-like code:
Python example:
v = orxVECTOR()
orxConfig_GetVector("key",v)
while it would have been much nicer (and easily accomplished with SWIG) to write:
v = orxConfig_GetVector("key")
But at least, SWIG has done all the grunt-work of crossing the language borders. I guess we could improve the bindings for each language over time, but we have something to work with for now.
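Until the typemaps are in place, nothing stops a user from adding the sugar on the Python side. Here's the pattern, with stand-ins for the SWIG-generated orxVECTOR and orxConfig_GetVector (the real ones would come from the generated orx module; the stand-ins just make the sketch runnable on its own):

```python
# Stand-in for the SWIG-generated orxVECTOR wrapper class.
class orxVECTOR:
    def __init__(self):
        self.fX = self.fY = self.fZ = 0.0

# Stand-in for the SWIG-generated binding: fills the out-parameter,
# pretending the config key held (1, 2, 3).
def orxConfig_GetVector(key, v):
    v.fX, v.fY, v.fZ = 1.0, 2.0, 3.0
    return v

# The C-style out-parameter call, hidden behind the nicer form:
def get_vector(key):
    v = orxVECTOR()
    orxConfig_GetVector(key, v)
    return v

v = get_vector("key")
print(v.fX, v.fY, v.fZ)  # 1.0 2.0 3.0
```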
BTW, the current SWIG interface description file is so simple that it could be used for any of the other languages that SWIG supports (including clisp, csharp, d, go, lua, ocaml, php, ruby and others).
Creating bindings for the rest of Orx will probably take a bit longer, since exposing the callback registration to the target language is somewhat more tricky, but it's no big deal.
iarwain wrote:
Of course!
Great!
That's an excellent question. I'd love to package and distribute wrappers, and not only for Python, but I haven't given much thought to package naming or to where the SWIG scripts should live in the hierarchy. Maybe under the code/build folder? I have 0 experience with SWIG, so I don't know if there's any requirement at this level.
I guess I know one half of the equation and you know the other, so let's discover together, shall we?
/
  CMakeLists.txt
  orx.i                 # The SWIG interface definition file
  test.ini              # A small ini file for test purposes
  test.py               # A small python test script
  cmake/
    FindORX.cmake       # A cmake find file to find ORX (used by CMakeLists.txt)
  build/
    orxPYTHON_wrap.cxx  # The python wrapper generated by SWIG
    orxJAVA_wrap.cxx    # The java wrapper generated by SWIG
    orx.py              # The python module generated by SWIG
    *.java              # The java class files for the SWIG-generated java module
As you might have noticed, I've included some files from my cmake build folder "build" in case you'd like to try it without installing cmake or SWIG, but here's how you'd do it from scratch:
mkdir build
cd build
cmake .. -DGENERATE_PYTHON_BINDINGS=TRUE \
         -DGENERATE_JAVA_BINDINGS=TRUE \
         -DORX_DIR=<path_to_orx_root_folder>
make
python ../test.py
If you'd like to just try the pre-generated sources I've sent you, please compile the .cxx files using something similar to how cmake does it (the names of the binaries are important etc.):
I'll be happy to modify the buildbot script once we have something working.
My notes about the buildbot script:
1. I think SWIG should only be run in one place, and the generated .cxx files should be compiled on all the build slaves. Running SWIG on different computers runs the risk of generating slightly different interfaces, which will be a big problem since the users of the binding must see a single cross-platform, say, .py file.
2. We should gather all the compiled binaries and package them into a single library for the target language. I've tried this for Java and it works quite nicely. In the end you get an innocent looking .jar that contains everything for every platform. Naturally, this step will be quite language-dependent.
I did try to play around a bit with SWIG at about the same time, two days ago, but it was my first contact with it so I had a very blunt approach.
Here's the .i I wrote, which contains some windows-specific defines that I thought would be given to the command line instead (as well as the inclusion of windows.i, which should be conditional upon said defines).
There's no language-specific idiom, but, aside from some warnings, it looked like it was able to generate valid wrappers for the languages I tried (Python, Lua and Go).
I was also thinking of excluding all the "private" API in orx, in all the .h files, from __orxEXTERN__ to help with the process.
Now regarding the build steps you mentioned:
1- if we want to generate the wrappers on a single build machine, they'll have to be part of the hg repository and generated every time the headers change, very similar to the way the doxygen doc is currently maintained.
2- this is a bit more problematic as build machines are not up all the time (the OSX/iOS ones are actually almost never up) and doing such inter-dependencies in buildbot is actually rather tricky, albeit not unfeasible. Also, when you say:
We should gather all the compiled binaries and package them into a single library for the target language.
How do you package windows/osx/linux binaries into a single library? Into a single package, I could see, but into a single library, I'm not sure how it works.
A first step could be to have separate packages per target architecture (ie. windows/osx/linux for all the languages), like it's apparently done by some other libraries (I just checked SFML and that's the approach they've taken)?
Here's the .i I wrote, which contains some windows-specific defines that I thought would be given to the command line instead (as well as the inclusion of windows.i, which should be conditional upon said defines).
Nice try for a first attempt! Even though SWIG is quite smart, it still requires some hand-holding. For instance, it's smart enough to wrap a (char *) as a string in the target language, but it really doesn't know what to make of a char **. That's why I have the bit that goes:
Because SWIG knows what to do with a vector<string> (thanks to %include "std_vector.i").
Also, I've been reluctant to show it orxDecl.h and orxType.h directly, as that, in my mind, runs the risk of leaking something platform dependent to the generated wrappers. I instead want it to use the broadest types in the wrappers by lying to it about #defines such as orxFLOAT. In the end, the C compiler will see the true #defines for each platform, and compile the wrappers correctly.
By the way, why did you need to include windows.i? In general, I think we need to keep the .i files completely platform-independent. Think about this: almost all the target languages are platform-agnostic. So, e.g., if we're going to generate Python bindings, those bindings should work exactly the same way on all platforms. In the end, the user's Python codebase will see a single orx.py, and it shouldn't matter which platform was used to generate that file. A single orx.py, multiple _orx.{so,dll,dylib}s. The only way to make that work is to generate the wrappers on a single platform, and compile the very same generated .cxx on all the platforms (and make sure that it does compile).
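As a sketch of what the loading side of that single orx.py could do (file names here are hypothetical, just to illustrate the idea):

```python
import struct
import sys

def native_lib_name(system=sys.platform):
    """Pick the platform-specific native binary backing the single,
    platform-independent orx.py (hypothetical file names)."""
    bits = struct.calcsize("P") * 8  # 32 or 64, from the running interpreter
    if system.startswith("win"):
        return "_orx{}.dll".format(bits)
    if system == "darwin":
        return "_orx{}.dylib".format(bits)
    return "_orx{}.so".format(bits)

print(native_lib_name())
```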
There's no language-specific idiom, but, aside from some warnings, it looked like it was able to generate valid wrappers for the languages I tried (Python, Lua and Go).
Staying language-agnostic has the huge benefit of being able to generate bindings for any language; we can also, in time, focus on some languages and make the bindings more natural via conditionally included interface code.
I was also thinking of excluding all the "private" API in orx, in all the .h files, from __orxEXTERN__ to help with the process.
Can you give a specific example of a function you'd like excluded? One option is to %ignore them individually, but if we can state a pattern in the function signature, we might also be able to ignore them all at once.
1- if we want to generate the wrappers on a single build machine, they'll have to be part of the hg repository and generated every time the headers change, very similar to the way the doxygen doc is currently maintained.
2- this is a bit more problematic as build machines are not up all the time (the OSX/iOS ones are actually almost never up) and doing such inter-dependencies in buildbot is actually rather tricky, albeit not unfeasible.
So we have two challenges; getting the wrapper.cxxs into the build machines, and getting the binaries out of them. I guess we can manage the getting in bit, by making one build machine upload the wrappers to a common repository and the others pulling from there. Another option could be to let each of them run SWIG independently, while making sure that they generate the exact same wrappers.
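If we went the independent-SWIG-runs route, checking that the slaves agree could be as simple as comparing digests of the generated sources; a sketch:

```python
import hashlib

def digest(path):
    """SHA-256 of one generated wrapper file, so that each build slave
    can publish it and the build can fail if the digests disagree."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Each slave would publish digest("orxPYTHON_wrap.cxx") after its
# SWIG pass, and a mismatch would abort the packaging step.
```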
How would you feel about handling the getting out bit "manually"? I mean, it could be triggered manually for each release, once we know that all the slaves have uploaded their binaries.
How do you package windows/osx/linux binaries into a single library? Into a single package, I could see, but into a single library, I'm not sure how it works.
A first step could be to have separate packages per target architecture (ie. windows/osx/linux for all the languages), like it's apparently done by some other libraries (I just checked SFML and that's the approach they've taken)?
Sorry, by a "single library", I meant a single package. Or leaving terms aside, I'd want a single entity, that works across platforms. I've just checked SFML, and they indeed have separate packages per architecture (which I dislike) for Python. On the other hand, their Java bindings are more like what I'd prefer. A single .jar file that contains the binaries for all the platforms.
IMHO, providing separate packages could be inconvenient for the users of the binding. For instance, if I make a game in Python, I'd like my users to just download the game and play, without needing to install a library for their platform. I actually don't know if people distribute python programs this way, so, it may be irrelevant for python, but in Java, downloading a single .jar and running it by double-clicking on it is common practice.
Nice try for a first attempt! Even though SWIG is quite smart, it still requires some hand-holding. For instance, it's smart enough to wrap a (char *) as a string in the target language, but it really doesn't know what to make of a char **. That's why I have the bit that goes:
Ah, I see. It sounds weird to me that it can easily convert char * but has trouble handling char**. It's nitpicking, but you might want to do a reserve() before all the push_backs.
Also, I've been reluctant to show it orxDecl.h and orxType.h directly, as that, in my mind, runs the risk of leaking something platform dependent to the generated wrappers. I instead want it to use the broadest types in the wrappers by lying to it about #defines such as orxFLOAT. In the end, the C compiler will see the true #defines for each platform, and compile the wrappers correctly.
Mmh, which parts concern you precisely?
By the way, why did you need to include windows.i? In general, I think we need to keep the .i files completely platform-independent. Think about this: almost all the target languages are platform-agnostic. So, e.g., if we're going to generate Python bindings, those bindings should work exactly the same way on all platforms. In the end, the user's Python codebase will see a single orx.py, and it shouldn't matter which platform was used to generate that file. A single orx.py, multiple _orx.{so,dll,dylib}s. The only way to make that work is to generate the wrappers on a single platform, and compile the very same generated .cxx on all the platforms (and make sure that it does compile).
Windows.i allows SWIG to gracefully handle all the calling conventions, declspec() tags, etc.
I was thinking of having its inclusion conditional to the __orxWINDOWS__ define. But if you'd rather redefine all the relevant content manually, I don't see any problem with that either.
Staying language-agnostic has the huge benefit of being able to generate bindings for any language; we can also, in time, focus on some languages and make the bindings more natural via conditionally included interface code.
I do think supporting targeted languages idioms will prove beneficial for the end users, when we can. That being said, I'm not the target audience as it's unlikely I'm going to use any of those bindings myself.
Can you give a specific example of a function you'd like excluded? One option is to %ignore them individually, but if we can state a pattern in the function signature, we might also be able to ignore them all at once.
Well doing it via __orxEXTERN__ is also beneficial to the users using the C/C++ includes directly, not just for the wrappers. Things like orx<Module>_Setup/_Init/_Exit are good candidates for that, their intent is definitely to be private, not public.
So we have two challenges; getting the wrapper.cxxs into the build machines, and getting the binaries out of them. I guess we can manage the getting in bit, by making one build machine upload the wrappers to a common repository and the others pulling from there. Another option could be to let each of them run SWIG independently, while making sure that they generate the exact same wrappers.
I see no problem with storing the wrappers directly with the source itself, on the same repository.
Sorry, by a "single library", I meant a single package. Or leaving terms aside, I'd want a single entity, that works across platforms. I've just checked SFML, and they indeed have separate packages per architecture (which I dislike) for Python. On the other hand, their Java bindings are more like what I'd prefer. A single .jar file that contains the binaries for all the platforms.
The separate architecture could be a first step though, as it's easier to put together.
IMHO, providing separate packages could be inconvenient for the users of the binding. For instance, if I make a game in Python, I'd like my users to just download the game and play, without needing to install a library for their platform. I actually don't know if people distribute python programs this way, so, it may be irrelevant for python, but in Java, downloading a single .jar and running it by double-clicking on it is common practice.
That part can always be done by the developers themselves: they could get all the versions and ship whichever combination they want to their end users. Like the current linux32/64 packages at the moment: they are separate .zip files, but usually people making games will retrieve both and ship both versions with their game. It is an extra step for the developer, but only once (when they retrieve the package), and it could simplify the package generation, at least at first.
Regarding the build slave, if you look at code/build/buildbot/install.txt, all the relevant steps should be there. Lemme know if you have any issues.
In your case, the slave would be named orx-mac-slave-enobayram and the password would be: pallas.
Ah, I see. It sounds weird to me that it can easily convert char * but has trouble handling char**. It's nitpicking, but you might want to do a reserve() before all the push_backs.
Well, a char * could mean many things, but in the vast majority of cases it points to a null-terminated string, so SWIG takes the liberty of assuming as much. Besides, a char * is all you need to properly access a null-terminated string. For a char **, though: is it a null-terminated list of null-terminated strings? Is it the address of a pointer to a single null-terminated string? Or is it what it is in this case? So, SWIG doesn't attempt anything fancy when it sees a char ** by default. You can make it wrap a char ** however you wish with some SWIG-fu (you can define typemaps which tell it how to map types), but in this case I didn't think it was worth it for a single function.
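To make the ambiguity concrete, here's what just one of those interpretations (a NULL-terminated array of C strings) takes to build by hand in Python's ctypes; SWIG has no way to guess which of these conventions a given function expects:

```python
import ctypes

def to_char_pp(strings):
    """Build a NULL-terminated char ** from a list of Python strings --
    only one of the several things a C API might mean by char **."""
    arr = (ctypes.c_char_p * (len(strings) + 1))()
    arr[:-1] = [s.encode("utf-8") for s in strings]
    arr[-1] = None  # the terminating NULL
    return arr

argv = to_char_pp(["orx", "--config", "test.ini"])
print(argv[0], argv[2])  # b'orx' b'test.ini'
```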
As for the reserve(), I agree: it's such an easy and harmless optimization that there's no excuse not to do it here. Aside from that, this code is going to talk to Python; besides, we're constructing an extra vector<string> in the first place, and I don't think it's at all possible to avoid that, since most of the target languages keep their strings as Unicode and, worse, not even null-terminated.
In general, I really ignore most performance considerations while writing language bindings, since the activity is excessively wasteful to begin with.
Mmh, which parts concern you precisely?
Well, I'm probably not as comfortable with the orx codebase as you are, so whenever I see a platform-specific #define I'm afraid that it'll cause SWIG to emit different wrappers on each platform. Besides, I'd prefer to have complete control over how SWIG wraps, say, orxFLOAT, so that the wrappers work correctly and similarly on each platform.
Windows.i allows SWIG to gracefully handle all the calling conventions, declspec() tags, etc.
I was thinking of having its inclusion conditional to the __orxWINDOWS__ define. But if you'd rather redefine all the relevant content manually, I don't see any problem with that either.
I see, I guess Windows.i would be essential in a codebase that has declspecs and such all around the codebase, but thanks to your consistent use of macros, we should be able to avoid that problem without it. As I said, I'd prefer to stay away from anything that implies a platform dependency, so a "#define orxFASTCALL // empty" feels much more innocent since we know it'll work the same way on all platforms.
Well doing it via __orxEXTERN__ is also beneficial to the users using the C/C++ includes directly, not just for the wrappers. Things like orx<Module>_Setup/_Init/_Exit are good candidates for that, their intent is definitely to be private, not public.
Ah, case in point: if those are private functions, what's the officially recommended way of using the config module in isolation? I've discovered that you need to call the following first:
I see no problem with storing the wrappers directly with the source itself, on the same repository.
So you mean you'd prefer to keep the generated code in the repository? That really does greatly simplify the getting in problem.
The separate architecture could be a first step though, as it's easier to put together.
...
That part can always be done by the developer themselves...
I definitely agree, I hate it when I unnecessarily complicate things. As Einstein said, "A clever person solves a problem. A wise person avoids it." (Conclusion: I'm definitely not wise.) We could even manually upload the cross-platform binding packages for chosen releases; no need to complicate the build setup.
Regarding the build slave, if you look at code/build/buildbot/install.txt, all the relevant steps should be there. Lemme know if you have any issues.
In your case, the slave would be named orx-mac-slave-enobayram and the password would be: pallas.
Comments
Do you think you could upload the .jar somewhere? I'll add it to the download page of the bitbucket project.
https://bitbucket.org/orx/animationeditor/downloads/OrxAnimationEditor.jar
Wow. Some decent effort went into this.
enobayram, I have a list of ideas for you that would make this editor much more flexible for working with existing projects. I'll pop you a PM.
I think it really deserves some love.
Sausage has shown a great deal of love for the animation editor, and came with a nice set of suggestions. I'm trying to implement them as I find the time (and the energy). The toughest (and the coolest) one is to use the orx config module itself to parse the .ini files of existing projects, so that people can use the editor as a drop-in tool. We've been discussing various ways, but one common obstacle is whether there are precompiled binaries for all platforms and all bit sizes (32 and 64). In particular, I couldn't be sure whether there are Win64 binaries. I know Win64 can run 32 bit executables, but an 64 bit java virtual machine can't call 32-bit dlls.
BTW, can orx config write back to the source .ini files? Also, can I query where exactly a configuration value comes from? I mean, say I have a config section X that inherits from section Y and receives the field F from there. Can orx config tell me that F of X comes from Y?
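For reference, the situation described in the question would look like this in orx config syntax (the section and key names are just placeholders):

```ini
[Y]
F = 42

[X@Y]  ; X inherits from Y via the @ syntax
G = 10 ; F is not set here, so reading F from X yields 42, inherited from Y
```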
Cheers!
I saw your commits related to Sausage's suggestions but haven't had the opportunity to really look into it yet (probably not before coming back from vacation).
There are no precompiled 64-bit binaries, but that's simply because I don't have a Visual Studio capable of producing them installed yet; there shouldn't be any problem compiling them (after all, it works on Linux and OSX). I can look into it when I come back, as I'll probably install the VS 2013 Community edition and add all the appropriate binaries, including 64-bit.
Orx can write back to the originating .ini file (it's a parameter in the orxConfig_Save() function).
However note that you'll lose any special indentation and comments.
Orx can currently tell you whether a value is inherited or not (orxConfig_IsInheritedValue()), but it won't tell you the actual source section. I can easily add such an accessor if you need it (the info is available, it's just not exposed).
Cheers!
iarwain
It'd be great if the Win64 orx.dll were available and up-to-date online. Then the editor could just download the right file based on the currently running installation. I'm planning to access the functionality in the orx binary through Java Native Access (JNA), so that I won't have to compile (for all platforms) any native binaries.
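The JNA route would look roughly like this (a sketch only: the library name, the function subset, and the type mappings are my assumptions, and it needs the JNA jar on the classpath):

```java
import com.sun.jna.Library;
import com.sun.jna.Native;

// Hypothetical JNA binding for a couple of orxConfig entry points.
// JNA resolves "orx" to orx.dll / liborx.so / liborx.dylib per platform.
public interface OrxLibrary extends Library {
    OrxLibrary INSTANCE = Native.load("orx", OrxLibrary.class);

    int orxConfig_Load(String _zFileName);     // orxSTATUS mapped to int
    float orxConfig_GetFloat(String _zKey);    // orxFLOAT mapped to float
}
```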
Will I lose them on the modified line or the entire file?
That'd be great if it's not too much trouble. I think that information should be exposed to the user of the editor somehow.
Cheers!
As for the .ini file, you don't lose the content, just the comments and indentation, and yes, on the whole file (the whole file is overwritten). If your file doesn't contain any manual modification that shouldn't really matter.
I'll add the accessor next week, when back from vacation. Can you open an issue for that on bitbucket and assign it to me?
I added Win64 binaries (and builds to the build machines) yesterday, using the VS2013 setup.
There's no permanent binary online, but if you tell me where you'd like them to be, I can make sure the nightly builds are sent there. The current Win64 nightly build can be fetched here: http://sourceforge.net/projects/orx/files/orx/nightly/orx-dev-vs2013-64-nightly-2015-01-12.zip/download (the link only works for 24h, till the next nightly pass is done).
As for the 64-bit binaries, I've started to think that downloading the appropriate orx binary based on the user's environment is probably not a good idea. First of all, the config handling of the editor will be deeply coupled to the orx version I compile it against, so downloading at runtime has no functional benefit such as the ability to choose an orx version. Another reason is that it'll probably be much easier for me to simply pack all 6 binaries (3 platforms x 2 bitnesses) into the editor .jar file.
So, since the binaries will be packed into the .jar at compile time, there isn't much need for automation. I'll add it to the build steps that one should download all 6 precompiled binaries and extract them somewhere so that the build script can collect them and add them to the .jar.
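With the six binaries packed into the .jar, the editor only needs to map the running JVM to a resource path at startup. A minimal sketch (the `/native/...` layout and file names are my assumptions, not the actual packaging):

```java
// Sketch: map the running JVM's OS and bitness to the resource path of
// the packed orx binary. The path layout here is hypothetical.
public class OrxNativePicker {

    // Kept as a pure function of (osName, bits) so it is easy to test.
    static String resourcePath(String osName, String bits) {
        String os = osName.toLowerCase();
        if (os.contains("win")) {
            return "/native/windows" + bits + "/orx.dll";
        } else if (os.contains("mac")) {
            return "/native/osx" + bits + "/liborx.dylib";
        }
        return "/native/linux" + bits + "/liborx.so";
    }

    public static void main(String[] args) {
        // At runtime, derive the arguments from system properties:
        String path = resourcePath(System.getProperty("os.name"),
                System.getProperty("os.arch").contains("64") ? "64" : "32");
        System.out.println(path);
    }
}
```

The picked resource would then be extracted to a temp directory and loaded before any binding call is made.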
In conclusion, the way they're currently organized in sourceforge is perfectly fine.
BTW, just out of curiosity, is there any reason you're distributing the development bundles separately as vs2012, vs2013 etc.? Since orx is a C library, I'd expect the binaries from different compilers on the same platform to be compatible with each other. Am I missing something?
As for the multiple versions, you'd expect such compatibility but that wasn't always the case, especially between vs2005 and vs2008.
I haven't checked more recent versions but I got in the habit of doing this and it simplifies the building/packaging process as well.
The Orx Animation Editor needs YOU!
As you know, these days I'm attempting to call the orxConfig functions from Java, so that I can use them to parse the .ini files of existing projects. I'm trying to do that through a tool called JNAerator, which parses your C headers and emits pure Java files that can call into your C binary, without requiring you to compile any extra native binaries for the gluing (unlike JNI). In theory, the emitted .java files are platform-independent, and they should be able to call the functions from binaries compiled for the currently running platform. In the specific case of Orx, though, I'm worried that this might not work, since Orx uses billions of compile-time switches, which in effect makes the headers themselves dependent on the platform.
Anyway, it seems that I've managed to call the orxConfig functions from Java on my development machine (Linux x64), and I've prepared an orxjnatest.jar file that contains the orx binaries for all the desktop platforms as well as a simple Java class that tests the relevant orxConfig functions.
I've uploaded the orxjnatest jar to this link along with a test.ini file that the test tries to read from. I'd be glad if you could run it on your platform and see if it works. If it does, it will create pop-ups that say "testval read as float is: 45.0", "testval read as S32 is 45", and likewise for U32, S64, U64 and String.
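For reference, the test presumably reads something along these lines (only `testval = 45` is implied by the pop-up messages; the section name is an assumption):

```ini
[TestSection]
testval = 45
```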
Thanks!
I've not given up on JNAerator yet though, since it works so well when it does. I'll try to preprocess the orx headers, manually defining key symbols to mimic a 32-bit Linux environment, and then run JNAerator on the result, obtaining a 32-bit-compatible interface into the orx binary. Then I'll make sure the right interface gets used at runtime. This will probably be tricky, since the Java signatures will change this time. Still, it sounds like a nice challenge.
Regarding the compile flags, I can modify orxDecl.h so that you could manually override the calling conventions, hence removing some of the differences between platforms. You'd need to compile orx yourself with your own defines for orxFASTCALL, orxSTDCALL and orxCDECL, but that shouldn't be a problem.
Thanks for the offer, but that would defeat the goal of not having to compile any native binaries. You know how native binaries complicate the build system for a cross-platform project. I'll try to run JNAerator on the orx headers, mimicking a Win32 environment through predefined symbols, since that's the most demanding platform. I'll then try to combine the generated 32-bit and 64-bit java interfaces under a more general interface. Is orxDecl.h the only place that influences the function signatures and the calling conventions?
As for compiling the binaries, we could use orx's build machines to provide them to you if need be.
I don't do anything at orx's level, really. However, when releasing Little Cells, in addition to a small script that would create a desktop entry and select the correct architecture between x86 & x64, I'd package the extra dependencies as well:
- libstdc++
- libsndfile
- libopenal
- libgcc_s
That's about it.
I've tried to get the JNAerator-based Java-Orx interface working for a while, but I could never get it to work properly on 32-bit systems. So I've decided to either (1) compile orx to JavaScript with Emscripten and run the result inside Java's standard JavaScript interpreter, or (2) use SWIG to generate interface code. After seeing that knolan's orxEditor also needs similar bindings (for Python), I've thought maybe it's best to go with SWIG instead and generate bindings for any language we like.
If we decide to do it this way, I'd need to get the SWIG generated C/C++ sources compiled for orx's targets (only the desktop ones initially). Are you still willing to use the build server for this purpose?
If so, how would you like to proceed? In the end, we'll have a SWIG interface definition file and a script to generate all the sources for the bindings. The generation needs to be done once, but the sources need to be compiled for each platform.
For example; for python, SWIG will generate:
* orx_python_bindings.cxx
* orx.py
Then we need to compile orx_python_bindings.cxx for each platform, and package and distribute the binaries along with orx.py.
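As a concrete sketch of those two steps (every path, flag, and file name below is an assumption about the eventual setup, not a tested recipe):

```shell
# Step 1 (run once, on a single machine): generate the wrapper sources.
swig -python -c++ -o orx_python_bindings.cxx orx.i   # also emits orx.py

# Step 2 (run per platform): compile the same generated .cxx everywhere.
g++ -shared -fPIC orx_python_bindings.cxx \
    -I/path/to/orx/include $(python3-config --includes) \
    -L/path/to/orx/lib -lorx -o _orx.so
```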
Hi Enobayram!
Of course!
That's an excellent question. I'd love to package and distribute wrappers, and not only for Python, but I haven't given much thought to package naming or to where the SWIG scripts should live in the hierarchy. Maybe under the code/build folder? I have 0 experience with SWIG, so I don't know if there's any requirement at this level.
I'll be happy to modify the buildbot script once we have something working.
Python example:
while it would have been much nicer (and easily accomplished with SWIG) to write:
But at least, SWIG has done all the grunt-work of crossing the language borders. I guess we could improve the bindings for each language over time, but we have something to work with for now.
BTW, the current SWIG interface description file is so simple that it could be used for any of the other languages that SWIG supports (including clisp, csharp, d, go, lua, ocaml, php, ruby and others).
Creating bindings for the rest of Orx will probably take a bit longer, since exposing the callback registration to the target language is somewhat more tricky, but it's no big deal.
iarwain wrote: Great!
I guess I know one half of the equation, and you know the other, so, let's discover together shall we
I've attached a zip https://forum.orx-project.org/uploads/legacy/fbfiles/files/orxbinding.zip containing the following files:
As you might have noticed, I've included some files from my cmake build folder "build" in case you'd like to try it without installing cmake or SWIG, but here's how you'd do it from scratch:
If you'd like to just try the pre-generated sources I've sent you, please compile the .cxx files using something similar to how cmake does it (the names of the binaries are important etc.):
For Python:
My notes about the buildbot script:
1. I think SWIG should only be run in one place, and the generated .cxx files should be compiled on all the build slaves. Running SWIG on different computers runs the risk of generating slightly different interfaces, which will be a big problem since the users of the binding must see a single cross-platform, say, .py file.
2. We should gather all the compiled binaries and package them into a single library for the target language. I've tried this for Java and it works quite nicely. In the end you get an innocent looking .jar that contains everything for every platform. Naturally, this step will be quite language-dependent.
I did try to play around a bit with SWIG at about the same time, two days ago, but it was my first contact with it so I had a very blunt approach.
Here's the .i I wrote, which contains some windows-specific defines that I thought would be given to the command line instead (as well as the inclusion of windows.i, which should be conditional upon said defines).
https://forum.orx-project.org/uploads/legacy/fbfiles/files/orx-a5be451010123579863dcf5e8f8c1664.zip
There's no language-specific idiom, but, aside from some warnings, it looked like it was able to generate valid wrappers for the languages I tried (Python, Lua and Go).
I was also thinking of excluding all the "private" API in orx, in all the .h files, from __orxEXTERN__ to help with the process.
Now regarding the build steps you mentioned:
1- if we want to generate the wrappers on a single build machine, they'll have to be part of the hg repository and regenerated every time the headers change, very similar to the way the doxygen doc is currently maintained.
2- this is a bit more problematic, as build machines are not up all the time (the OSX/iOS ones are actually almost never up) and such inter-dependencies are rather tricky to set up in buildbot, albeit not unfeasible. Also, when you say you'd combine everything: how do you package windows/osx/linux binaries into a single library? Into a single package, I could see, but into a single library, I'm not sure how that works.
A first step could be to have separate packages per target architecture (ie. windows/osx/linux for all the languages), like it's apparently done by some other libraries (I just checked SFML and that's the approach they've taken)?
Nice try for a first attempt
Because SWIG knows what to do with a vector<string> (thanks to %include "std_vector.i")
Also, I've been reluctant to show it orxDecl.h and orxType.h directly, as that, in my mind, runs the risk of leaking something platform dependent to the generated wrappers. I instead want it to use the broadest types in the wrappers by lying to it about #defines such as orxFLOAT. In the end, the C compiler will see the true #defines for each platform, and compile the wrappers correctly.
By the way, why did you need to include windows.i? In general, I think we need to keep the .i files completely platform-independent. Think about it: almost all the target languages are platform-agnostic. So, e.g., if we're going to generate Python bindings, those bindings should work exactly the same way on all the platforms. In the end, the user's Python codebase will see a single orx.py, and it shouldn't matter which platform was used to generate that file. A single orx.py, multiple _orx.{so,dll,dylib}s. The only way to make that work is to generate the wrappers on a single platform and compile the very same generated .cxx on all the platforms (and make sure that it does compile).
Staying language-agnostic has the huge benefit of being able to generate bindings for any language; we can also, in time, focus on some languages and make the bindings more natural via conditionally included interface code.
Can you give a specific example of a function you'd like excluded? One option is to %ignore them individually, but if we can state a pattern in the function signature, we might also be able to ignore them all at once.
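For instance, if the private functions follow the <Module>_Setup/_Init/_Exit naming pattern mentioned earlier, SWIG's regex-based %rename can drop them all at once. A sketch (SWIG 2.0+ syntax; the pattern itself is an assumption about orx's naming):

```swig
// Hide everything matching the private lifecycle pattern in one go:
%rename("$ignore", regextarget=1, fullname=1) "^orx\\w+_(Setup|Init|Exit)$";

// Or ignore individual functions explicitly:
%ignore orxConfig_Setup;
%ignore orxConfig_Init;
%ignore orxConfig_Exit;
```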
So we have two challenges; getting the wrapper.cxxs into the build machines, and getting the binaries out of them. I guess we can manage the getting in bit, by making one build machine upload the wrappers to a common repository and the others pulling from there. Another option could be to let each of them run SWIG independently, while making sure that they generate the exact same wrappers.
How would you feel about handling the getting out bit "manually". I mean, it could be triggered manually for each release, once we know that all the slaves have uploaded their binaries.
Sorry, by a "single library", I meant a single package. Or leaving terms aside, I'd want a single entity, that works across platforms. I've just checked SFML, and they indeed have separate packages per architecture (which I dislike) for Python. On the other hand, their Java bindings are more like what I'd prefer. A single .jar file that contains the binaries for all the platforms.
IMHO, providing separate packages could be inconvenient for the users of the binding. For instance, if I make a game in Python, I'd like my users to just download the game and play, without needing to install a library for their platform. I actually don't know if people distribute python programs this way, so, it may be irrelevant for python, but in Java, downloading a single .jar and running it by double-clicking on it is common practice.
Mmh, which parts concern you precisely?
Windows.i allows SWIG to gracefully handle all the calling conventions, declspec() tags, etc.
I was thinking of having its inclusion conditional to the __orxWINDOWS__ define. But if you'd rather redefine all the relevant content manually, I don't see any problem with that either.
I do think supporting targeted languages idioms will prove beneficial for the end users, when we can. That being said, I'm not the target audience as it's unlikely I'm going to use any of those bindings myself.
Well doing it via __orxEXTERN__ is also beneficial to the users using the C/C++ includes directly, not just for the wrappers. Things like orx<Module>_Setup/_Init/_Exit are good candidates for that, their intent is definitely to be private, not public.
I see no problem with storing the wrappers directly with the source itself, on the same repository.
The separate architecture could be a first step though, as it's easier to put together.
That part can always be done by the developer themselves: they could get all the versions and ship whichever combination they want to their end user. Like the current linux32/64 packages at the moment: they are separate .zip files, but usually people making games will retrieve both and ship both versions with their game. It is an extra step for the developer, but only once (when they retrieve the package) and it could simplify the package generation at least in a first time.
Regarding the build slave, if you look at code/build/buildbot/install.txt, all the relevant steps should be there. Lemme know if you have any issues.
In your case, the slave would be named orx-mac-slave-enobayram and the password would be: pallas.
As for the "reserve", I agree: it's such an easy and harmless optimization that there's no excuse not to do it here. Aside from that, this code is going to talk to Python
In general, I really ignore most performance considerations while writing language bindings, since the activity is excessively wasteful to begin with.
Well, I'm probably not as comfortable with the orx codebase as you are, so whenever I see a platform-specific #define I'm afraid that it'll cause SWIG to emit different wrappers on each platform. Besides, I'd prefer to have complete control over how SWIG wraps, say, orxFLOAT, so that the wrappers work correctly and similarly on each platform.
I see, I guess Windows.i would be essential in a codebase that has declspecs and such all around the codebase, but thanks to your consistent use of macros, we should be able to avoid that problem without it. As I said, I'd prefer to stay away from anything that implies a platform dependency, so a "#define orxFASTCALL // empty" feels much more innocent since we know it'll work the same way on all platforms.
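Concretely, the idea is to feed SWIG neutral definitions up front, along these lines (these particular mappings are assumptions about what the wrappers need, not the real orx definitions):

```swig
// Neutralize calling-convention macros before SWIG sees any orx header:
#define orxFASTCALL
#define orxSTDCALL
#define orxCDECL

// Pin the basic types to one cross-platform representation,
// so the generated wrappers are identical everywhere:
typedef float orxFLOAT;
typedef int   orxS32;
```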
Ah, case in point, if those are private functions, what's the officially recommended way of using the config module in isolation? I've discovered that you need to call the following first:
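The actual call sequence was lost from the original post; judging from the <Module>_Setup/_Init/_Exit lifecycle mentioned above, it was presumably something like the following (a guess at the reconstruction, not the recovered snippet):

```c
/* Assumed reconstruction -- the actual snippet was lost from the post. */
orxConfig_Setup();
orxConfig_Init();
/* ... orxConfig_Load(), orxConfig_GetFloat(), etc. ... */
orxConfig_Exit();
```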
So you mean, you'd prefer to keep the generated code in repository? That really does greatly simplify the getting in problem
I definitely agree; I hate it when I unnecessarily complicate things. As Einstein said, “A clever person solves a problem. A wise person avoids it.” (Conclusion: I'm definitely not wise.) We could even manually upload the cross-platform binding packages for chosen releases, no need to complicate the build setup.
Great, I'll set it up as soon as possible.