Optimisation of GLES update times
I have been working on a Python port of OpenGLES for Pythonista here.
Rendering is fine, usually taking less than 0.005 seconds. The update time, however, while it reports as less than 0.02 seconds on most occasions, visually does not feel that fast. I know where the bottleneck is in terms of CPU cycles and so on, but I don't really know how to optimise this (on iOS). My initial thought was to pass a lot of the functions to a background thread, as shown on line 320 of this file. However, this did not help.
So really what I am asking is: is there any way I can optimise what I have to work with the JS-to-Py bridge, or is it better to just spend the time writing my own physics library (which I really don't believe I could do)?
Also... are you passing entire objects back and forth, or just each object's 6DOF?
Another option might be to use the iOS SpriteKit for the physics. That involves writing your own bindings, but it seems like you are a pro now :)
I updated the repo to reflect the work, including a new timer system for the JS components alone. It looks like it only takes 0.003 s to get the position and 0.003 s to get the rotation, with the rest of the time being lost stepping the simulation at 0.015 s.
I am only sending a single integer id: the index of the body in a JS-stored list. The only times the information is bigger than that are when getting the location vector (3 floats) and the rotation quaternion (4 floats).
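For reference, the traffic per call is tiny. A sketch of the payloads (the names and JSON framing here are illustrative assumptions, not the repo's actual bridge code):

```python
import json

# Illustrative payloads only -- the real bridge code lives in the repo.
# Python -> JS: just the integer index of the body in the JS-side list.
request = json.dumps(6)

# JS -> Python: at most a vec3 (position) or a quat (rotation).
position_reply = json.dumps([0.0, 1.5, -3.0])
rotation_reply = json.dumps([0.0, 0.0, 0.0, 1.0])

pos = json.loads(position_reply)  # 3 floats
rot = json.loads(rotation_reply)  # 4 floats
```

So the per-call data is at most four floats; the cost is in the number of bridge crossings, not the payload size.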
Thank you for pointing out that Ammo is not actually optimised for JS; I was not aware. I had actually looked at using cannon.js originally but didn't use it at the time because I was still writing the Py-to-JS bindings and it didn't work. (Looking at the code now, I know that was more my fault for incorrect code than anything else...)
I think the only reason I would not use the SpriteKit physics engine is that it is 2D, not 3D; otherwise that would probably be the better option, as it is designed for iOS.
Thanks for the help
Sorry, I meant to write SceneKit, not SpriteKit. I have not really read much about it; I'm not sure how easy it would be to use, or if it could even be used as the backend with rendering.
All good, that makes a bit more sense. I will look into it as a backup option.
My understanding, though, is that if I were to use SceneKit physics then I would use all of SceneKit and ditch the OpenGLES port (or at least push it to the background anyway...).
Either way it is a possibility and I will consider looking into it.
Thanks for the information
I have just downloaded and played with the repo and started looking at the code but I can't understand how it all works yet. When you have a few spare cycles, could you explain how this architecture works? It would also be useful to know what areas of the code have been instrumented and how that all works.
If by "explain the architecture" you mean work my way down the folder structure explaining what everything is for, I would be willing to do so. Otherwise you might need to explain a bit more about what you want to know (just so I don't go off on an unrelated tangent :/ ). Once I have a structure that I am happy with, I will go through and properly document all of the code.
Just a heads up: on my local copy I am reworking the physics component to use CannonJS, as per one of @JonB's suggestions. Initial tests show that it will be a little faster. The only issue is that if I make the update loops separate (one for physics, in the JS environment, and one for all the Python work) it could be difficult to keep them synced. However, I will not be updating the public repo unless there is either an issue I can't resolve or I get it working.
I am glad that there are at least a few people who are interested in this project. (I just hope I haven't taken on more than I can handle...)
By architecture I am referring to how the major components are designed and how they communicate with each other. What are the major moving parts?
I have absolutely been watching your project evolve and noticed that you had written some simple tools to translate C header files into bindings. This could be very useful to others who are working on other frameworks. The whole work breakdown is pretty interesting.
Please note that I have updated the repo to reflect this.
There are really two major parts to this, each with subcomponents to help.
The OpenGLES side has the GLKit package to handle the generation of a GLKView, EAGL to handle context creation, and GLES, which has a sub-package, headers, containing all the boilerplate code for OpenGLES. For EAGL there is still more to be done, with the goal of completely implementing it rather than just doing enough to make a GLKView. For GLES I will eventually redo the structure so that the contents of headers is imported directly, i.e. instead of "from OpenGLES.GLES import gles1" you would do "from OpenGLES.GLES import (gl, glext, glplatform)". It might be more work for the end developer, but if glext and glplatform are not required then there is no point importing them.
Util is a collection of utilities and helpers, both for the GLKView and for rendering objects.
Physics lives in the subpackage Util/Physics, which could really be a project by itself, as all it does is handle the Bullet physics engine and has nothing to do with rendering.
In terms of how it is all designed: from my point of view, because there was no prior planning, it isn't. In some areas this has come back to bite me, and in others I have been lucky. (This is a major learning point for me, as it is the biggest project I have done, and I am really using it to see what I should do differently if there is ever a next time.)
The modelling is done in Python, mainly under Util/Model.py, and again any object that needs rendering is handled there.
I will look into both documenting my code for the next commit and creating a wiki page to show the structure.
The tools, while written for the purpose of translating OpenGL / OpenGLES header files, could easily be modified to support any header. I was at one point attempting to write a precompiler of sorts for it, so it would pay attention to the #if, #elif and #else statements, but I had to stop as I could not get it to work how I wanted. I will definitely look into it again.
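To illustrate the idea (this is not the actual translator from the repo, just a hypothetical sketch): a single regex pass over a simple C prototype can recover the pieces a ctypes-style binding needs.

```python
import re

# Hypothetical sketch of header-to-binding translation -- not the repo's tool.
# Matches simple prototypes like: void glClearColor(GLfloat r, ...);
PROTO = re.compile(r'(\w+)\s+(gl\w+)\s*\(([^)]*)\)\s*;')

# Partial type mapping for illustration; a real tool would cover all GL types.
C_TO_CTYPES = {'void': 'None', 'GLenum': 'ctypes.c_uint',
               'GLint': 'ctypes.c_int', 'GLfloat': 'ctypes.c_float'}

def translate(line):
    """Return (name, restype, argtypes) for a matching prototype, else None."""
    m = PROTO.match(line.strip())
    if not m:
        return None
    ret, name, args = m.groups()
    argtypes = [C_TO_CTYPES[a.split()[0]] for a in args.split(',') if a.strip()]
    return name, C_TO_CTYPES[ret], argtypes

result = translate('void glClearColor(GLfloat r, GLfloat g, GLfloat b, GLfloat a);')
```

A preprocessor-aware version would additionally track #if/#elif/#else state while walking the file, only emitting bindings for the active branch.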
Got the latest repo and see the update to using cannon.js. If you run the test in Util/Physics/init.py there is some kind of background activity that starts logging BULLET errors. It might be good to add some code there to shut down the webview after the timing test. This seems to show that Cannon.js is running its own timer-based routine even when you are not driving the simulation from Python code. The Cannon engine seems significantly faster than Ammo.
It looks like you have done a major overhaul of how the animation driver ("step") works. Instead of lots of calls to exec_js, you now have just a single Physics.PhysicsWorld.js.eval_js('startUpdates();') call. This, combined with the switch to Cannon, seems to have given you a big speed increase.
I cannot reproduce those errors (maybe just send the output of one); however, try commenting out line 182 of Util/Physics/__init__.py. The program is meant to shut down after execution; however, I have only added a function to do this, I have not properly checked that it works correctly.
I am not sure if I can minimise the data much more.
The speed increase has been noticeable, thankfully. However, something that I don't get is the function sendObjectData on line 53 in
I do not understand your comment about sendObjectData. I can see where it is called on the JS end of things, and see that the timing will depend on the number of world.bodies. I don't see any place where it is called from Python; from Python all I see is the call to startUpdates using exec_js. I would like to duplicate your results before theorising, so can you explain how to do the timing tests?
In Util/Physics/CannonHelpers.js, uncomment lines 105 and 129, and comment lines 127, 128, 134 and 135.
In main.py there is actually an issue which I just noticed. For this to work, line 127 stays the same; however, without applying any of these changes, line 127 should actually be within the if statement.
Then open the physics view. (The button on the top left of the GLKView next to the close button)
I have updated the repo to reflect some of the changes here. I believe that I have got the physics as fast as possible within the limitations of what I can do, at about 60 ms for the physics loop. (I would like it considerably faster, but I'm not sure how.)
I notice that you added the ability to play with the number of objects that you send back to Python using a slider in the Physics pane. There is an obvious hiccup in the value of "time to send to python" every so many iterations, and the amount of overhead is definitely quite high. It seems like there would be a benefit to transmitting the entire list of objects in a single compressed string, so that you could make just a single call. Best of all would be to figure out how to share the list of objects in memory and not transfer it at all, but I doubt there is a way to accomplish that.
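The single-blob idea can be sketched like this (field names and structure are assumptions for illustration, not the repo's actual schema):

```python
import json

# Sketch of batching: rather than one bridge call per body, serialise the
# whole world state into one JSON string, cross the bridge once, decode once.
bodies = [
    {"id": i, "pos": [float(i), 0.0, 0.0], "rot": [0.0, 0.0, 0.0, 1.0]}
    for i in range(3)
]

blob = json.dumps(bodies)   # one string crosses the JS/Python bridge
states = json.loads(blob)   # one decode recovers every body's state
```

The win is amortising the fixed per-call bridge overhead over all bodies, at the cost of serialising state the Python side may not need that frame.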
If I was using an ObjC JSContext then I could pass the objects around; however, that would mean I could not have any callbacks, as it is impossible to dynamically define them. That said, if I create a UIWebView object I could access its JSContext object. That would mean creating another ObjC class, as I believe the ui module uses WKWebView. Is this correct @omz? Or am I way off?
A ui.WebView appears to be an SUIWebView. If you access the subviews of the objc object, it contains a UIWebView. The result is a JSContext!
Strangely, however, using setObject_ForKeyedSubscript seems to be MUCH slower than just passing a JSON object. Perhaps this is because the Python-to-ObjC bridge has some overhead in creating an arbitrary object (using it with a JSON'd string is much faster). Also, I am not entirely sure how you would turn a generic object back into the Python equivalent. Perhaps other data structures are faster, such as a generic ctypes Structure.
@JonB, using a JSON string is so much faster, thank you for the suggestion. I will update the repo soon, after I clean up a little bit of the code.
@JonB and @Cethric - this seems to imply that the JS runtime is highly optimised for JSON encoding and decoding, which would not be a big surprise. The new method for transferring the data also seems to confirm that moving all the data in a single blob, with a single call back into Python, is the best strategy. I was thinking that the send_to_python call could run faster if the JSON method were handled via an HTTP POST-like mechanism rather than an HTTP PUT. It certainly is interesting.