Optimisation of GLES update times
-
I have been working on a Python port of OpenGLES for Pythonista here.
Rendering is fine, usually less than 0.005 seconds. The update time, however, shows as less than 0.02 seconds on most occasions, but visually it does not look that fast. I know where the bottleneck is in terms of CPU cycles and so on, but I don't really know how to optimise this (on iOS). My initial thought was to pass a lot of the functions to a background thread, as shown on line 320 of this file. However, this did not help.
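For reference, the background-thread attempt looked roughly like this (a minimal sketch, not the exact repo code; `step_physics` and `apply_results` are placeholder names):

```python
import threading, time

def step_physics():
    # placeholder for the real (expensive) physics update
    time.sleep(0.016)
    return []

def apply_results(results):
    # placeholder: hand the results back for the next draw
    pass

running = True

def physics_worker():
    # run the heavy update off the render loop
    while running:
        apply_results(step_physics())

threading.Thread(target=physics_worker, daemon=True).start()
```

One likely reason this didn't help: in CPython the GIL means a pure-Python worker thread cannot actually run in parallel with the render thread, so the work just moves rather than overlaps.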
I understand that a lot of this is the result of the Python-to-JavaScript bridge, and while I would like a physics library, I couldn't find a pure Python one, and I can't dynamically load frameworks or external libraries. So really what I am asking is: is there any way I can optimise what I have to work with the JS-to-Python bridge, or am I better off just spending the time to write my own physics library (which I really don't believe I could do)?
-
I do not understand your comment about sendObjectData. I can see where it is called on the JS end of things, and I see that the timing will depend on the number of world.bodies. I don't see any place where it is called from Python; all I see from Python is the call to startUpdates using exec_js. I would like to duplicate your results before theorizing, so can you explain how to do the timing tests?
-
In Util/Physics/CannonHelpers.js, uncomment lines 105 and 129, and comment out lines 127, 128, 134 and 135.
Then in main.py there is actually an issue which I just noticed. For this to work, line 127 stays the same; however, without applying any of these changes, line 127 should actually be within the if statement of Renderer.setup.
Then open the physics view (the button at the top left of the GLKView, next to the close button).
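(For what it's worth, the timing those lines toggle is presumably just wall-clock deltas around the update call; the Python-side equivalent would be something like the sketch below, with `world_step` standing in for the real call.)

```python
import time

def world_step(dt):
    # placeholder for the actual physics update being measured
    pass

start = time.time()
world_step(1.0 / 60.0)
elapsed_ms = (time.time() - start) * 1000.0
print('physics update took %.1f ms' % elapsed_ms)
```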
-
I have updated the repo to reflect some of the changes here. I believe that I have got the physics as fast as possible within the limitations of what I can do: about 60 ms for the physics loop. (I would like it considerably faster, but I'm not sure how.)
-
I notice that you added the ability to play with the number of objects that you send back to Python using a slider in the Physics pane. There is an obvious hiccup in the value of "time to send to python" every so many iterations, and the amount of overhead is definitely quite high. It seems like there would be a benefit to transmitting the entire list of objects as a single compressed string, so that you could make a single call. The best would be to figure out how to share the list of objects in memory and not transfer it at all, but I doubt there is a way to accomplish that.
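Something along these lines is what I mean (a sketch only; `on_physics_update` and `apply_body_transform` are placeholders for whatever callback mechanism the repo actually uses):

```python
import json

def apply_body_transform(body):
    # placeholder: would update the corresponding render node
    pass

def on_physics_update(payload):
    # payload arrives as one JSON string for the whole frame, e.g.
    # '[{"id": 0, "x": 1.0, "y": 2.0, "z": 3.0}, ...]'
    for body in json.loads(payload):
        apply_body_transform(body)

# usage: one bridge crossing per frame instead of one per body
on_physics_update('[{"id": 0, "x": 1.0, "y": 2.0, "z": 3.0}]')
```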
-
If I was using an ObjC JSContext then I could pass the objects around; however, that would mean I could not have any callbacks, as it is impossible to dynamically define them. That said, if I create a UIWebView object I could access its JSContext. That would mean creating another ObjC class, as I believe the ui module uses WKWebView. Is this correct, @omz? Or am I way off?
-
A WebView appears to be an SUIWebView. If you access the subviews of the ObjC object, it contains a UIWebView.
-
    import ui, objc_util

    w = ui.WebView()
    ctx = objc_util.ObjCInstance(w).subviews()[0].valueForKeyPath_('documentView.webView.mainFrame.javaScriptContext')

The result (ctx) is a JSContext!
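A quick sanity check on the retrieved context (assuming the snippet above assigned it to `ctx`, as written):

```python
# evaluate a trivial script to confirm the context is live
result = ctx.evaluateScript_('6 * 7')   # returns a JSValue
print(result.toDouble())                # -> 42.0
```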
-
Strangely, however, using setObject_forKeyedSubscript_ seems to be MUCH slower than just passing a JSON string. Perhaps this is because the Python-to-ObjC bridge has some overhead in creating an arbitrary object (using the same call with a JSON'd string is much faster). Also, I am not entirely sure how you would turn a generic object back into the Python equivalent. Perhaps other data structures would be faster, such as a generic ctypes Structure.
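For anyone who wants to reproduce the comparison, something like this is what I was timing (a sketch; `ctx` is the JSContext from above, and the data is dummy):

```python
import json, time
from objc_util import ns

bodies = [{'id': i, 'x': 0.0, 'y': 0.0, 'z': 0.0} for i in range(500)]

# route 1: let the ObjC/JS bridge convert the whole structure
t0 = time.time()
ctx.setObject_forKeyedSubscript_(ns(bodies), ns('bodies'))
print('setObject with list: %.1f ms' % ((time.time() - t0) * 1000))

# route 2: hand over a single JSON string and parse it in JS
t0 = time.time()
ctx.setObject_forKeyedSubscript_(ns(json.dumps(bodies)), ns('bodiesJSON'))
ctx.evaluateScript_('var bodies = JSON.parse(bodiesJSON);')
print('JSON string: %.1f ms' % ((time.time() - t0) * 1000))
```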
-
@JonB, using a JSON string is so much faster. Thank you for the suggestion. I will update the repo soon, after I clean up a little bit of the code....
-
@JonB and @Cethric - this seems to imply that the JS runtime is highly optimized for JSON encoding and decoding. That would not be a big surprise. The new method for transferring the data also seems to confirm that moving all the data in a single blob, with a single call back into Python, is the best strategy. I was thinking that the send_to_python call could also run faster if the JSON payload were handled via an HTTP POST-like mechanism rather than an HTTP PUT. It certainly is interesting.