What were you using to render the image?
I have never found a great way to do realtime image updates. It may be possible with some low-level library, but so far I haven't found one: iOS doesn't really allow direct access to screen buffers, except maybe in some low-level video libraries (where you are passed a buffer and expected to fill it). That deserves another look.
A few things I have found:
Converting to a low-res JPEG is a lot faster than, say, a bit-accurate PNG. I used that method in my first attempt at a matplotlib pinch/pan view:
https://github.com/jsbain/objc_hacks/blob/master/MPLView.py (see updateplt). In that method, a thread updates the ImageView using a reused BytesIO (reusing it saves a little time). updateplt runs in a thread, uses some locks to know when a conversion is complete, and basically tosses extra calls, so that it updates at as fast a rate as possible without hurting UI responsiveness. Also, for the updates while moving, I use a very low resolution JPEG for rendering, which is much, much faster than a bit-accurate PNG. That allows for a pretty responsive feel, maybe 20 fps or something, I forget; then it renders at full dpi after the touch ends. A rough sketch of that update pattern is below.
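(This is not the actual MPLView.py code, just a minimal sketch of the pattern, assuming Pythonista's ui module and Pillow; the class and method names are mine:)

```python
import io
import threading
from PIL import Image
import ui

class ImageUpdater (object):
    """Pushes PIL images into a ui.ImageView from a worker thread.
    One BytesIO is reused between frames, and updates that arrive while
    a conversion is in flight get coalesced into a single pending frame."""

    def __init__(self, imageview):
        self.imageview = imageview
        self.buf = io.BytesIO()          # reused; saves a little time
        self.lock = threading.Lock()
        self.pending = None
        self.busy = False

    def update(self, pil_image, low_res=True):
        with self.lock:
            self.pending = (pil_image, low_res)
            if self.busy:
                return                   # worker will pick up the new frame
            self.busy = True
        threading.Thread(target=self._worker).start()

    def _worker(self):
        while True:
            with self.lock:
                if self.pending is None:
                    self.busy = False
                    return
                pil_image, low_res = self.pending
                self.pending = None
            self.buf.seek(0)
            self.buf.truncate()
            if low_res:
                # cheap lossy frame while a touch is in progress
                small = pil_image.resize((pil_image.width // 4,
                                          pil_image.height // 4))
                small.save(self.buf, format='JPEG', quality=40)
            else:
                # bit-accurate frame once the touch ends
                pil_image.save(self.buf, format='PNG')
            # Pythonista tolerates setting this from a thread; marshal to
            # the main thread if your setup doesn't.
            self.imageview.image = ui.Image.from_data(self.buf.getvalue())
```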
While working on porting an Apple II simulator, I experimented with several ways of rendering a numpy array to a ui.Image, very similar to what you want. Here's an example speed comparison:
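(The original benchmark script isn't reproduced here; this is just a minimal sketch of that kind of comparison, assuming Pillow, numpy, matplotlib, and Pythonista's ui module:)

```python
import io
import time
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import ui

def via_fromarray(arr, buf):
    # PIL route: array -> PIL Image -> PNG bytes -> ui.Image
    buf.seek(0); buf.truncate()
    Image.fromarray(arr).save(buf, format='PNG')
    return ui.Image.from_data(buf.getvalue())

def via_imsave(arr, buf):
    # matplotlib route: array -> PNG bytes -> ui.Image
    buf.seek(0); buf.truncate()
    plt.imsave(buf, arr, format='png')
    return ui.Image.from_data(buf.getvalue())

def fps(render, arr, n=30):
    buf = io.BytesIO()
    t0 = time.time()
    for _ in range(n):
        render(arr, buf)
    return n / (time.time() - t0)

arr = np.random.randint(0, 255, (300, 300, 3), dtype=np.uint8)
print('Image.fromarray:   %.1f fps' % fps(via_fromarray, arr))
print('matplotlib.imsave: %.1f fps' % fps(via_imsave, arr))
```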
At the time, I think I found that matplotlib.imsave was faster than PIL's Image.fromarray, which is what is implemented in the screen.py there -- though I think in the latest Pythonista version, Pillow's fromarray is much faster. In the speed test, a 300x300 image is rendered to a ui.Image at about 12 fps on my old iPad 3 with Image.fromarray, versus 6 fps using matplotlib.imsave. I used a similar system that basically renders as fast as possible, and calls to update get queued/grouped if there is already an update in progress (you may be able to use screen.py directly, though you'd probably want to switch over to the faster method). I am actually using a custom view draw rather than an ImageView, though I forget if there was a good reason for that.

For your smudge application, one thing you might consider is an ImageView with a custom view on top (similar to the applepy screen above) that only renders the portion of the view that has been touched. I.e., you keep track of which pixels are dirty, and only render the bounding box of those dirty pixels. You keep track of the corner of that bounding box, so that you can then call .draw() with the right pixel offsets (see the sketch below). Then in the background, you render the "big" image, maybe when the finger lifts, and reset the dirty pixels.
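(A rough sketch of the dirty-rect idea; all names here are hypothetical, and the canvas is assumed to live in an HxWx3 uint8 numpy array:)

```python
import io
import numpy as np
from PIL import Image
import ui

class DirtyOverlay (ui.View):
    """Custom view that re-renders only the dirty bounding box."""

    def __init__(self, canvas):
        self.canvas = canvas             # HxWx3 uint8 array, shared with the model
        self.dirty = None                # (x0, y0, x1, y1), or None if clean
        self.buf = io.BytesIO()

    def mark_dirty(self, x0, y0, x1, y1):
        # grow the bounding box to cover the newly touched pixels
        if self.dirty is None:
            self.dirty = (x0, y0, x1, y1)
        else:
            a, b, c, d = self.dirty
            self.dirty = (min(a, x0), min(b, y0), max(c, x1), max(d, y1))
        self.set_needs_display()

    def draw(self):
        if self.dirty is None:
            return
        x0, y0, x1, y1 = self.dirty
        patch = self.canvas[y0:y1, x0:x1]
        self.buf.seek(0); self.buf.truncate()
        # lossy is fine here; the full image gets re-rendered later
        Image.fromarray(patch).save(self.buf, format='JPEG', quality=60)
        img = ui.Image.from_data(self.buf.getvalue())
        # draw the patch at its corner offset within the full canvas
        img.draw(x0, y0, x1 - x0, y1 - y0)

    def reset(self):
        # call this once the full image underneath has been re-rendered
        self.dirty = None
```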
A variation on this would be to divide the image up into small chunks, and render a ui.Image for a chunk only when that section has been affected; then your draw() method would always ui.Image.draw all of the chunks at the proper locations. That avoids ever having to render the big image. Something along these lines:
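(Again just a sketch with hypothetical names; the tile size would need tuning:)

```python
import io
import numpy as np
from PIL import Image
import ui

TILE = 64  # tile edge in pixels; tune for your image size

class TiledCanvasView (ui.View):
    """Keeps one cached ui.Image per tile; only touched tiles re-render."""

    def __init__(self, canvas):
        self.canvas = canvas                      # HxWx3 uint8 array
        h, w = canvas.shape[:2]
        self.rows = (h + TILE - 1) // TILE
        self.cols = (w + TILE - 1) // TILE
        self.tiles = {}                           # (row, col) -> ui.Image
        self.buf = io.BytesIO()

    def touch_pixel(self, x, y):
        # drop the cached tile containing (x, y); it re-renders in draw()
        self.tiles.pop((y // TILE, x // TILE), None)
        self.set_needs_display()

    def _render_tile(self, r, c):
        patch = self.canvas[r*TILE:(r+1)*TILE, c*TILE:(c+1)*TILE]
        self.buf.seek(0); self.buf.truncate()
        Image.fromarray(patch).save(self.buf, format='JPEG', quality=60)
        return ui.Image.from_data(self.buf.getvalue())

    def draw(self):
        h, w = self.canvas.shape[:2]
        for r in range(self.rows):
            for c in range(self.cols):
                img = self.tiles.get((r, c))
                if img is None:
                    img = self.tiles[(r, c)] = self._render_tile(r, c)
                # edge tiles may be smaller than TILE
                tw = min(TILE, w - c * TILE)
                th = min(TILE, h - r * TILE)
                img.draw(c * TILE, r * TILE, tw, th)
```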