omz:forum



    Real time audio buffer synth/Real time image smudge tool

    Pythonista
    • Mederic
      Mederic last edited by Mederic

      I will clean my code a little bit and post a link later.

I had already tried that trick with the small imageView around the cursor. The thing is, when doing big strokes with the smudge tool, the small imageView isn’t big enough (and making it bigger over time ends up causing lag, just like when there is no small imageView), so I had to detect when the cursor leaves the small imageView area during the stroke, update the big image when that happens, and then move the small imageView back to the cursor. For some reason, it didn’t really improve anything compared to not using a small imageView. Somehow the big image updates were still expensive, and although I could make them happen less often by making the small imageView bigger, the small imageView updates would then cost more, so in the end there was no real improvement.

However, instead of a small imageView, directly using a custom view allowed me to “convert the dirty portion of the numpy array and draw it” in one line in the draw def. Somehow it improved things a lot, but it still caused time glitches/lag whenever the big image updates were needed.

Now I am experimenting with several small custom views that basically relay each other when the cursor leaves their respective areas, so that I only have to update the big image once the last small view has been used. I am using 8 views and it’s almost perfect.

      I will try your variation though. It could definitely be perfect as well.

      Btw, I use fromarray to render the image ;)
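
Roughly, the draw def of one of those mini views looks like this (a simplified sketch; pil2ui and the attribute names here are just illustrative, not my exact code):

import io
import numpy as np
from PIL import Image
import ui

def pil2ui(pil_img):
    # convert a PIL image to a ui.Image via an in-memory PNG
    b = io.BytesIO()
    pil_img.save(b, format='PNG')
    return ui.Image.from_data(b.getvalue())

class MiniView(ui.View):
    # small view that only redraws the "dirty" rectangle of a shared uint8 array
    def __init__(self, array, **kwargs):
        super().__init__(**kwargs)
        self.array = array        # shared (h, w) uint8 numpy array
        self.dirty = None         # (x, y, w, h) rectangle in array coordinates

    def draw(self):
        if self.dirty is None:
            return
        x, y, w, h = self.dirty
        # convert only the dirty portion of the numpy array and draw it
        img = pil2ui(Image.fromarray(self.array[y:y + h, x:x + w]))
        img.draw(0, 0, self.width, self.height)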

And I don’t know anything about IOSurface and CALayer; I am (kind of) new to these kinds of libraries.

      • Mederic
        Mederic last edited by Mederic

        So I cleaned up my code. I did my best but probably didn’t respect some conventions...
        I wrote a lot of comments and explanations though.

On my iPad Pro 12.9, it is close to real time. It’s not as responsive as the Procreate smudge tool, which I find fantastic, but in my opinion (and taste) it’s still better than a lot of smudge tools I’ve tried in other apps, so I am kind of happy with it :)

        You can use the Apple Pencil by setting applePencil=True
        You can see the debug mode by setting debug=True

        https://gist.github.com/medericmotte/a570381ca8adfcec6149da2510e81da2

By the way, I tried the method where you split the canvas into several sub views in a grid; it seemed like having too many views at the same time also causes lag.

        • enceladus
          enceladus last edited by

Maybe try to use scene and shader.

          • Mederic
            Mederic last edited by Mederic

It might work, but then I’d still have to reload the texture as the numpy array changes. Maybe it would be faster, though.

            To avoid the constant reloading I would have to compute the smudge effect directly in the OpenGL code, but it has two issues for me:

• The texture would have to be stored with float data, because smudging int8 causes some ugly spots around the white areas.
• I don’t know how I would change the texture in real time directly within the OpenGL code. Do you know a way to do that? I thought textures were read-only there, but I do remember hearing about OpenGL image buffers. Is that possible in Pythonista?
            • Mederic
              Mederic last edited by

              There might be a simple way to do it with the render_to_texture function. I don’t know how fast it would be but I am gonna give it a try today.

              • enceladus
                enceladus last edited by

                Look at Examples/games/BrickBreaker.py (particularly wavy option)

                • enceladus
                  enceladus last edited by

FWIW, my GitHub repository contains a few basic examples of scene and shader usage. https://github.com/encela95dus/ios_pythonista_examples

                  • Mederic
                    Mederic last edited by Mederic

Yeah, I’ll try that, but again, it’s the speed of render_to_texture() that will tell whether it’s enough for real time. A function like wavy needs a texture of the image at frame n to display the image at frame n+1, but then I need to render that image back to a texture so that the shader can process it and display the image at frame n+2, etc.

                    • Mederic
                      Mederic last edited by Mederic

Actually, now that I think about it, the problem is that scene and shaders only compute their display at 60 fps, and I think that’s not enough, because for fast strokes you need to compute more often than that (otherwise you get holes or irregularities between the smudge spots).

In my code I use a while(True) loop to compute the smudging (outside of the ui class), and its rate is only limited by the (very short) time numpy takes to add arrays.

By the way, I know it’s not good to use while(True) loops that way, but I don’t know what the good practice is to achieve the equivalent at the same speed. Because of that loop, for example, right now when I close the ui window it doesn’t stop the code, and I have to stop it manually with the cross in the editor. What should I do about that?

                      • Mederic
                        Mederic @JonB last edited by Mederic

                        @JonB :

So, back to the topic of real time audio: I modified your code to produce a sawtooth instead of a sine, and then implemented a simple lowpass filter. There is an unwanted vibrato sound in the background for high frequencies, which is probably an aliasing artifact due to the program being unable to keep a perfect rate? I am not sure. If I set the sampleRate to 44100, the vibrato seems less pronounced (which kind of supports my aliasing assumption? Again, not sure) but is still noticeable. Interestingly, with sampleRate = 88200 the unwanted vibrato was gone. The thing is, when you change the sampleRate, the filter actually behaves differently: a higher sampleRate with the same filter algorithm tends to make its cutoff higher. So, for the comparison to be “fair”, with a sampleRate of 88200 I replaced the 0.9 in the render method below by 0.95, and unfortunately the unwanted vibrato was back :(

I also thought maybe it was a problem with data precision and error accumulation, so I tried scaling up the data in the render method and renormalizing it at the end for the buffer, but that didn’t fix the issue.

To hear the unwanted vibrato with an 11000 sampleRate, all you need to do is add an attribute

                        self.z=[0,0]

in the AudioRenderer class and then change the render method this way (to get a filtered sawtooth):

def render(self, buffer, numFrames, sampleTime):
    '''override this with a method that fills buffer with numFrames'''
    # print(self.sounds, self.theta, v.touches)
    # The scale factor was to try to win some precision with the data; scale=1 means no scaling.
    scale = 1
    z = self.z
    for frame in range(numFrames):
        b = 0
        for t in self.sounds:
            f, a = self.sounds[t]
            theta = self.theta[t]
            # dTheta = 2*math.pi*f/self.sampleRate
            dTheta = (f * scale) / self.sampleRate
            # b += math.sin(theta) * a
            b += ((theta % scale) * 2 - scale) * a   # sawtooth in [-a, a]
            theta += dTheta
            # self.theta[t] = theta % (2*math.pi)
            self.theta[t] = theta % scale
        # two cascaded one-pole lowpass stages
        z[0] = 0.9 * z[0] + 0.1 * b
        z[1] = 0.9 * z[1] + 0.1 * z[0]
        buffer[frame] = self.z[1] / scale
    self.z = z
    return 0
                        
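For reference, a rough way to see how the cutoff of those one-pole smoothers scales with the sample rate (just a sketch, assuming a smoother of the form y[n] = a*y[n-1] + (1-a)*x[n] as in the code above):

import math

def cutoff_hz(a, fs):
    # approximate cutoff of y[n] = a*y[n-1] + (1-a)*x[n], using a ≈ exp(-2*pi*fc/fs)
    return -math.log(a) * fs / (2 * math.pi)

def rescale_pole(a_old, fs_old, fs_new):
    # coefficient that keeps (approximately) the same cutoff at a new sample rate
    return a_old ** (fs_old / fs_new)

print(cutoff_hz(0.9, 11000))            # ≈ 184 Hz
print(rescale_pole(0.9, 11000, 88200))  # ≈ 0.987
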
                        • JonB
                          JonB @Mederic last edited by JonB

                          @Mederic Re: rendering numpy arrays, iosurface/calayer is amazingly fast:

                          Here is an iosurface wrapper that exposes a numpy array (w x h x 4 channels) and a ui.View:
                          https://gist.github.com/87d9292b238c8f7169f1f2dcffd170c8

See the notes regarding the .Lock context manager, which is required.
Just manipulate the array inside a with s.Lock(): block, and it works just like you would hope.

                          On my crappy ipad3, I get > 100 fps when updating a 50x50 region, which is probably plenty fast.

edit: I see you are using float arrays. Conversion from float to uint8 is kinda slow, so that is a problem.

                          • JonB
                            JonB @Mederic last edited by

                            @Mederic regarding while True:

                            doing while v.on_screen:
                            or at least checking on_screen is a good way to kill a loop once the view is closed.
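
A minimal sketch of what I mean (compute_smudge is just a placeholder for whatever your loop body does):

import threading
import ui

def compute_smudge():
    # placeholder for the per-iteration numpy smudge update
    pass

v = ui.View()
v.present('fullscreen')

def worker():
    # same idea as the while(True) loop, but it exits on its own
    # as soon as the view is closed
    while v.on_screen:
        compute_smudge()

threading.Thread(target=worker, daemon=True).start()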

                            • Mederic
                              Mederic last edited by Mederic

                              Ok thank you.

I ran your code and it is very fast, but I have a question (and since I am still not familiar with the libraries you use, it might take me a while to figure out the answer on my own):

The printed fps is around 1000 on my iPad Pro.

Now, I computed the fps of my PythoniSmudge code, and I realize it’s important to distinguish two fps figures here:

                              • The computation fps of my while(True) loop was around 300
                              • The fps of my Views (computed by incrementing an N every time a draw function is over) was 40

That is important because the first fps makes sure the smudge tool is internally computed often enough to avoid irregularities and holes along the path in the final image (nothing to do with lag), which is the case with a computation fps of 300, while the second fps makes sure that my eye doesn’t see lag on the screen, which is the case as soon as the view fps is above 30.

My question is: what does your fps = 1000 measure exactly? It seems to be only the computation fps, but maybe I am wrong and it somehow includes the view fps as a part of it. I would really need to isolate the view fps, because that is what causes the sensation of lag.

If 1000 really IS the view fps, then it’s more than enough.
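
For reference, this is roughly how I measure both numbers (a sketch; FPSCounter is just a little helper made up for this explanation):

import time

class FPSCounter:
    # call tick() once per event and read .fps for a once-per-second estimate
    def __init__(self):
        self.n = 0
        self.t0 = time.time()
        self.fps = 0.0

    def tick(self):
        self.n += 1
        dt = time.time() - self.t0
        if dt >= 1.0:
            self.fps = self.n / dt
            self.n = 0
            self.t0 = time.time()

# one counter ticked inside the while(True) loop gives the computation fps,
# another ticked at the end of a view's draw() gives the view fps
compute_fps = FPSCounter()
view_fps = FPSCounter()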

                              • JonB
                                JonB @Mederic last edited by JonB

I believe it is the actual view FPS, but you might want to increase N to get better timing. The redraw method should effectively block while the data is copied over.

What you would do is have a single view, backed by the iosurface. You could try s.array[:,:,0]=imageArray, but that may be slow since it copies the entire image.

Better would be to determine the affected box on each touch_moved, then only copy those pixels:

with s.Lock():
    s.array[rows, cols, 0] = imageArray[rows, cols]

(where rows and cols are indexes of the affected pixels)

To keep it monochrome, you would want your imageArray to be sized (r, c, 1) so that broadcasting works:

with s.Lock():
    s.array[rows, cols, 0:3] = imageArray[rows, cols]

                                This way you only copy over and convert the changed pixels each move.
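
A rough sketch of such a touch_moved (attribute names like brush_radius, image_array and s are placeholders, not from the gist):

def touch_moved(self, touch):
    # bounding box of the segment between the previous and current touch
    # location, padded by the brush radius and clipped to the image,
    # so only the affected pixels are copied into the surface
    r = self.brush_radius
    x0, y0 = touch.prev_location
    x1, y1 = touch.location
    h, w = self.image_array.shape[:2]
    left = max(int(min(x0, x1) - r), 0)
    right = min(int(max(x0, x1) + r) + 1, w)
    top = max(int(min(y0, y1) - r), 0)
    bottom = min(int(max(y0, y1) + r) + 1, h)
    rows, cols = slice(top, bottom), slice(left, right)

    with self.s.Lock():
        # image_array is sized (h, w, 1) so it broadcasts across the RGB channels
        self.s.array[rows, cols, 0:3] = self.image_array[rows, cols]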

                                • JonB
                                  JonB last edited by

By the way... you might get acceptable performance with your original code if you use pil2ui with JPEG instead of PNG format during touch_moved, then switch over to PNG during touch_ended.
Also, you might eke out some performance by using a single overlay view, but rendering N ui.Images that are drawn during the view's draw method. That way you don't have the overhead of multiple views moving around; you would keep track of the pixel locations yourself. See ui.Image.draw, which lets you draw the image into the current drawing context. I think draw itself is fast, if you have the ui.Images already created.

                                  That said, the iosurface approach should beat the pants off these methods.
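
For the jpeg-vs-png idea, something along these lines (a sketch; this assumes the usual BytesIO-based pil2ui helper):

import io
import ui

def pil2ui(pil_img, fmt='PNG'):
    # JPEG encodes much faster than PNG, so pass fmt='JPEG' while the
    # stroke is in progress and re-render as PNG once the touch ends
    b = io.BytesIO()
    pil_img.save(b, format=fmt)
    return ui.Image.from_data(b.getvalue())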

                                  • Mederic
                                    Mederic last edited by Mederic

Ok, I am going to give it a try. Regarding the N ui.Images method, I actually did that before and it was lagging. I think that’s because at every frame it dynamically draws the N ui.Images, as opposed to my current approach where, at each frame, set_needs_display() is called for only one miniview while the other ones stay “inactive” or “frozen”.

Also, I got a big improvement by only sending a set_needs_display() request every 3 touch_moved calls.

                                    • Mederic
                                      Mederic last edited by Mederic

Regarding the use of a float array: it’s kind of necessary for the smudge to look good; otherwise, with int8, you get ugly, persistent spots around the white areas. The cause is that if, for instance, you have a pixel of value 254 next to a pixel of value 255 and you smudge over them, then at the first frame the 254 pixel will try to become, say, 254.2, but since it is an integer it stays equal to 254. The same thing happens at the second frame, the third frame, and so on: it keeps trying to move toward 255 but fails, and gets completely absorbed at 254. In the end, the smudge won’t have affected it at all, and it gets worse: it will stay equal to 254 no matter how many strokes you make over it. On the other hand, if you use floats, then at the first frame the 254.0 pixel becomes 254.2 (rounded to 254 for display, but kept as 254.2 in the array), at the second frame it becomes, say, 254.4, then maybe 254.55, which is displayed as a 255 pixel, so the smudge really does affect it correctly.
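
You can see it with a couple of lines of numpy (just an illustration of what I mean):

import numpy as np

int_pix = np.uint8(254)
float_pix = 254.0
for _ in range(10):
    # pull the pixel 20% of the way toward its 255-valued neighbour
    int_pix = np.uint8(int_pix + 0.2 * (255 - int_pix))   # 254.2 truncates back to 254
    float_pix = float_pix + 0.2 * (255.0 - float_pix)      # keeps the fractional progress

print(int_pix, round(float_pix, 2))   # 254 vs about 254.89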

                                      • Mederic
                                        Mederic last edited by Mederic

                                        I tried with IOSurface, and it’s really extremely fast!

I didn’t have to change my code too much, so it will only take me a few minutes to clean things up and post a link!

                                        Thank you!!!

                                        • Mederic
                                          Mederic last edited by Mederic

                                          Here it is!!!

                                          https://gist.github.com/medericmotte/37e43e477782ce086880e18f5dbefcc8

                                          It made the code so much simpler and faster!

                                          Thank you so much!

PS: Have you seen my post above about the “aliasing” vibrato in the real time audio buffer code? I don’t want to take too much of your time, but now that one problem has definitely been fixed, I kind of hope the same for the audio :)

                                          • JonB
                                            JonB last edited by

I have not run the audio code yet.. but two possibilities:

1. Precision issue. The samples are float32, not double. For filtering you probably want to work in doubles before writing.
2. Overrun: if your code falls behind, iOS will skip frames. There are some fields in the timecode structure that help tell you the time the buffer will start, etc., but I haven't dug into them.
   Going to a higher sample rate means your code has less time to produce the same number of samples, increasing the chance of overrun. You could compare the time that render takes to numFrames/sampleRate; render time should be less than, say, 80% of the actual audio time. That's why I started with a low sample rate.

I tried speeding things up with numpy, but got bad results... care needs to be taken with how time is treated. Since frequency and amplitude change discretely, there might be a better design that ensures continuity of samples.

3. Have you tried writing your samples to a wave file and then playing it back? I.e., are your filter and logic set up correctly?

Also, for the sawtooth, I would think scaling the amplitude correctly is super important, because the signal must stay between -1 and +1; otherwise you saturate, and that produces harmonics. I haven't really looked at your code, but it might be worth mocking up the code to write to a wave file and see.
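
To illustrate, an offline mock-up could look something like this (a sketch using the standard wave module, with the same sawtooth/filter idea as your render method above):

import struct
import wave

sample_rate = 11000
f, a = 440.0, 0.5                          # frequency and amplitude (keep |signal| <= 1)
theta, z = 0.0, [0.0, 0.0]
frames = []
for _ in range(sample_rate * 3):           # three seconds of audio
    b = ((theta % 1.0) * 2 - 1.0) * a      # sawtooth in [-a, a]
    theta = (theta + f / sample_rate) % 1.0
    z[0] = 0.9 * z[0] + 0.1 * b            # two cascaded one-pole lowpass stages
    z[1] = 0.9 * z[1] + 0.1 * z[0]
    s = max(-1.0, min(1.0, z[1]))          # clamp, just in case
    frames.append(struct.pack('<h', int(s * 32767)))

with wave.open('saw_test.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)                      # 16-bit samples
    w.setframerate(sample_rate)
    w.writeframes(b''.join(frames))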
