• JonB

    cvp, that part looked ok to me, though I didn't try to run it.

    defs can reference variables that don't yet exist when the def statement runs; the names only need to exist in the enclosing scope by the time the function is actually called.

    Maybe not the best practice for clarity, but otherwise seems ok to me. I think he just forgot the comment symbol?

    def a():
        try:
            print(b)  # b need only exist by the time a() is called
        except NameError:
            print('no success.')
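    A minimal runnable sketch of that late-binding behavior (the name `b` here is just an illustrative placeholder):

```python
def late():
    # b is looked up when late() is CALLED, not when the def statement runs
    print(b)

try:
    late()  # called before b exists
except NameError:
    print('too early')

b = 'success'
late()  # now b exists, so this prints: success
```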

    posted in Pythonista
  • JonB

    If you have an error you don't understand, print the traceback, and paste back here!

    I will say that in your first code you had a line reading:


    Which of course is not valid Python.

    In your second example you wrote

       ^ # some comment...

    The leading caret before the comment character is also invalid Python.

    I'm wondering if you just forgot the leading comment in both cases.

  • JonB

    This may be obvious, but be sure to set the frameLength prior to passing the buffer to the recognizer, otherwise it will keep getting duplicate data.

    What happens, I think, is that the buffer contains all of the samples, including the initial 0.375 sec or so. If you change frameLength to 1024, you are telling the engine how many samples you consumed -- it wants to keep that buffer the same size, and never skip, so it calls you sooner next time, with everything shifted left and new samples appended at the end. The least latency would be those end samples. This takes the latency down from 0.375 sec for me to maybe 20-30 msec.

    # sketch: the code lines under the original comments are reconstructed;
    # 'request' is assumed to be your SFSpeechAudioBufferRecognitionRequest
    def handler(_cmd, buffer_ptr, samptime_ptr):
        if buffer_ptr:
            buffer = ObjCInstance(buffer_ptr)
            samptime = ObjCInstance(samptime_ptr)
            # sample time in sec of start of buffer, comparable to time.perf_counter;
            # difference these to see latency. sampleTime() should always increment
            # by the frameLength you set... if by more, your processing is too slow
            t = samptime.sampleTime() / samptime.sampleRate()
            # this just sets up pointers that numpy can read... no actual read yet
            data_np = np.ctypeslib.as_array(buffer.floatChannelData()[0],
                                            shape=(buffer.frameLength(),))
            samples = data_np[-1024:].copy()  # LAST N: most recent, least latency
            buffer.setFrameLength_(1024)  # how many samples we consumed... next
            # call gives samples [1024:] plus 1024 new ones. append the buffer
            # AFTER setting frameLength, or you keep feeding repeated data
            request.appendAudioPCMBuffer_(buffer)

  • JonB

    I have not tried the frameLength trick, but I wonder if the copy is having trouble keeping up, resulting in dropouts. You could write those samples to a .wav file, then listen to it using the quicklook, to see if the quality is suffering. If you comment out the numpy stuff, does the lower frame still cause poor results? If not, there are some techniques we can use to speed that processing.

    Other possibilities would be to reduce the sample rate (8000, 11025, or 22050 Hz), which should ease the processor burden.

  • JonB

    You would then, in the handler, set an attribute on your view with the power, which will get used next frame. (Or better yet, don't use update in the view; instead trigger the draw from the handler, thus ensuring you only draw when updated info is available.)

    If you want a 60 Hz frame rate, you'd want the frameLength to be 735 samples (44100 samples/sec divided by 60 frames/sec).
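    Platform details aside, the handler-sets-an-attribute pattern looks roughly like this (MeterView is a hypothetical stand-in for your ui.View subclass; the lock is just to be safe across the audio and UI threads):

```python
import threading

class MeterView:
    """Hypothetical stand-in for a ui.View: the audio tap handler stores
    the latest power value, and the draw step reads whatever is newest."""
    def __init__(self):
        self._power = 0.0
        self._lock = threading.Lock()

    def handler_update(self, power):
        # called from the audio handler's thread
        with self._lock:
            self._power = power
        # in the real view you would call set_needs_display() here,
        # so drawing happens only when fresh data arrives

    def draw(self):
        # called on the UI thread
        with self._lock:
            return self._power

view = MeterView()
view.handler_update(0.42)
print(view.draw())  # 0.42
```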

  • JonB

    def handler(_cmd,obj1_ptr,obj2_ptr):
        # param1 = AVAudioPCMBuffer
        #   The buffer parameter is a buffer of audio captured 
        #   from the output of an AVAudioNode.
        # param2 = AVAudioTime
        #   The when parameter is the time the buffer was captured  
        if obj1_ptr:
        obj1 = ObjCInstance(obj1_ptr)
        # reconstructed line: floatChannelData() points at the channel's float samples
        data = obj1.floatChannelData()[0]
        data_np = np.ctypeslib.as_array(obj=data, shape=(obj1.frameLength(),))  # if you want to use it outside of the handler, use .copy()

  • JonB

    Sorry, on my phone, away from my iPad... But yes, you get access to the buffer in the handler, and can compute metering directly there before passing it on to the recognizer.

    The one issue is that iOS doesn't seem to respect the buffer size -- instead giving us 16535 samples - about .375 sec -- so you only get new data a few times per second.
    There is, in theory, a way to request fewer samples (thus a faster call rate and lower latency) using the lower-level AudioUnit, but I can't seem to get that working...

  • JonB

    By the way, the answer at the bottom of that Stack Overflow post is what I've been playing around with... But the mixer is screwing up the inputNode, since the formats are incompatible.

  • JonB

    Accelerate libraries are tricky.
    These are all C functions, so you have to set c.vDSP_blah.argtypes = [...] etc.,
    meaning you have to dig up all of the function prototypes.

    However, you can just use the equivalent numpy methods, which are probably very similar in speed, since they are also vectorized and probably use the same underlying BLAS code. There are some efficient ways to cast the buffer you get as a numpy array, without copying. Then you could use np.sqrt(np.mean(np.square(np_data))) to get RMS (average power), and np.max(np.abs(np_data)) to get peak.

    Sorry I meant to post some code on this..
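    For instance, metering a buffer with numpy (synthetic data here: a 1 kHz sine standing in for the captured samples):

```python
import numpy as np

sr = 44100  # sample rate, Hz
t = np.arange(sr) / sr
# half-amplitude 1 kHz sine, one second's worth of float32 samples
np_data = (0.5 * np.sin(2 * np.pi * 1000 * t)).astype(np.float32)

rms = np.sqrt(np.mean(np.square(np_data)))   # average power (RMS)
peak = np.max(np.abs(np_data))               # peak level

print(round(float(rms), 3), round(float(peak), 3))  # 0.354 0.5
```

For a sine of amplitude A, RMS is A / sqrt(2), which is a quick sanity check on the output.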

  • JonB

    Have you tried using the debugger on that line? Or, insert

    import pdb; pdb.set_trace()

    just before it, then print out the various attributes to figure out which one is None.

    If I had to guess, the error must actually be two lines earlier, iterating over self.agents. How about checking self.agents everywhere that agents can be modified:
    * After generate_agents call in setup
    * At start of update

    print('value of agents ={}'.format(self.agents))

    If you ever find that self.agents is None instead of [], then something wonky happened!
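    To illustrate the failure mode with a toy version (Simulation, generate_agents, setup, and update are hypothetical stand-ins for the poster's class):

```python
class Simulation:
    def generate_agents(self, n):
        # a common bug: building the list but forgetting the return
        # statement would leave self.agents set to None
        return ['agent{}'.format(i) for i in range(n)]

    def setup(self, n):
        self.agents = self.generate_agents(n)
        print('value of agents = {}'.format(self.agents))

    def update(self):
        # if self.agents were None, this loop would raise
        # TypeError: 'NoneType' object is not iterable
        for agent in self.agents:
            pass

sim = Simulation()
sim.setup(3)
sim.update()
```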

