• JonB

    Taking N equal columns, with margin M between the edge and adjacent columns:

    MWMWMWM

    Note for 3 columns, there will be 4 margins.

    W = (self.width - (N+1)*M) / N
    x = M + (col-1)*(W+M)

    That assumes the anchor_point is at the bottom left. Otherwise, for an anchor_point at the center, add half a width:

    x = M+(col-1)*(W+M)+W/2

    Similarly for rows

    The anchor point is what might be screwing with you.
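    Here is a minimal sketch of the layout math above as a helper function (the function name and the centered_anchor flag are just illustrative, not from the original post):

    def column_x(width, N, M, col, centered_anchor=False):
        # width: total width of the parent view
        # N: number of equal columns, M: margin between edge and adjacent columns
        W = (width - (N+1)*M) / N        # column width
        x = M + (col-1)*(W+M)            # left edge of column `col` (1-based)
        if centered_anchor:
            x += W/2                     # shift when anchor_point is at the center
        return x, W

    # e.g. 3 columns in a 320-wide view with 10 pt margins:
    for col in (1, 2, 3):
        print(column_x(320, 3, 10, col))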

    posted in Pythonista read more
  • JonB

    Tensorslow seems to be a pure numpy implementation of tensorflow, for whatever that is worth. There is an associated blog where they basically develop it from scratch, teaching you the basics of neural networks on the way.

    posted in Pythonista read more
  • JonB

    this might not be a pysmb issue, but rather old bundled modules in Pythonista.
    You might try it on a PC first.

    Does pysmb depend on paramiko, OpenSSL, or anything like that?

    posted in Pythonista read more
  • JonB

    yeah, ui.in_background would work, but you might need to wrap the actual label setting in an on_main_thread, or perhaps call set_needs_display.

    The delay approach allows the callback to return, then runs the other bit on the main thread, so it is also good -- though no other ui interaction will be allowed while it runs, which may be a good thing in this case.

    See this thread for another option -- you can define a decorator that runs a method in a new Python thread, so you get the convenience of a decorator without the drawback of responsiveness issues.
    https://forum.omz-software.com/topic/3495/label-text-not-displayed-until-end-of-button-action/5
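    A minimal sketch of that kind of decorator (the names and the do_long_computation call are just illustrative, not from the linked thread):

    import threading
    import functools

    def in_new_thread(func):
        # run the decorated function in its own thread so the button action returns immediately
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            t = threading.Thread(target=func, args=args, kwargs=kwargs)
            t.daemon = True
            t.start()
            return t
        return wrapper

    @in_new_thread
    def button_tapped(sender):
        result = do_long_computation()                  # do_long_computation is a hypothetical slow call
        sender.superview['label1'].text = str(result)   # as noted above, may need objc_util.on_main_thread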

    posted in Pythonista read more
  • JonB

    You could probably use ui.in_background around your loop. That is what it is for.

    You didn't show the rest of your context -- does your loop get called from a ui event like a button action? The key thing to remember: NOTHING is updated in the UI during your button action (or touch_moved, etc.) until your action returns. ui.in_background allows your button action to return, queueing up the work on a shared background thread.
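    A minimal sketch of that pattern, with a hypothetical slow loop inside a button action:

    import ui
    import time

    @ui.in_background
    def button_tapped(sender):
        # the action returns right away; the loop runs on the shared background thread,
        # so the label actually updates between iterations
        label = sender.superview['label1']   # hypothetical label in the same view
        for i in range(10):
            time.sleep(0.5)                  # stand-in for real work
            label.text = 'step {}'.format(i)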

    posted in Pythonista read more
  • JonB

    try force quitting Pythonista, and trying again. matplotlib takes a long time to import, and if you cancel while it is loading, things get screwy. Force quit the app, and it should be working again.

    Also, since I see you are using StaSh, make sure you never tried to install matplotlib on your own.

    posted in Pythonista read more
  • JonB

    The way the original ray cast code worked, it traced a ray for each pixel until it hit a wall:

    if level[xd][zd] !=0 or compteur>=scan:
                    break
    

    You would need to figure out which type of wall was hit (the value of level[xd][zd]), then base the texture choice that happens later on that.
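    A minimal, self-contained sketch of that idea -- the grid values, texture names, and the cast helper are all hypothetical, just to show remembering the hit value and mapping it to a texture:

    # map wall type (the value stored in the level grid) to a texture name
    textures = {1: 'brick.png', 2: 'stone.png', 3: 'door.png'}

    def cast(level, x, z, dx, dz, scan=64):
        xd, zd = x, z
        for compteur in range(scan):
            xd += dx
            zd += dz
            if level[int(xd)][int(zd)] != 0:
                return level[int(xd)][int(zd)]   # the wall type that was hit
        return 0

    level = [[0, 0, 2],
             [0, 0, 1],
             [0, 0, 1]]
    wall_type = cast(level, 0, 0, 0.5, 0.5)
    texture = textures.get(wall_type, 'default.png')   # choose the texture based on what was hit
    print(wall_type, texture)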

    posted in Pythonista read more
  • JonB

    I believe if you create a folder inside of iCloud, you can run from there...

    posted in Pythonista read more
  • JonB

    you can use tensorslow in pythonista

    posted in Pythonista read more
  • JonB

    cvp, that part looked ok to me, though I didn't try to run it.

    defs can reference variables that do not exist yet when the def runs, as long as they exist in the enclosing scope by the time the function is called.

    Maybe not the best practice for clarity, but otherwise seems ok to me. I think he just forgot the comment symbol?

    def a():
       print(b)             # b does not exist yet when the def runs

    try:
       a()
       print('success')
    except NameError:
       print('fail')        # this prints: b is still undefined when a() is called

    b = 1
    try:
       a()
       print('success.')    # this prints now: b exists by the time a() is called
    except NameError:
       print('fail')


    posted in Pythonista read more
  • JonB

    If you have an error you don't understand, print the traceback, and paste back here!

    I will say that in your first code you had a line reading:

    ---------------
    

    Which of course is not valid python.

    In your second example you wrote

       ^ # some comment...
    

    The leading caret before the comment character is also invalid Python.

    I'm wondering if you just forgot the leading comment in both cases.

    posted in Pythonista read more
  • JonB

    this may be obvious, but be sure to set the frameLength prior to passing it to the recognizer, otherwise it will be getting duplicate data.

    What happens, I think, is that the buffer contains all of the samples, including the initial 0.375 sec or so. If you change frameLength to 1024, you are telling the engine how many samples you consumed -- it wants to keep that buffer the same size and never skip, so it calls you sooner next time, with everything shifted left and the new samples appended at the end. The least latency would be those end samples. This takes the latency down from 0.375 sec for me to maybe 20-30 msec.

    
    def handler(_cmd,buffer_ptr, samptime_ptr):
        if buffer_ptr:
            buffer = ObjCInstance(buffer_ptr)
            # a way to get the sample time in sec of the start of the buffer, comparable to time.perf_counter.  You can difference these to see latency to the start of the buffer.
            hostTimeSec=AVAudioTime.secondsForHostTime_(ObjCInstance(samptime_ptr).hostTime())
    
            #you can also check for skips, by looking at sampleTime(), which should be always incrementing by whatever you set the framelength to... if more than that, then your other processing is taking too long
    
            #this just sets up pointers that numpy can read... no actual read yet
            data=buffer.floatChannelData().contents
            data_np=np.ctypeslib.as_array(obj=data,shape=(buffer.frameLength(),))
    
            #Take the LAST N samples for use in visualization... i.e the most recent, and least latency
            update_path(data_np[-1024:])
    
            #this tells the engine how many samples we consumed ... next time, we will get samples [1024:] along with 1024 new samples
            buffer.setFrameLength_(1024)
    
            # be sure to append the buffer AFTER setting the frameLength, otherwise you will keep feeding it repeated portions of the data
            requestBuffer.append(buffer)
    

    posted in Pythonista read more
  • JonB

    I have not tried the frameLength trick, but I wonder if the copy is having trouble keeping up, resulting in dropouts. You could write those samples to a .wav file, then listen to it using quicklook, to see if the quality is suffering. If you comment out the numpy stuff, does the lower frame length still cause poor results? If not, there are some techniques we can use to speed up that processing.

    Other possibilities would be to reduce the sample rate (8000, 11025, or 22050), which should ease the processor burden.
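    A minimal sketch of dumping captured samples to a .wav for a listen (the save_wav helper and collected_chunks are hypothetical; quicklook is from Pythonista's console module):

    import wave
    import numpy as np
    import console

    def save_wav(samples, path='capture.wav', rate=44100):
        # convert float samples in [-1, 1] to 16-bit PCM and write a mono wav
        pcm = (np.clip(samples, -1, 1) * 32767).astype(np.int16)
        with wave.open(path, 'wb') as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(rate)
            w.writeframes(pcm.tobytes())
        console.quicklook(path)   # listen for dropouts or glitches

    # e.g. save_wav(np.concatenate(collected_chunks))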

    posted in Pythonista read more
  • JonB

    You would then, in the handler, set an attribute on your view with the power, which will get used next frame. (Or better yet, don't use update in the view; instead trigger the draw from the handler, thus ensuring you only draw when updated info is available.)

    If you want a 60 Hz frame rate, you'd want the frameLength to be 735 samples (44100 / 60).
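    A minimal sketch of the first approach, with hypothetical names (the handler just stores the latest value; the view picks it up on its next frame):

    import ui

    class MeterView (ui.View):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.power = 0.0
            self.update_interval = 1/60   # calls update() at ~60 Hz

        def update(self):
            self.set_needs_display()

        def draw(self):
            # draw a simple bar whose height follows the latest measured power
            h = min(1.0, self.power) * self.height
            ui.Path.rect(0, self.height - h, self.width, h).fill()

    # in the audio tap handler (sketch):
    #     meter_view.power = power   # just set the attribute; drawn next frame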

    posted in Pythonista read more
  • JonB

    def handler(_cmd,obj1_ptr,obj2_ptr):
        # param1 = AVAudioPCMBuffer
        #   The buffer parameter is a buffer of audio captured 
        #   from the output of an AVAudioNode.
        # param2 = AVAudioTime
        #   The when parameter is the time the buffer was captured  
        if obj1_ptr:
            obj1 = ObjCInstance(obj1_ptr)
            #print('length:',obj1.frameLength(),'sample',ObjCInstance(obj2_ptr).sampleTime())
            #print('format:',obj1.format())
            data=obj1.floatChannelData().contents
            data_np=np.ctypeslib.as_array(obj=data,shape=(obj1.frameLength(),)) #if you want to use it outside of the handler, use .copy()
            power=np.sqrt(np.mean(np.square(data_np)))  # RMS power of this buffer

    posted in Pythonista read more
  • JonB

    Sorry, on my phone, away from my iPad... But yes, you get access to the buffer in the handler, and can compute the meter directly there before passing it on to the recognizer.

    The one issue is that iOS doesn't seem to respect the requested buffer size -- instead giving us 16535 samples, about 0.375 sec -- so you only get new data a few times per second.
    There is in theory a way to request fewer samples (thus a faster call rate and lower latency) using the lower level audiounit, but I can't seem to get that working...

    posted in Pythonista read more
  • JonB

    By the way, the answer at the bottom of that Stack Overflow post is what I've been playing around with... But the mixer is screwing up the inputNode, since the formats are incompatible.

    posted in Pythonista read more
  • JonB

    Accelerate libraries are tricky.
    These are all C functions, so you have to set c.vDSP_blah.argtypes = [...], etc.,
    meaning you have to dig up all of the function prototypes and so on.

    However, you can just use the equivalent numpy methods, which are probably very similar in speed, since they are also vectorized and probably use the same underlying BLAS code. There are some efficient ways to cast the buffer you get as a numpy array, without copying. Then to get average power you could use np.sqrt(np.mean(np.square(np_data))) for RMS, and np.max(np.abs(np_data)) for peak.

    Sorry I meant to post some code on this..
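    A minimal sketch of that approach, reusing the buffer-wrapping pattern from the handler code elsewhere in this thread (the function name is just illustrative):

    import numpy as np
    from objc_util import ObjCInstance

    def buffer_levels(buffer_ptr):
        # wrap the ObjC buffer and view its float channel data as a numpy array without copying
        buf = ObjCInstance(buffer_ptr)
        data = buf.floatChannelData().contents
        samples = np.ctypeslib.as_array(obj=data, shape=(buf.frameLength(),))
        rms = np.sqrt(np.mean(np.square(samples)))   # average power
        peak = np.max(np.abs(samples))               # peak level
        return rms, peak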

    posted in Pythonista read more
  • JonB

    Have you tried using the debugger on that line? Or, try

    import pdb
    pdb.pm()
    

    Then print out the various attributes to figure out which one is None.

    If I had to guess, it appears that the error must be two lines before, iterating over self.agents. How about checking self.agents everywhere that it can be modified:
    * After generate_agents call in setup
    * At start of update

    i.e.
    print('value of agents = {}'.format(self.agents))

    If you ever find that self.agents is None instead of [], then something wonky happened!

    posted in Pythonista read more