• @craigmrock Shift the two defs, __str__ and scale_steps, one indent level to the left; they are currently nested inside __init__.
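    As an illustration of that fix (the class and method bodies here are hypothetical, just echoing the names in the post): defs indented inside __init__ become local functions of the constructor instead of methods of the class.

```python
class Scale:
    def __init__(self, steps):
        self.steps = steps
        # If __str__ and scale_steps were indented to THIS level,
        # they would be local functions inside __init__ and would
        # never become methods of the class.

    # De-dented one level: now they are proper methods.
    def __str__(self):
        return 'Scale(%r)' % self.steps

    def scale_steps(self):
        return list(self.steps)
```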

  • Thanks for sharing this JonB. This technique looks handy. I will give this a try.

  • Thanks. At least I'm not crazy. This really limits the usefulness of the debugger. Sometimes the call chain can be complex and stepping to get to the code I need to look at isn't feasible. I might try a few more things and report back.

got it. iOS really has no native support for SVG.
    PNG, JPG, or PDF are what you will want to use.

For a single color, one option is to use a font-creation tool (there are many) that lets you convert your SVG to an SVG font, then use svg2ttf to convert that to a TTF font, which can be installed system-wide or loaded temporarily in-app. Then you can create images from the glyphs. Not really worth it, in my opinion...

Another option, which looks promising but I have not explored yet:
    https://github.com/Kozea/CairoSVG can parse SVG to produce a tree object, which then uses
    https://github.com/Kozea/cairocffi/ to convert to PNG. cairocffi might work without modification, or with a few small ones, since it relies on FFI in a similar way to objc_util, and I think the iOS Cairo is similar if not identical to the Mac version. If not, you might also be able to use that tree to write your own output as ui.Path commands in a ui.ImageContext, and get_image then provides the ui.Image.

A final option -- convert the SVG to a vector PDF. I believe ui.Image.from_file can read PDFs. I don't know how they compare size-wise; some programs might add a lot of garbage to a PDF, but ImageMagick would probably do it well.
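    As a minimal sketch of the "parse the SVG tree yourself" route (assuming a simple SVG whose drawing data lives in path elements; translating each d attribute into ui.Path move_to/line_to calls is left out):

```python
import xml.etree.ElementTree as ET

SVG_NS = '{http://www.w3.org/2000/svg}'

def extract_paths(svg_text):
    """Return the d-attribute strings of all <path> elements."""
    root = ET.fromstring(svg_text)
    return [el.get('d') for el in root.iter(SVG_NS + 'path')]

# Example: a tiny inline SVG with one triangle path.
svg = '''<svg xmlns="http://www.w3.org/2000/svg">
  <path d="M 0 0 L 10 0 L 5 8 Z"/>
</svg>'''
```

    Each extracted d string would then be fed to your own parser that replays the commands on a ui.Path inside a ui.ImageContext.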

  • @omz
I have a success story with your fix script (June 19/18). Here's what I did: I pasted approx. 400 '#\input texinfo' lines into a text file, then pasted the contents (all one line, no line breaks) into your script as a string, replacing the existing '#\input texinfo'. I changed the folder permissions for the two lib folders to read/write (temporarily) and ran your script. It fixed all but a half dozen of the files. I then went into the folders manually; the unchanged files stood out because they didn't have today's modification date. I manually pasted the approx. 400 from the original text file into those few.
Successfully uploaded to App Store Connect, where my app is now in beta on TestFlight.
    Thank You very much!

  • @JonB said:

You can connect to each device once, as long as you hang onto references to the peripheral object and characteristic.

    The trick is then keeping track of the reads, since did_update_value does not include the peripheral, so you cannot tell which is which.

so the pseudocode would be:

    1. Discover devices.
    2. Connect to the devices, storing each peripheral in a list or dict.
    3. Discover the service/characteristic, and store the characteristic.
    4. Once you have a characteristic, run a polling loop: for each p in peripherals, set current_p = p (stored as a global), call p.read_characteristic_value(c), then sleep for a while, or use a threading.Event etc. to await did_update_value.

in did_update_value, check current_p to get the UUID, then store the read value somewhere, print it out, whatever, and set the Event flag to wake the main loop

How can I hang onto references to the peripheral object and characteristics? I stored the service and the discovered characteristics in a list and used it to distinguish between both nodes. I understood the procedure you described, but I don't know how to integrate it and put did_update_value in a state that switches between peripherals.
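    The bookkeeping JonB describes can be sketched independently of Bluetooth. Below, the delegate callback is simulated by a timer thread; in real Pythonista cb code, did_update_value(c, error) would play that role, and the peripheral names and read function are placeholders:

```python
import threading

results = {}            # peripheral id -> last value read
current_p = None        # which peripheral the pending read belongs to
value_ready = threading.Event()

def did_update_value(value):
    # The real cb callback gets no peripheral argument, so we use
    # the current_p global to know who answered.
    results[current_p] = value
    value_ready.set()

def poll(peripherals, read_func):
    global current_p
    for p in peripherals:
        current_p = p
        value_ready.clear()
        read_func(p)                 # stands in for p.read_characteristic_value(c)
        value_ready.wait(timeout=5)  # block until did_update_value fires

# Simulated asynchronous 'read', answering like a BLE stack would.
def fake_read(p):
    threading.Timer(0.01, did_update_value, args=('value-for-' + p,)).start()

poll(['node-A', 'node-B'], fake_read)
```

    The key point is that current_p is set before each read and only one read is in flight at a time, so the callback can always attribute the incoming value correctly.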

On the other hand, using multiprocessing on the desktop did noticeably reduce the overall processing time. So I decided to include it as a conditional feature, after platform detection.

If I later decide to implement an optimization routine based on a web API, threading or async I/O will definitely be required.
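    A minimal sketch of that kind of conditional feature (the worker function is illustrative; the assumption is that multiprocessing is unavailable or restricted on iOS/Pythonista, so any failure falls back to a plain serial map):

```python
def work(x):
    # placeholder for the real per-item computation
    return x * x

def process_all(items):
    try:
        # On iOS (Pythonista) the import or Pool() call fails,
        # so we drop through to the serial fallback below.
        from multiprocessing import Pool
        with Pool() as pool:
            return pool.map(work, items)
    except Exception:
        return [work(x) for x in items]  # serial fallback

if __name__ == '__main__':
    print(process_all([1, 2, 3]))  # -> [1, 4, 9]
```

    Note that with spawn-based platforms the worker must be importable at module level and the entry point guarded by `if __name__ == '__main__'`, as above.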

• as for zipimport, you would do
    sys.path.insert(0,'pathtozipfile.zip')
    then simply import the name

    https://docs.python.org/2/library/zipimport.html

    so your main script might look like
    mainscript.py:

import os, sys

    os.chmod('myfile.zip', 0o100444)  # make the zip read-only (optional)
    sys.path.insert(0, 'myfile.zip')
    import myapp
    myapp.main()

    where you include a file myapp.py in myfile.zip, which includes a main() function that has your app's logic.
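    A self-contained sketch of the whole round trip (the file and function names are the hypothetical ones from the post; the zip is built with the zipfile module just to make the example runnable):

```python
import sys
import zipfile

# Build myfile.zip containing a minimal myapp.py with a main().
with zipfile.ZipFile('myfile.zip', 'w') as zf:
    zf.writestr('myapp.py', "def main():\n    return 'app is running'\n")

# Putting the archive on sys.path makes it importable; the built-in
# zipimport hooks handle the rest.
sys.path.insert(0, 'myfile.zip')
import myapp

print(myapp.main())  # -> app is running
```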

    for initial installation, you might point them to a bit.ly link which includes an executable url:
    https://forum.omz-software.com/topic/3929/new-url-scheme

• Well, in iOS 12 with Siri Shortcuts the story may be a little different. If we can at least use URL schemes with voice commands, that will be some progress. Has anyone around here had the opportunity to play with the developer beta and Pythonista?

• @omz I have a suggestion: in the documentation, when something is a subclass, it should always say so and link to the parent. E.g., here

    "class ui.Button"

    becomes

    "class ui.Button (subclass of ui.View)"

I have often forgotten that something is a subclass, or not known it at all.

• you must move your main script to Documents (a folder under "This iPad")

    when you open another app's file, that path is read-only

• yeah, that is an error in the docs -- everything in the ui module that refers to images means ui.Image, not PIL.Image.

    The ListDataSource should work, in theory, though it may or may not put the image where you would like.

  • @ccc thanks, that cleared up one of the errors I was getting through experimenting.
    Craig

• @cvp Thank you so much for your comment. I didn't know about this function. However, as far as I can tell from its description, speech.recognize translates voice to text from a recorded file.
    I hope that my application can receive voice directly, but if I implement the following procedure, I may be able to get what I want:

    1. Run a sound recorder.
    2. Record the user's voice and save it.
    3. Call speech.recognize to translate the voice file from step 2 into text.
  • @JonB
I wanted to handle links opened from Tweetbot and links opened from an RSS reader with as few taps as possible.
    It seems I can process them with the same number of taps if I use a WebView. Thank you.

• You need to use AVCapturePhotoOutput instead of AVCaptureStillImageOutput to get raw output. There are some other changes needed as well; you will have to read the docs or examples that go along with that class.

• Yes, I understand that Editorial needs "full access" to let users configure another folder to sync.

    I trust Editorial not to access other folders, but maybe someone else does not. So there are still some privacy concerns here.

    If some users would give up the convenience of choosing a custom Dropbox folder to sync, could they get an option for the "App folder only" access type?

• @JonB I finally finished the sample program I wanted to make. My understanding is that every view is a component, like a button or a slider, on the main canvas, and each component should be added to the main canvas with add_subview().
    Thanks to your comment, I got it working.
