I don't know if anyone's interested, but I took a combination of the Sketch example, along with some other examples I've seen online, and came up with the following.
It's a simple Neural Network class: you prepare the data, train it, and it can hopefully guess what you draw to test it with.
- Draw three positive images, all the same, e.g. three smiley faces.
- Then draw three negative images, all the same, e.g. three sad faces.
- Then press the Train button.
- OK so now it's ready. Draw either a copy of the positive or negative image, and see if it gets it right 😀
I don't claim this is the best code ever, or even an efficient algorithm, and it could be tuned much better, but I thought I'd share it anyway 😀
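To make the idea concrete, here is a hypothetical minimal sketch of such a network (not the actual code from the post: the class name, the 4-pixel toy "images", and all hyperparameters are made up). It learns to separate a few "positive" pixel patterns from "negative" ones with one hidden layer and plain gradient descent:

```python
import numpy as np

class TinyNet:
    """Minimal one-hidden-layer network for binary classification.
    Inputs are flattened pixel grids; the output is a single sigmoid
    unit (near 1 = positive image, near 0 = negative image)."""

    def __init__(self, n_inputs, n_hidden=8, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.5, (n_inputs, n_hidden))
        self.w2 = rng.normal(0, 0.5, (n_hidden, 1))
        self.lr = lr

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train(self, X, y, epochs=2000):
        y = y.reshape(-1, 1)
        for _ in range(epochs):
            h = self._sig(X @ self.w1)        # hidden activations
            out = self._sig(h @ self.w2)      # predictions
            # backprop of squared error through both sigmoid layers
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ self.w2.T) * h * (1 - h)
            self.w2 -= self.lr * h.T @ d_out
            self.w1 -= self.lr * X.T @ d_h

    def guess(self, x):
        h = self._sig(x @ self.w1)
        return float(self._sig(h @ self.w2))  # probability of "positive"

# three toy "smiley" samples vs three toy "sad" samples (4-pixel images)
X = np.array([[1, 0, 1, 0], [1, 0, 1, 1], [1, 1, 1, 0],
              [0, 1, 0, 1], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0], dtype=float)
net = TinyNet(n_inputs=4)
net.train(X, y)
print(net.guess(np.array([1, 0, 1, 0.0])))  # should be close to 1
```

The real script does the same thing with the drawn canvas pixels as input, just with more of them.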
- replace mv = ... with

        #mv = ui.View(canvas_size, canvas_size)
        mv = ui.ScrollView(canvas_size, canvas_size)
- add this at the end

        mv.present('full_screen', orientations='landscape')
        #=== added
        wm = mv.width
        hm = mv.height
        for sv in mv.subviews:
            wm = max(wm, sv.x + sv.width)
            hm = max(hm, sv.y + sv.height)
        #print(w, h, wm, hm)
        mv.content_size = (wm, hm)
        mv.scroll_enabled = False

        scroll_right = ui.ButtonItem()
        scroll_right.title = '➡️'
        def scroll_right_action(sender):
            ws, hs = mv.content_offset
            ws = min(ws + w/2, wm - w)
            mv.content_offset = (ws, hs)
        scroll_right.action = scroll_right_action

        scroll_left = ui.ButtonItem()
        scroll_left.title = '⬅️'
        def scroll_left_action(sender):
            ws, hs = mv.content_offset
            ws = max(ws - w/2, 0)
            mv.content_offset = (ws, hs)
        scroll_left.action = scroll_left_action

        scroll_bottom = ui.ButtonItem()
        scroll_bottom.title = '⬇️'
        def scroll_bottom_action(sender):
            ws, hs = mv.content_offset
            hs = min(hs + h/2, hm - h)
            mv.content_offset = (ws, hs)
        scroll_bottom.action = scroll_bottom_action

        scroll_top = ui.ButtonItem()
        scroll_top.title = '⬆️'
        def scroll_top_action(sender):
            ws, hs = mv.content_offset
            hs = max(hs - h/2, 0)
            mv.content_offset = (ws, hs)
        scroll_top.action = scroll_top_action

        mv.right_button_items = [clearAll_button, scroll_right, scroll_left, scroll_bottom, scroll_top]
You will have 4 menu buttons to scroll in the 4 directions.
I hope it works, because I was not able to test on an iPhone 5s, but it is OK on my iPad mini 4 in portrait mode, where, like you, I don't see everything.
Some labels in the code do not have a width, and are thus set to 1024; that's the reason why you can scroll too far to the right...
Add this alignment line to check:

        lb.text = 'OK now lets see if it can Guess right'
        lb.alignment = ui.ALIGN_RIGHT
@Matteo Better solution: use this code at the end, to swipe with two fingers...
    #=== added
    wm = mv.width
    hm = mv.height
    for sv in mv.subviews:
        wm = max(wm, sv.x + sv.width)
        hm = max(hm, sv.y + sv.height)
    #print(w, h, wm, hm)
    mv.content_size = (wm, hm)
    mv.scroll_enabled = True
    mvo = objc_util.ObjCInstance(mv)
    mvo.panGestureRecognizer().setMinimumNumberOfTouches_(2)
    mvo.panGestureRecognizer().setMaximumNumberOfTouches_(2)
Tested on iPhone 5s
Don't forget to replace mv = ui.View by ui.ScrollView
Hi @cvp, wonderful, it works perfectly! Very exciting! Now I can use the jmv38 code on my little phone too :-)
Unfortunately, when I touch button 1/ Train after adding the drawings, the code tells me "TypeError: integer argument expected, got float".
I will perform some tests and if needed I will post here the full traceback.
Thank you again
I tried your code: it works fine on my iPad. No error.
But I can scroll only horizontally. How could I scroll vertically too?
@jmv38 I think your view does not need to scroll vertically.
If I add a label at y=1000, vertical scroll with two fingers works
@jmv38 uncomment my line to check the vertical dimension and usage.
@Matteo for your request 2: it already works. Don't press reset, just start another drawing in one of the boxes and it will replace the previous one. Then tap Train.
Hi @jmv38, I'm sorry for the delay, I've been busy and didn't use Pythonista for one week. Now I tested some things and solved the "integer argument expected" problem of version 14 by adding int(argument) where needed in your code. Also, since I use Python 2.7 by default, I had forgotten to put #!python3 on the first line of the script (without it I had another problem with Python 2.7). Now everything works well!
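For anyone hitting the same error, here is an illustration of what goes wrong (the variable names are made up, not from the script): under Python 3, `/` always produces a float, and calls that expect integer pixel coordinates then raise exactly this TypeError, hence the int() fix.

```python
# Under Python 3 (or Python 2 with "from __future__ import division"),
# dividing two ints with / yields a float, and APIs that require integer
# pixel arguments then raise
# "TypeError: integer argument expected, got float".
w = 301

cx_float = w / 2        # 150.5 -- a float, even though w is an int
cx_fixed = int(w / 2)   # the fix used above: wrap the argument in int()
cx_floor = w // 2       # idiomatic alternative: floor division stays int

print(cx_float, cx_fixed, cx_floor)  # 150.5 150 150
```

Floor division (`//`) behaves the same under both Python 2 and 3, so it is the safer choice when a script must run under either interpreter.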
For request 2 about erasing only one drawing, I can't understand, sorry: if I try to draw something else in the square, the old drawing remains. Am I wrong?
Anyway thank you again (also @cvp) for support.
@Matteo For your request 2: when you draw on an existing drawing, the old one stays until you finish your stroke, i.e. when your finger leaves the screen.
@cvp Hi, sorry, but it doesn't work for me; could the reason be that I use Pythonista 3.1 (301016)?
When I draw something in a square, lift my finger from the screen, then draw something else in the same square and lift my finger again, the old drawing remains; it doesn't disappear to show only the new one.
But don't worry, it is not so important, the script works well now.
@Matteo You're right, but try it after a training run. If I redraw before training, both drawings stay; if I redraw after training, the first drawing disappears.
@jmv38 No problem at all 😀
Hi @jmv38, great job!
Not sure if you already know that Core ML can be used in Pythonista. There's a snippet in omz's gist.
@jmv38 and @mkeywood: can the script also recognize the same picture drawn in different positions inside the three squares? I tried it, but maybe I'm doing something wrong, because it doesn't work: if I draw the same picture in different positions, the guess square doesn't pick the right picture.
@st84 hi. I don't see which omz post you are referring to. Can you post a link? Thanks.
@Matteo my version of the script tries to recognize 3 different objects, so drawing the same object in 2 boxes will not work: the software assumes they are different.
The initial version by mkeywood is different.
- how much memory is used? where is the model stored?
- what is the NN structure?
- is it fully local?
@jmv38 the model is downloaded from https://docs-assets.developer.apple.com/coreml/models/MobileNet.mlmodel and copied to a local file; after that, it is entirely local.
It's a big file of 170 MB.
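The download-once-then-local behaviour @jmv38 asked about can be sketched like this (only the URL comes from the post; the local filename and the helper name are my assumptions):

```python
import os
import urllib.request

# URL from the post; the cache filename is an assumption
MODEL_URL = ('https://docs-assets.developer.apple.com/'
             'coreml/models/MobileNet.mlmodel')
MODEL_PATH = 'MobileNet.mlmodel'

def get_model_path():
    """Download the model the first time only; later calls are fully local."""
    if not os.path.exists(MODEL_PATH):
        # one-time download (the file is large, so this can take a while)
        urllib.request.urlretrieve(MODEL_URL, MODEL_PATH)
    return MODEL_PATH
```

Once the file exists on disk, no network access is needed, which matches the "is it fully local?" answer above.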