Welcome!
This is the community forum for my apps Pythonista and Editorial.
For individual support questions, you can also send an email. If you have a very short question or just want to say hello — I'm @olemoritz on Twitter.
Machine Learning
-
v12 https://gist.github.com/8fa3ac1516ef159284f3090ba9494390
Big perf improvement thanks to prior normalization of image size and position -
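To illustrate what such a normalization might look like (a sketch only: the `normalize_drawing` helper, the 16x16 output size and the nearest-neighbor rescaling are assumptions, not the actual v12 code):

```python
import numpy as np

def normalize_drawing(img, out_size=16):
    """Crop the drawing to its bounding box and rescale it to a fixed
    out_size x out_size grid (nearest neighbor), so that the position
    and size of the drawing no longer vary between samples.
    Hypothetical helper; the gist may normalize differently."""
    ys, xs = np.nonzero(img)
    if len(xs) == 0:  # empty canvas: nothing to normalize
        return np.zeros((out_size, out_size))
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    rows = np.arange(out_size) * h // out_size  # nearest-neighbor row picks
    cols = np.arange(out_size) * w // out_size  # nearest-neighbor column picks
    return crop[rows][:, cols]
```

With this, the same shape drawn small in one corner or large in the center feeds identical pixels to the network, which is why training gets easier.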
Huge update! Now you can inspect the states and weights of the internal layers and get a feeling of how the network decides!
https://gist.github.com/ef4439a8ca76d54a724a297680f98ede
Also, I added a copyright notice because this is a lot of work. @mkeywood, if you are uncomfortable with this, let me know.
Here is a video showing the code in action
https://youtu.be/yBR80KwYtcE -
screenshot
-
300 views and not a word...???
Hey guys (and gals), if you like the code I share, some encouragement is always welcome!
Thanks. -
v14: colors modified to better understand the network computation.
Now it is much easier.
https://gist.github.com/94a8d1474a6ef6e49972518baa730f1b -
Can you explain what is happening in the bottom set of plots?
-
@JonB hello.
The bottom plot shows:
During training: the training set, where the color of each element corresponds to the recognition result for that element.
During guessing (what you can see above): the states of input layer0, then weights 0>1, then the states of layer1, then weights 1>2, then the states of layer2, then weights 2>3, then the states of output layer3 = 3 neurons.
First I display the weights in black and white (white = positive, black = negative), then I display in color, and in sequence, the signal propagation through the network:
- the neuron states: green means active (1) and red means inactive (0).
- neuron x weight values: green means a positive influence on the neuron (excitation), red a negative influence (damping). The brighter, the stronger. I'll call this wxn, for weights x neuron.
The colors of the 3 neurons of layer3 (red, red, green) are the same under the sketches: they correspond to the same data.
The group of 3 blocks to the left of these neurons (weights 2>3) are the input weights of each neuron x the input of the previous layer.
Let's analyse this picture.
Here you can see that a single wxn is mainly responsible for the 3 neuron states: it's the weight (1/2, 0/2) ((0,0) is the top left of each block), excited by neuron (1/2, 0/2) (green). The other neurons are deactivated (red), so their wxn are insignificant (black). This neuron is damping the first 2 neurons (red wxn) and exciting the last one (green wxn).
Now let's see why neuron (1/2, 0/2) of layer2 is excited. Look in wxn 1>2 at block (1/2, 0/2). Several wxn are green; they are responsible for the response. There is no opposite response (red).
Let's look at the strongest responding wxn, (2/4, 3/4). The corresponding neuron of the previous layer is green too (active). Look at the corresponding wxn 0>1: you can see that the top left part of the 'o' drawing is green = detected.
So we can say the 'o' has been detected because of its top left part, which is not present in the 2 other drawings. That makes sense.
And the 2 other choices have been rejected for the same reason (though it might not always be the case). I hope this explanation is what you were expecting.
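The wxn blocks described above are just element-wise products of a weight matrix with the previous layer's states. A minimal sketch of that idea (the step activation and the layer shapes are assumptions, not the exact code from the gist):

```python
import numpy as np

def forward_with_wxn(x, weights):
    """Forward pass through a small fully connected network that also
    returns the per-connection products (weight x input neuron state),
    which are what the colored blocks visualize: green for a positive
    (exciting) contribution, red for a negative (damping) one."""
    states = [x]
    wxn = []
    for W in weights:                          # W has shape (n_out, n_in)
        contrib = W * states[-1]               # one product per connection
        a = contrib.sum(axis=1)                # net input of each neuron
        states.append((a > 0).astype(float))   # active (1) / inactive (0)
        wxn.append(contrib)
    return states, wxn
```

Plotting each `contrib` matrix with a green/red colormap, brightness proportional to magnitude, reproduces the kind of blocks shown in the video.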
-
Here is a video with the same explanation:
https://youtu.be/L-6ZwbXjG48 -
On the global thing... the reason that works is that it forces del to be called on older images that are no longer being used. Elsewhere in this forum you can find discussion that the garbage collector is not fast enough at deleting images when cycling through lots of images. You can also fix this issue by calling del yourself, but I like the elegance of the global solution.
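The effect can be sketched without Pythonista: rebinding a single global drops the last reference to the old object, so CPython's reference counting frees it immediately, without waiting for a garbage-collection pass. Here Frame is a hypothetical stand-in for a large ui.Image:

```python
class Frame:
    """Hypothetical stand-in for a big ui.Image."""
    live = 0  # how many frames are currently alive

    def __init__(self):
        Frame.live += 1

    def __del__(self):
        Frame.live -= 1

current = None  # the single global slot

def show_next():
    """Rebinding 'current' drops the previous frame's last reference,
    so it is freed right away instead of piling up until the GC runs."""
    global current
    current = Frame()
    return current
```

If each frame were instead appended to a list, or held by a cycle, the old images would accumulate until collection, which is the slowdown discussed above.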
-
@jmv38 Hi, very interesting, congratulations and thank you for your explanations!
Unfortunately I can't use your script due to the small screen of my iPhone 5s (no, I don't have an iPad).
I'd like to ask you some things, if you are interested and have time:
- would it be too difficult for you to modify your script so that the user can swipe/move the full graphical panel with their fingers on small devices, to be able to see and access all the sub views ("Prepare the data", "Train the model", etc.)? I think that by using the powerful scripts written by @mikael it could be possible, but really I don't know how.
- could you think of a good way to add a "delete" button for each single square, in order to delete the drawing inside only one square, instead of deleting everything by touching the Reset!! key?
- in my opinion it would be nice to implement (but I repeat, only if you are interested and have time) a way to have a grid (with adjustable dimensions) for when the user draws very simple things in the squares, to test the algorithm also with simple drawings, for example 2x3 big-pixel images.
Thank you
Regards -
@ccc not quite sure what you refer to.
-
@Matteo thanks
1/ and 3/ are quite some work and are not in my top priorities now.
For 1/, maybe you could modify the code to fit your needs?
2/ is easy, I could add it; I'll put that in a future version. -
@Matteo A quick and dirty solution that should work for any future version of @jmv38 code:
- replace mv = ... by
#mv = ui.View(canvas_size, canvas_size)
mv = ui.ScrollView(canvas_size, canvas_size)
- add this at the end
mv.present('full_screen', orientations='landscape')
#=== added
wm = mv.width
hm = mv.height
for sv in mv.subviews:
    wm = max(wm, sv.x + sv.width)
    hm = max(hm, sv.y + sv.height)
#print(w,h,wm,hm)
mv.content_size = (wm, hm)
mv.scroll_enabled = False

scroll_right = ui.ButtonItem()
scroll_right.title = '➡️'
def scroll_right_action(sender):
    ws, hs = mv.content_offset
    ws = min(ws + w/2, wm - w)
    mv.content_offset = (ws, hs)
scroll_right.action = scroll_right_action

scroll_left = ui.ButtonItem()
scroll_left.title = '⬅️'
def scroll_left_action(sender):
    ws, hs = mv.content_offset
    ws = max(ws - w/2, 0)
    mv.content_offset = (ws, hs)
scroll_left.action = scroll_left_action

scroll_bottom = ui.ButtonItem()
scroll_bottom.title = '⬇️'
def scroll_bottom_action(sender):
    ws, hs = mv.content_offset
    hs = min(hs + h/2, hm - h)
    mv.content_offset = (ws, hs)
scroll_bottom.action = scroll_bottom_action

scroll_top = ui.ButtonItem()
scroll_top.title = '⬆️'
def scroll_top_action(sender):
    ws, hs = mv.content_offset
    hs = max(hs - h/2, 0)
    mv.content_offset = (ws, hs)
scroll_top.action = scroll_top_action

mv.right_button_items = [clearAll_button, scroll_right, scroll_left, scroll_bottom, scroll_top]
You will have 4 menu buttons to scroll in the 4 directions
I hope it works, because I was not able to test on an iPhone 5s, but it is ok on my iPad mini 4 in portrait mode, where I don't see everything either.
Some labels in the code do not have a width set, so it defaults to 1024; that's the reason why you can scroll too far to the right...
Put this alignment line in to check:
lb.text = 'OK now lets see if it can Guess right'
lb.alignment = ui.ALIGN_RIGHT
-
@Matteo Better solution, use this code at the end, to swipe with two fingers...
#=== added
wm = mv.width
hm = mv.height
for sv in mv.subviews:
    wm = max(wm, sv.x + sv.width)
    hm = max(hm, sv.y + sv.height)
#print(w,h,wm,hm)
mv.content_size = (wm, hm)
mv.scroll_enabled = True
mvo = objc_util.ObjCInstance(mv)
mvo.panGestureRecognizer().setMinimumNumberOfTouches_(2)
mvo.panGestureRecognizer().setMaximumNumberOfTouches_(2)
Tested on iPhone 5s
Don't forget to replace mv = ui.View by mv = ui.ScrollView
-
Hi @cvp, wonderful, it works perfectly! Very exciting! Now I can use the jmv38 code on my little phone too :-)
Unfortunately, when I touch the 1/ Train button after adding the drawings, the code tells me "TypeError: integer argument expected, got float".
I will perform some tests and if needed I will post here the full traceback.
Thank you again
Regards -
@cvp hello
I tried your code: it works fine on my iPad. No error.
But I can only scroll horizontally. How could I scroll vertically too?
Thanks. -
@jmv38 I think your view does not need to scroll vertically.
If I add a label at y=1000, vertical scroll with two fingers works -
-
@Matteo for your request 2/: it already works. Don't press reset; just start another drawing in one of the boxes and it will replace the previous one. Then tap train.