I don't know if anyone's interested, but I took a combination of the Sketch example, along with some other examples I've seen online, and came up with the following.
It's a simple Neural Network class: you prepare data, train it, and it can hopefully guess what you draw when you test it.
- Draw three positive images, all the same, e.g. three smiley faces.
- Then draw three negative images, all the same, e.g. three sad faces.
- Then press the Train button.
- OK, so now it's ready. Draw a copy of either the positive or the negative image, and see if it gets it right 😀
I don't claim this is the best code ever, or even an efficient algorithm (it could be tuned much better), but I thought I'd share it anyway 😀
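If you're curious what the core of such a class can look like, here is a minimal sketch of the idea: a tiny one-hidden-layer NumPy network trained with plain backprop. The names (SimpleNN, train, guess), the layer sizes and the training loop are illustrative assumptions, not necessarily what the gist actually does.

```python
import numpy as np

class SimpleNN:
    # a tiny one-hidden-layer network; sizes are placeholders
    def __init__(self, n_in=625, n_hidden=16, n_out=1):
        np.random.seed(0)
        self.W1 = np.random.randn(n_in, n_hidden) * 0.1
        self.W2 = np.random.randn(n_hidden, n_out) * 0.1

    def _sig(self, z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, X):
        self.h = self._sig(X.dot(self.W1))     # hidden layer states
        return self._sig(self.h.dot(self.W2))  # output layer states

    def train(self, X, y, epochs=2000, lr=0.5):
        for _ in range(epochs):
            out = self.forward(X)
            # plain backprop on squared error; sigmoid derivative is s*(1-s)
            d_out = (out - y) * out * (1 - out)
            d_h = d_out.dot(self.W2.T) * self.h * (1 - self.h)
            self.W2 -= lr * self.h.T.dot(d_out)
            self.W1 -= lr * X.T.dot(d_h)

    def guess(self, x):
        # near 1 for a "positive" drawing, near 0 for a "negative" one
        return float(self.forward(x.reshape(1, -1))[0, 0])

# X: six flattened 25x25 sketches (3 positive + 3 negative), y: their labels
X = np.random.rand(6, 625)  # stand-in for real sketch pixels
y = np.array([[1.0], [1.0], [1.0], [0.0], [0.0], [0.0]])
nn = SimpleNN()
nn.train(X, y)
print(nn.guess(X[0]))
```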
Help wanted, please!
I am stuck on a bug and I can't find what I am doing wrong.
here is the gist https://gist.github.com/3af5cf10e59944648ee38d3628282324
It runs fine. But I have tried to move a small piece of code to a class, and then it crashes all the time. I have no idea what is wrong.
To see the bug happen, replace False by True on line 294:
```python
testBug = False  # set True to show the bug
```
With False I directly execute the code; with True I use the piece embedded in the SketchView class.
The problem seems to appear when I ask the SketchView instance to remember an image, at line 230:
```python
self.sImage = pil_image
```
Can anyone help me?
@jmv38, if Pythonista crashes, google for "dgelessus faulthandler" to get the ObjC exception. If it is not a Pythonista crash, what is the trace?
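For reference, the stdlib faulthandler alone already helps with hard crashes (segfaults and the like). A minimal sketch of enabling it at the top of the script, writing to a file since the console is lost when the whole app dies:

```python
import faulthandler

# keep a reference to the file object: faulthandler writes to it
# at crash time, long after this code has run
log = open('faultlog-temp.txt', 'w')
faulthandler.enable(file=log, all_threads=True)
```

As far as I know, this only catches fatal signals at the C level; the Pythonista-specific script adds the ObjC exception details on top of that.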
It is a Pythonista crash.
Note that the stored images are very small (25x25) and only 3 SketchView objects use them, so it cannot be a memory overflow.
I am probably doing something very wrong somewhere, but what?
What is puzzling is that the very same code, executed 200 times, works fine. I try to execute it only 3 times and reuse the result, but then it does not work...
Note that it works the first time. This code never crashes:
```python
def run(self):
    global X, y, pts, NN
    n = len(self.vars)
    count = self.count
    if count < n:
        if count == 3:
            exit()
```
The crash occurs randomly during one of the next calls, not always the same one. I must be writing to memory in the wrong place.
A crash with faulthandler enabled gives an empty faultlog-temp.txt... strange.
@jmv38 Not sure if that helps, but there is no crash with:
```python
def getImg(self):
    if self.sImage == None:
        pil_image = ui2pil(snapshot(self.subviews))
        _, _, _, pil_image = pil_image.split()
        pil_image = pil_image.resize((100, 100), PILImage.BILINEAR)
        pil_image = pil_image.resize((50, 50), PILImage.BILINEAR)
        pil_image = pil_image.resize((25, 25), PILImage.BILINEAR)
        return pil_image.copy()  # <------------------- early return added here
        self.sImage = pil_image
    return self.sImage.copy()
```
@cvp thank you, I feel less alone...
Your last proposal is not a solution to the problem, though: sImage is never updated, so the image is recomputed at each cycle, which is exactly what I want to avoid.
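Just to make the intent explicit, what I'm after is the classic lazy-cache pattern below (the same steps as the code above, condensed; the marked line is exactly the one that seems to trigger the crash):

```python
def getImg(self):
    # compute the 25x25 snapshot once, then reuse the cached copy
    if self.sImage is None:
        pil_image = ui2pil(snapshot(self.subviews))
        _, _, _, pil_image = pil_image.split()   # keep only the alpha channel
        for s in (100, 50, 25):                  # progressive downscale
            pil_image = pil_image.resize((s, s), PILImage.BILINEAR)
        self.sImage = pil_image                  # <-- the assignment that seems to trigger the crash
    return self.sImage.copy()                    # callers only ever get a copy
```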
I just tried some more changes (slowing down, predefining self.sImage), but nothing works.
It must be something stupid (a bad local name, messing with a global?). Or are there some memory bugs in Pythonista?
I think I must be degrading self.sImage, but how? I return a copy, not the image itself, and I don't modify the sketch during learning...
@jmv38 This shows, on my iPad mini 4, that the crash arrives after preparing 71/243:
```python
if testBug:
    pil_image = v.getImg()
    time.sleep(0.05)
```
Preparing 74/243... it is always 74, even with time.sleep(0.5).
@jmv38 Not sure if this modification destroys the algorithm:
- in updateLearninImage:
```python
BWlearningSetImage = temp  # .convert('RGB') removed
```
- in showTraining:
```python
global BWlearningSetImage
BWlearningSetImage = BWlearningSetImage.convert('RGB')
```
@cvp does it solve the bug?
@cvp no crash, you are right!
Incredible that you found that.
Any insight into what is going on there?
@cvp thank you so much for solving my problem!
I wish I understood what was wrong in my code, though...
@jmv38 I'm just happy to have been able to help.
Sincerely, I don't understand all your code, but I've tried to follow it step by step, skipping some processing until I found this "solution". I agree that it does not explain the problem.
Doing the conversion at the end is less work, I think, because the image is not converted at each iteration.
Perhaps it is a problem of CPU consumption.
Big performance improvement with prior normalization of image size and position.
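Roughly, the normalization does something like the sketch below: crop the drawing to its bounding box, scale it to a fixed size, and center it. This is a condensed reconstruction (the helper name and details are my assumptions), not the exact code in the gist.

```python
from PIL import Image as PILImage

def normalize(img, size=25):
    # crop to the bounding box of the drawn strokes
    bbox = img.getbbox()
    if bbox:
        img = img.crop(bbox)
    # scale the drawing to fit the target square, keeping its aspect ratio
    w, h = img.size
    scale = float(size) / max(w, h)
    img = img.resize((max(1, int(w * scale)), max(1, int(h * scale))),
                     PILImage.BILINEAR)
    # paste it centered on a blank canvas, so position no longer matters
    canvas = PILImage.new('L', (size, size), 0)
    w, h = img.size
    canvas.paste(img, ((size - w) // 2, (size - h) // 2))
    return canvas
```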
Huge update! Now you can inspect the states and weights of the internal layers and get a feel for how the network decides!
Also, I added a copyright notice because this is a lot of work. @mkeywood, if you are uncomfortable with this, let me know.
Here is a video showing the code in action
300 views and not a word...???
Hey guys (and gals), if you like the code I share, some encouragement is always welcome!
v14: colors modified to better show the network computation; it is now much easier to follow.
Can you explain what is happening in the bottom set of plots?
The bottom plot shows:
- during training: the training set, where the color of each element corresponds to the recognition result for that element;
- during guessing (what you can see above): the states of input layer0, then weights 0>1, then the states of layer1, then weights 1>2, then the states of layer2, then weights 2>3, then the output layer3 = 3 neurons.
First I display the weights in black and white (white = positive, black = negative), then I display, in color and in sequence, the signal propagation through the network (see the sketch after this list):
- the neuron states: green means active (1) and red means inactive (0);
- the neuron × weight values: green means a positive influence on the neuron (excitation) and red a negative influence (damping); the brighter, the stronger. I'll call this wxn, for weights x neuron.
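In code terms, the quantities being colored are roughly the ones below. The layer sizes here are placeholders, not the real ones in the script:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# placeholder sizes for a 4-layer net: input -> layer1 -> layer2 -> 3 outputs
sizes = [625, 16, 4, 3]
Ws = [np.random.randn(a, b) * 0.1 for a, b in zip(sizes, sizes[1:])]

def forward(x):
    states = [x]                                   # layer0 = the input sketch
    for W in Ws:
        states.append(sigmoid(states[-1].dot(W)))  # layer1, layer2, layer3
    return states

def wxn(states, layer):
    # one value per weight: weight * state of its input neuron;
    # drawn green when positive (excitation), red when negative (damping)
    return Ws[layer] * states[layer][:, None]
```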
The colors of the 3 neurons of layer3 (red, red, green) are the same under the sketches: they correspond to the same data.
The group of 3 blocks to the left of these neurons (weights 2>3) shows the input weights of each neuron × the input of the previous layer.
Let's analyse this picture.
Here you can see that a single wxn is mainly responsible for the 3 neuron states: it's the weight (1/2, 0/2) ((0,0) is the top left of each block), excited by neuron (1/2, 0/2) (green). The other neurons are deactivated (red), so their wxn are insignificant (black). This neuron is damping the first 2 neurons (red wxn) and exciting the last one (green wxn).
Now let's see why neuron (1/2, 0/2) of layer2 is excited. Look in wxn 1>2 at block (1/2, 0/2). Several wxn are green; they are responsible for the response. There is no opposite response (red).
Let's look at the strongest response, wxn (2/4, 3/4). The corresponding neuron of the previous layer is green too (active). Look at the corresponding wxn 0>1: you can see that the top left part of the 'o' drawing is green = detected.
So we can say the 'o' has been detected because of its top left part, which is not present in the 2 other drawings. That makes sense.
And the 2 other choices have been rejected for the same reason (though that might not always be the case).
I hope this explanation is what you were expecting.