I don't know if anyone's interested, but I took a combination of the Sketch example, along with some other examples I've seen online, and came up with the following.
It's a simple neural network class: prepare the data, train it, and it can hopefully guess what you draw to test it with.
- Draw three positive images, all the same, e.g. three smiley faces.
- Then draw three negative images, all the same, e.g. three sad faces.
- Then press the Train button.
- Now it's ready: draw either the positive or the negative image, and see if it gets it right 😀
I don't claim this is the best code ever, or even an efficient algorithm, and it could be tuned much better, but I thought I'd share it anyway 😀
@mkeywood Nice code. I've tried it, not always successfully, but I like it...
@mkeywood thank you! Incredible that it is so simple to make a NN. You demystified it.
How can you add layers?
Can you make CNN too?
Currently it’s quite simple and so not massively extensible for things like a CNN, although that is the area I’m looking at now 😄
As for additional layers, we could add them relatively easily. Basically it’s all geared around the variables W1 and W2, which are the weights between the input and hidden layers, and the hidden and output layers respectively.
Extending init, forward, backward etc. should be easy enough.
Something else I can look at 😄
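For anyone following along, the W1/W2 structure described above can be sketched roughly like this. This is a hypothetical minimal version, not the original code: the class name, sizes, learning rate and sigmoid activation are all assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyNN:
    """Hypothetical two-weight-matrix network: W1 input->hidden, W2 hidden->output."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.2):
        rng = np.random.default_rng(0)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))   # input -> hidden weights
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))  # hidden -> output weights
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)        # hidden activations
        self.o = sigmoid(self.h @ self.W2)   # output activations
        return self.o

    def backward(self, x, target):
        # Plain gradient descent on squared error, sigmoid derivative = a*(1-a)
        o_err = (target - self.o) * self.o * (1 - self.o)
        h_err = (o_err @ self.W2.T) * self.h * (1 - self.h)
        self.W2 += self.lr * np.outer(self.h, o_err)
        self.W1 += self.lr * np.outer(x, h_err)

# Toy usage: teach it to separate two 4-pixel "drawings"
net = TinyNN(4, 8, 1)
x_pos = np.array([1.0, 1.0, 0.0, 0.0])
x_neg = np.array([0.0, 0.0, 1.0, 1.0])
for _ in range(500):
    net.forward(x_pos); net.backward(x_pos, np.array([1.0]))
    net.forward(x_neg); net.backward(x_neg, np.array([0.0]))
```

Adding another hidden layer then just means another weight matrix (say W3) plus one more forward/backward step using the same delta rule.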
@cvp, thanks 😄 My daughter played with lots of different sketches and found some more reliable than others.
@mkeywood I have tried to add some robustness: I shift the drawings into 9 positions for the training. I had to decrease the learning rate by a factor of 10. Is that OK? What do you think?
I'm not sure the performance is better.
Note: I also changed the layout because I work in landscape.
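The "9 positions" idea above can be sketched like this: shift the drawing by at most one pixel in each direction, giving 9 copies per training image. This is an assumed implementation using `np.roll`, not the poster's actual code (and rolling wraps pixels around the edges, which a real version might clamp instead).

```python
import numpy as np

def shifted_copies(img):
    """Return the 9 versions of a 2D image shifted by -1, 0 or +1 pixels
    in each axis. Uses np.roll, so edge pixels wrap around."""
    return [np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

img = np.zeros((4, 4))
img[1, 1] = 1.0
copies = shifted_copies(img)
```

With 9× as many training samples per drawing, scaling the learning rate down (as described above) is a reasonable way to keep the total update size comparable.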
I have changed the learning rate to 0.02 and increased the number of training epochs to 200: I am getting good results now.
@mkeywood really having fun with your code, thanks so much for sharing!
Now I have made some more changes: using 2 neurons in the output, to get independent estimates and a reliability measure (and also to see the result with a 3rd template that is neither 1 nor 2).
I have also added a small rotation in the training, and tweaked the training parameters a little.
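The small-rotation augmentation could look something like this sketch: a nearest-neighbour rotation of a square 2D image about its centre. The angle range and interpolation are assumptions for illustration, not the code from the post.

```python
import numpy as np

def rotate_image(img, degrees):
    """Rotate a 2D array about its centre using nearest-neighbour sampling.
    Pixels that map outside the source image are left at zero."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(degrees)
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            # Inverse-map each output pixel back into the source image
            sy = cy + (y - cy) * np.cos(a) - (x - cx) * np.sin(a)
            sx = cx + (y - cy) * np.sin(a) + (x - cx) * np.cos(a)
            sy, sx = int(round(sy)), int(round(sx))
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = img[sy, sx]
    return out

img = np.zeros((3, 3))
img[0, 1] = 1.0          # single pixel at the top-middle
same = rotate_image(img, 0)
quarter = rotate_image(img, 90)
```

In training you would generate a few copies per drawing at small random angles (say ±10°) and feed them all in, the same way as the shifted copies.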
Update: I have made a Layer class to more easily add layers. I now have 4 layers. It seems to work, but I have not checked whether the underlying maths are correct; I've assumed your formula to backpropagate the error is recursive.
With this implementation you can use as many layers as you want, with as many neurons as you want inside each.
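The Layer idea can be sketched like this: each layer owns its weight matrix and applies the same backpropagation rule, passing the error recursively to the layer before it. This is a hypothetical reconstruction with illustrative names, not the poster's actual code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Layer:
    def __init__(self, n_in, n_out):
        rng = np.random.default_rng(0)
        self.W = rng.normal(0, 0.1, (n_in, n_out))

    def forward(self, x):
        self.x = x
        self.out = sigmoid(x @ self.W)
        return self.out

    def backward(self, err, lr):
        # Delta for this layer; return the error for the previous layer,
        # so the same rule applies recursively down the stack.
        delta = err * self.out * (1 - self.out)
        prev_err = delta @ self.W.T
        self.W += lr * np.outer(self.x, delta)
        return prev_err

class MLP:
    """Stack any number of layers: sizes = [n_in, hidden1, ..., n_out]."""

    def __init__(self, sizes, lr=0.5):
        self.layers = [Layer(a, b) for a, b in zip(sizes, sizes[1:])]
        self.lr = lr

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

    def backward(self, target):
        err = target - self.layers[-1].out
        for layer in reversed(self.layers):
            err = layer.backward(err, self.lr)

# Toy usage with two hidden layers
net = MLP([4, 8, 6, 1])
x_pos = np.array([1.0, 0.0, 1.0, 0.0])
x_neg = np.array([0.0, 1.0, 0.0, 1.0])
for _ in range(3000):
    net.forward(x_pos); net.backward(np.array([1.0]))
    net.forward(x_neg); net.backward(np.array([0.0]))
```

Because each `backward` returns the error for the layer before it, the depth of the network is just the length of the `sizes` list.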
@jmv38 this is awesome!!
Glad it was helpful, but you've taken it to another level completely 😀
Those are some really great additions. Really amazing. I look forward to seeing what you do next 😀