I don't know if anyone's interested, but I took a combination of the Sketch example, along with some other examples I've seen online, and came up with the following.
It's a simple Neural Network class: you prepare the data, train it, and it can hopefully guess what you draw to test it with.
- Draw three positive images, all the same, e.g. three smiley faces.
- Then draw three negative images, all the same, e.g. three sad faces.
- Then press the Train button.
- OK, so now it's ready. Draw a copy of either the positive or the negative image, and see if it gets it right 😀
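Under the hood it's just a tiny feed-forward network trained with gradient descent. For anyone curious, here's a minimal standalone sketch of the idea (not the actual code from the gist; all the names, sizes and numbers below are made up for illustration):

```python
import numpy as np

class SimpleNN:
    """Tiny one-hidden-layer network trained with plain gradient descent."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, 1))

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, X):
        self.h = self._sigmoid(X @ self.w1)        # hidden activations
        self.out = self._sigmoid(self.h @ self.w2)
        return self.out

    def train(self, X, y, epochs=2000, lr=0.5):
        for _ in range(epochs):
            out = self.forward(X)
            # backpropagate the squared error through the sigmoids
            d_out = (y - out) * out * (1.0 - out)
            d_hid = (d_out @ self.w2.T) * self.h * (1.0 - self.h)
            self.w2 += lr * self.h.T @ d_out
            self.w1 += lr * X.T @ d_hid

# toy "drawings": flattened patterns, 2 positive and 2 negative examples
X = np.array([[1, 0, 0, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 1, 1]], dtype=float)
y = np.array([[1], [1], [0], [0]], dtype=float)

nn = SimpleNN(4, 8)
nn.train(X, y)
pred = nn.forward(X)   # near 1 for positives, near 0 for negatives
```

The real version does the same thing on flattened pixel arrays of your sketches.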
I don't claim this is the best code ever, or even an efficient algorithm (it could be tuned much better), but I thought I'd share it anyway 😀
@cvp, thanks 😄 My daughter played with lots of different sketches and found some more reliable than others.
@mkeywood I have tried to add some robustness: I shift the drawings into 9 positions for the training. I had to decrease the learning rate by a factor of 10. Is that OK? What do you think?
Not sure the performance is better.
Note: I also changed the layout because I work in landscape.
I have changed the learning rate to 0.02 and increased the number of training epochs to 200: I am getting good results now.
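To be concrete, the 9-position shift can be done with `np.roll`, something like this (a sketch, assuming the drawings are square numpy arrays):

```python
import numpy as np

def shifted_copies(img):
    """Return the image shifted by -1/0/+1 pixels vertically and horizontally."""
    return [np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

img = np.zeros((5, 5))
img[2, 2] = 1.0                  # a single "ink" pixel in the centre
copies = shifted_copies(img)     # 9 training variants of the same drawing
```

Each drawing then contributes 9 training samples instead of 1.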
@mkeywood really having fun with your code, thanks so much for sharing!
Now I have made some more changes: using 2 neurons in the output, to get independent estimates and a reliability measure (and also to see the result with a 3rd template that is neither 1 nor 2).
I have also added a small rotation in the training, and tweaked the training parameters a little.
Update: I have made a Layer class to more easily add layers. I now have 4 layers. It seems to work, but I have not checked whether the underlying maths is correct; I've assumed your formula to backpropagate the error is recursive.
With this implementation you can use as many layers as you want, with as many neurons as you want inside.
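To give an idea of what I mean by a Layer class with a recursive backpropagation formula, here is a minimal standalone sketch (not my actual code; the names, sizes and the squared-error loss are my own assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Layer:
    """One fully-connected sigmoid layer."""

    def __init__(self, n_in, n_out, rng):
        self.w = rng.normal(0.0, 0.5, (n_in, n_out))

    def forward(self, x):
        self.x = x                       # remember the input for backward
        self.a = sigmoid(x @ self.w)
        return self.a

    def backward(self, delta, lr):
        # delta is the error arriving from the layer above; push it down
        # recursively and update this layer's weights on the way
        grad = delta * self.a * (1.0 - self.a)
        delta_below = grad @ self.w.T
        self.w += lr * self.x.T @ grad
        return delta_below

class Network:
    """A stack of Layers: any depth, any widths."""

    def __init__(self, sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.layers = [Layer(a, b, rng) for a, b in zip(sizes, sizes[1:])]

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

    def train_step(self, x, y, lr=0.5):
        delta = y - self.forward(x)      # squared-error signal at the top
        for layer in reversed(self.layers):
            delta = layer.backward(delta, lr)

# as many layers/neurons as you want: just change the size list
net = Network([4, 6, 6, 2])
X = np.array([[1., 0., 0., 1.],
              [0., 1., 1., 0.]])
Y = np.array([[1., 0.],
              [0., 1.]])
for _ in range(2000):
    net.train_step(X, Y)
out = net.forward(X)
```

The key point is that `backward` returns the error signal for the layer below it, so the same formula applies at every depth.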
@jmv38 this is awesome!!
Glad it was helpful, but you've taken it to another level completely 😀
That's some really great additions. Really amazing. I look forward to see what you do next 😀
@mkeywood here is an update:
I have made 3 inputs, many duplicates, and some live feedback during learning. It works reasonably well sometimes.
Another one: now I display the learning set while creating it.
v08: added a white image and a random image that should both return 0, to improve robustness.
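Concretely, the extra "should return 0" samples can be generated like this (a sketch, assuming 25x25 grayscale arrays):

```python
import numpy as np

def negative_samples(shape=(25, 25), rng=None):
    """Extra inputs whose target output is 0, to make training more robust."""
    rng = rng or np.random.default_rng(0)
    white = np.zeros(shape)         # a blank canvas, no ink at all
    noise = rng.random(shape)       # a random scribble-like image
    return white, noise

white, noise = negative_samples()
```

Adding these to the training set teaches the network to reject inputs that look like neither template.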
@jmv38 Hi, interesting project! I'd like to ask you two things about it:
- Could you easily modify your script so that it also runs on small screens (4-inch)? Does anyone here (Pythonista forum) know a general way to easily adapt a script with a UI to the screen size of the device it runs on, through automatic detection of that size?
- Just for fun, if you are interested: how about a script (following your original one) that tries to learn to play tic-tac-toe? For example, start with random moves and give positive weight to the winning player's set of moves over several matches, so as to build up a set of moves (getting closer to the best ones) for each situation. What would you suggest?
Thank you and feel free to share some reasoning about it.
Note that mkeywood made the original program and UI layout. I just made a set of small changes each time, which led me here.
For your questions:
1/ The UI part is at the bottom of the script. You can change the numbers and the layout to match your screen definition. That is some work though (an hour or less).
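One general trick (just a sketch, not tested against the actual script): design the layout for one reference size and scale every frame. In Pythonista you would read the device size with `ui.get_screen_size()`; here I just pass it in:

```python
def scale_frame(frame, design_size, screen_size):
    """Scale an (x, y, w, h) frame laid out for design_size onto screen_size."""
    sx = screen_size[0] / design_size[0]
    sy = screen_size[1] / design_size[1]
    x, y, w, h = frame
    return (x * sx, y * sy, w * sx, h * sy)

# a button designed for a 1024x768 layout, shown on a 4-inch (568x320) screen
btn = scale_frame((512, 384, 100, 50), (1024, 768), (568, 320))
```

Applying this to every view's frame at startup gets you most of the way; fonts and minimum touch sizes may still need manual tweaks.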
2/ That would take quite some thinking. For the moment I am just doing simple things to learn Python by tweaking mkeywood's program, so it is beyond me.
Anyway, thank you for the answer. Some time ago I started, for fun, to study a bit of ML, and the first test example I had in mind was an algorithm able to learn to play a simple game like tic-tac-toe, without using any Python ML library.
The interesting thing, in my opinion, is how to create a general algorithm able to learn something without any big Python libraries, purely as a proof of concept, with some small user-defined constraints guiding the search for the ML goal(s). The constraints could change in the algorithm when certain situations occur during the calculation. So thank you again both for your work; maybe it will give me some technical ideas for the ML game solver I have in mind.
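To make the idea concrete, the "reward the winner's moves" scheme I have in mind would look roughly like this, with no libraries at all (only a proof-of-concept sketch; every name and parameter here is my own invention):

```python
import random

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_game(value, rng, eps=0.3):
    """Self-play one game; mostly greedy on learned values, sometimes random."""
    board = [' '] * 9
    history = {'X': [], 'O': []}
    player = 'X'
    while True:
        moves = [i for i, c in enumerate(board) if c == ' ']
        if not moves:
            return history, None                      # draw
        state = ''.join(board)
        if rng.random() < eps:
            move = rng.choice(moves)                  # explore
        else:
            move = max(moves, key=lambda m: value.get((state, m), 0.0))
        history[player].append((state, move))
        board[move] = player
        if winner(board):
            return history, player
        player = 'O' if player == 'X' else 'X'

def train(n_games=2000, seed=1):
    rng = random.Random(seed)
    value = {}
    for _ in range(n_games):
        history, win = play_game(value, rng)
        if win:                                       # reward winner, punish loser
            lose = 'O' if win == 'X' else 'X'
            for key in history[win]:
                value[key] = value.get(key, 0.0) + 1.0
            for key in history[lose]:
                value[key] = value.get(key, 0.0) - 1.0
    return value

table = train()   # learned (state, move) -> weight table
```

The exploration rate, the +1/-1 rewards, and the number of games are exactly the kind of constraints that could be adjusted while the calculation runs.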
1/ cleaned up some code
2/ live color feedback during training on samples
v10: 1/ added a [learn more] button to ... learn more.
@jmv38, looks like some great updates. I look forward to looking over them this weekend :)
Help wanted, please!
I am stuck on a bug and I can't find what I am doing wrong.
here is the gist https://gist.github.com/3af5cf10e59944648ee38d3628282324
It runs fine. But I have tried to move a small piece of code into a class, and then it crashes every time. No idea what is wrong.
To see the bug happen, replace False with True in this line 294:
testBug = False # set True to show the bug
With False I run the code directly; with True I use the version embedded in the SketchView class.
The problem seems to appear when I ask the SketchView instance to remember an image, at line 230:
self.sImage = pil_image
Can anyone help me?
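For reference, what line 230 boils down to is roughly this; one thing I suspect (just a guess, names below are illustrative) is that I should store a detached copy of the pixel data instead of a reference to the original object:

```python
from PIL import Image

# a small test image like the 25x25 sketches in the script
pil_image = Image.new('L', (25, 25), color=255)

# Instead of keeping a reference to the original object (which may be
# backed by a buffer that the drawing context releases later), keep a
# detached copy of the pixel data:
stored = pil_image.copy()                        # independent PIL image
raw = pil_image.tobytes()                        # or fully detached raw bytes
restored = Image.frombytes('L', (25, 25), raw)
```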
@jmv38, if Pythonista crashes, google for "dgelessus faulthandler" to get the ObjC exception. If it is not a Pythonista crash, what is the trace?
It is a Pythonista crash.
Note that the stored images are very small (25x25) and only 3 SketchView objects use them, so it cannot be a memory overflow.
I am probably doing something very wrong somewhere, but what?
What is puzzling is that the very same code, executed 200 times, works fine. When I try to execute it only 3 times and re-use the result, it does not work...
Note that it works the first time. This code never crashes:

```python
def run(self):
    global X, y, pts, NN
    n = len(self.vars)
    count = self.count
    if count < n:
        if count == 3:
            exit()
```
The crash occurs randomly during one of the subsequent calls, not always the same one. I must be writing to memory in the wrong place.
The crash, with faulthandler, gives an empty faultlog-temp.txt... strange.