omz:forum



    Machine Learning

    Pythonista
neural network, machine learning
    • jmv38
      jmv38 last edited by jmv38

It is a Pythonista crash.

      • jmv38
        jmv38 last edited by jmv38

Note that the stored images are very small (25x25) and only 3 SketchView objects are using them, so it cannot be a memory overflow.
I am probably doing something very wrong somewhere, but what?
What is puzzling is that the very same code, executed 200 times, works fine. I try to execute it only 3 times and re-use the result, but then it does not work...
Note that it works the first time. This code never crashes:

      def run(self):
          global X, y, pts, NN
          n = len(self.vars)
          count = self.count
          if count < n:
              if count == 3:
                  exit()  # debug: stop after the 3rd execution


The crash occurs randomly during one of the next calls, not always the same one. I must be writing to memory in the wrong place.

        • cvp
          cvp last edited by

A crash with faulthandler enabled gives an empty faultlog-temp.txt... strange.
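
For reference, a minimal sketch of enabling the standard-library faulthandler by hand; the file name mirrors the one above, but how Pythonista wires up its own fault log is an assumption here:

    import faulthandler

    # Keep the file object alive: faulthandler writes to its raw fd on a crash.
    log = open('faultlog-temp.txt', 'w')
    faulthandler.enable(file=log, all_threads=True)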

          • cvp
            cvp @jmv38 last edited by

@jmv38 Not sure if that helps, but no crash with:

      def getImg(self):
          if self.sImage is None:
              pil_image = ui2pil(snapshot(self.subviews[0]))
              _, _, _, pil_image = pil_image.split()  # keep only the alpha channel
              pil_image = pil_image.resize((100, 100), PILImage.BILINEAR)
              pil_image = pil_image.resize((50, 50), PILImage.BILINEAR)
              pil_image = pil_image.resize((25, 25), PILImage.BILINEAR)
              return pil_image.copy()  # <-- added early return: the cache below is never set
              self.sImage = pil_image
          return self.sImage.copy()
            
            • jmv38
              jmv38 @cvp last edited by jmv38

@cvp thank you, I feel less alone...
Your last proposal is not a solution to the problem: sImage is never updated, so the image is recomputed on each cycle, which is what I want to avoid.
I just tried some more changes (slowing down, predefining self.sImage), but nothing works.
It must be something stupid (a bad local name, messing with a global?). Or are there some memory bugs in Pythonista?
I think I must be corrupting self.sImage, but how? I return a copy, not the image itself, and I don't modify sketch[] during learning...
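
As a side note, the copy reasoning does hold in isolation: PIL's Image.copy() returns an independent image, so mutating the handed-out copy should not touch a cached original. A minimal check (illustrative, not from the thread):

    from PIL import Image

    cached = Image.new('L', (25, 25), 128)
    handed_out = cached.copy()
    handed_out.putpixel((0, 0), 255)  # modify only the copy

    assert cached.getpixel((0, 0)) == 128   # original is unchanged
    assert handed_out.getpixel((0, 0)) == 255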

              • cvp
                cvp @jmv38 last edited by cvp

@jmv38 this shows, on my iPad mini 4, that the crash arrives after preparing 71/243.

                    if testBug:
                      pil_image = v.getImg()
                      time.sleep(0.05)
                

Preparing 74/243... always 74, even with time.sleep(0.5).

                • cvp
                  cvp @jmv38 last edited by

@jmv38 Not sure if this modification destroys the algorithm:

1. In updateLearninImage:

    BWlearningSetImage = temp  #.convert('RGB')

2. In showTraining:

    global BWlearningSetImage
    BWlearningSetImage = BWlearningSetImage.convert('RGB')
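
In other words (a sketch with hypothetical surrounding code, reconstructed from the two snippets above): keep the image in its original mode while training, and convert to RGB only once, when it is displayed:

    def updateLearninImage(temp):
        global BWlearningSetImage
        BWlearningSetImage = temp  # was: temp.convert('RGB') on every call

    def showTraining():
        global BWlearningSetImage
        # single conversion, just before display
        BWlearningSetImage = BWlearningSetImage.convert('RGB')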
                  
                  • jmv38
                    jmv38 @cvp last edited by

@cvp Does it solve the bug?

                    • jmv38
                      jmv38 @cvp last edited by

@cvp No crash, you are right!
Incredible that you found that.
Any insight into what is going on there?

                      • jmv38
                        jmv38 @cvp last edited by

@cvp Thank you so much for solving my problem!
I wish I understood what was wrong in my code, though...

                        • cvp
                          cvp @jmv38 last edited by cvp

                          @jmv38 I'm just happy to have been able to help.
Sincerely, I don't understand all your code, but I've tried to follow it step by step, skipping some processing, until I found this "solution". I agree that it does not explain the problem.

Doing the conversion at the end is less work, I think, because the image is not converted at each iteration. Perhaps it's a problem of CPU consumption.

                          • jmv38
                            jmv38 last edited by

v12: https://gist.github.com/8fa3ac1516ef159284f3090ba9494390
Big perf improvement with prior normalization of image size and position.
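
A sketch of the kind of normalization described (an assumed reconstruction, not the gist's actual code): crop the drawing to its bounding box, scale it, and paste it centered on a fixed-size canvas, so every sample reaches the network with the same size and position:

    from PIL import Image

    def normalize(img, size=25):
        bbox = img.getbbox()  # bounding box of the non-zero pixels
        if bbox is None:      # blank drawing: return an empty canvas
            return Image.new('L', (size, size), 0)
        glyph = img.crop(bbox)
        # scale the larger dimension to the target size, keeping aspect ratio
        scale = size / max(glyph.size)
        glyph = glyph.resize((max(1, int(glyph.width * scale)),
                              max(1, int(glyph.height * scale))),
                             Image.BILINEAR)
        canvas = Image.new('L', (size, size), 0)
        canvas.paste(glyph, ((size - glyph.width) // 2,
                             (size - glyph.height) // 2))
        return canvas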

                            • jmv38
                              jmv38 last edited by jmv38

Huge update! Now you can inspect the states and weights of the internal layers and get a feeling for how the network decides!
https://gist.github.com/ef4439a8ca76d54a724a297680f98ede

Also, I added a copyright notice because this is a lot of work. @mkeywood if you are uncomfortable with this, let me know.

Here is a video showing the code in action:
https://youtu.be/yBR80KwYtcE

                              • jmv38
                                jmv38 last edited by jmv38

[screenshot]

                                • jmv38
                                  jmv38 last edited by

300 views and not a word...???
Hey guys (and gals), if you like the code I share, some encouragement is always welcome!
                                  Thanks.

                                  • jmv38
                                    jmv38 last edited by jmv38

v14: colors modified to make the network's computation easier to understand; it is much clearer now.
https://gist.github.com/94a8d1474a6ef6e49972518baa730f1b

                                    • JonB
                                      JonB last edited by

                                      Can you explain what is happening in the bottom set of plots?

                                      • jmv38
                                        jmv38 @JonB last edited by jmv38

@JonB hello.
The bottom plot shows:

During training: the training set, where the color of each element corresponds to the recognition result for that element.

During guessing (what you can see above):
the states of input layer0, then weights 0>1, then states of layer1, then weights 1>2, then states of layer2, then weights 2>3, then the output layer3 = 3 neurons.

First I display the weights in black and white (white = positive, black = negative), then I display, in color and in sequence, the signal propagation through the network:

• the neuron states: green means active (1) and red means inactive (0).
• the neuron x weight values: green means a positive influence on the neuron (excitation) and red a negative influence (damping). The brighter, the stronger. I'll call this wxn, for weights x neuron.

The colors of the 3 neurons of layer3 (red, red, green) are the same under the sketches: they correspond to the same data.
The group of 3 blocks to the left of these neurons (weights 2>3) are the input weights of each neuron multiplied by the input of the previous layer.
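
A sketch of the color coding just described (my reconstruction of the scheme, not jmv38's code): wxn = weight x source-neuron activation, with positive values drawn green, negative drawn red, and brightness scaling with magnitude:

    def wxn_color(weight, activation, max_abs=1.0):
        v = weight * activation
        brightness = min(255, int(255 * abs(v) / max_abs))
        if v >= 0:
            return (0, brightness, 0)  # green: excitation
        return (brightness, 0, 0)      # red: damping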

Let's analyse this picture:

Here you can see that a single wxn is mainly responsible for the 3 neuron states: it's the weight at (1/2, 0/2), where (0,0) is the top left of each block, excited by neuron (1/2, 0/2) (green). The other neurons are deactivated (red), so their wxn are insignificant (black). This neuron is damping the first 2 neurons (red wxn) and exciting the last one (green wxn).

Now let's see why neuron (1/2, 0/2) of layer2 is excited. Look in wxn 1>2 at the block (1/2, 0/2). Several wxn are green; they are responsible for the response. There is no opposing response (red).

Let's look at the strongest response, wxn (2/4, 3/4). The corresponding neuron in the previous layer is green too (active). Look at the corresponding wxn 0>1: you can see that the top-left part of the 'o' drawing is green = detected.

So we can say the 'o' has been detected because of its top-left part, which is not present in the 2 other drawings. That makes sense.
And the 2 other choices have been rejected for the same reason (though that might not always be the case).

I hope this explanation is what you were expecting.

                                        • jmv38
                                          jmv38 last edited by

Here is a video with the same explanation:
                                          https://youtu.be/L-6ZwbXjG48

                                          • ccc
                                            ccc last edited by

On the global thing... the reason that works is that it forces del to be called on older images that are no longer being used. Elsewhere in this forum you can find discussion of how the garbage collector is not fast enough at deleting images when cycling through lots of images. You can also fix this issue by calling del yourself, but I like the elegance of the global solution.
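
A sketch of the two fixes described above (process() is a hypothetical stand-in for the per-image work): in CPython, rebinding the single global drops the old image's last reference immediately, and an explicit del does the same for a local, without waiting for the garbage collector:

    current = None

    def next_image_global(new_img):
        global current
        current = new_img  # the previous image loses its last reference here

    def consume_images(imgs):
        for img in imgs:
            process(img)   # hypothetical per-image work
            del img        # release the reference right away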
