    Gestures for Scene

    Pythonista
    • dcl
      dcl last edited by

      Hello all,

      About a year ago I started looking for a gesture-recognition solution for Pythonista's Scene module. At the time I found the wonderful Gestures module for UI views by @mikael, but I was unable to use it in a Scene-based game, and I couldn't find anything else that fit the need, so I began crafting my own bit of code: inspired by Gestures, but built for the Scene module using only Python and the scene.touches interface (not hooking back into the ObjC code that the Gestures module uses).

      Some of the history of this project can be found at this forum post:

      https://forum.omz-software.com/topic/4624/help-gestures-for-scene-pythonista-debugging

      But I am pleased to say that, while the code hasn't progressed much since that post, I have now broken it into individual files (instead of the monolithic code blob posted previously) and put it on GitHub here:

      https://github.com/dlazenby/PythonistaSceneGestures

      Because this continues to be developed completely in my free time, and is fairly low on my priority list at the moment, I am providing this on GitHub in hopes that someone with more free time will take an interest and help me bring this project to completion for others to freely use.

      The current status of the project:

      Running gesture_handler.py gives the user a Scene in which individual touches are visualized on-screen and data about the "gesture" (the grouping and movement of the current touches) is shown at the top of the screen.

      Gesture data available (a hypothetical sketch of this as a single data object follows the list):
      State (of the recognizer internal state machine)
      Number of Touches present
      Duration of the "gesture" (seconds)
      Translation of the "gesture" (pixels x, pixels y)
      Rotation of the "gesture" (degrees)
      Scale of the "gesture" (multiplier 1.0 = 100%)
      Result of the Recognizer (was the "gesture" recognized, and if so what type)
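
      For illustration, here is a minimal, hypothetical sketch of how that per-gesture data could be bundled into one object; the class and field names below are made up, not the names used in the repository:

      from dataclasses import dataclass

      @dataclass
      class GestureData:
          # Illustrative container mirroring the fields listed above
          state: str           # recognizer's internal state machine state
          num_touches: int     # number of touches currently on screen
          duration: float      # seconds since the gesture started
          translation: tuple   # (pixels x, pixels y)
          rotation: float      # degrees
          scale: float         # multiplier, 1.0 = 100%
          result: str          # recognized gesture type, if any

      # Example instance for a two-finger pinch that just ended
      sample = GestureData('ENDED', 2, 0.4, (12.0, -3.5), 15.0, 1.25, 'pinch')
      print(sample)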

      There is also logging capability built in. To enable it, uncomment the code block near the top of the gesture_handler.py file. The logger dumps quite a bit of data (at least one message per update loop).

      My intention for the project is to:
      (1) Finalize the functionality, making it similar to the Gestures module: you create an object with a user-defined callback and the type of gesture to recognize, place its hook functions inside the Scene's touch_began(), touch_moved(), and touch_ended(), and the object analyzes the touches and calls the callback whenever a gesture is recognized (a hypothetical sketch of this usage follows below).
      (2) Possibly move some of the recognizer's functionality onto another thread to improve performance (I have very little experience with multi-threaded programs in Python / Pythonista).
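
      A minimal sketch of the kind of usage I have in mind for (1); SceneGestureRecognizer below is a hypothetical stand-in stub, not the code currently in the repository:

      from scene import Scene, run

      class SceneGestureRecognizer:
          # Hypothetical stand-in for the planned recognizer object
          def __init__(self, on_swipe=None):
              self.on_swipe = on_swipe   # user-defined callback set at creation
              self.active = {}           # touch_id -> latest touch

          def touch_began(self, touch):
              self.active[touch.touch_id] = touch

          def touch_moved(self, touch):
              self.active[touch.touch_id] = touch
              # real translation / rotation / scale analysis would happen here

          def touch_ended(self, touch):
              self.active.pop(touch.touch_id, None)
              # stub: treat any completed interaction as a recognized swipe
              if self.on_swipe and not self.active:
                  self.on_swipe()

      class MyScene(Scene):
          def setup(self):
              self.recognizer = SceneGestureRecognizer(on_swipe=self.handle_swipe)

          def handle_swipe(self):
              print('swipe recognized')

          # Hook functions forwarding every Scene touch event to the recognizer
          def touch_began(self, touch):
              self.recognizer.touch_began(touch)

          def touch_moved(self, touch):
              self.recognizer.touch_moved(touch)

          def touch_ended(self, touch):
              self.recognizer.touch_ended(touch)

      run(MyScene())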

      Currently, I am happy with the math that analyzes the touches, but the schema for recognizing the "gesture" from that data is not reliable. If someone would like to help troubleshoot what I have, or to develop a new / better schema for turning the data gathered about a grouping of touches into a recognized gesture, that would be great!

      • mikael
        mikael @dcl last edited by

        @dcl, if I remember correctly, the biggest pain with scene and gestures is that touches are only available on the Scene level, and it is a challenge to "attach" recognizers to individual Nodes in the Scene. Is that right? If yes, is your ambition to do the mapping of touches to Nodes and enable that level of granularity?

        • dcl
          dcl last edited by

          The touches are only available at the Scene level (technically the top "Node"), as far as I understand it, yes.
          I am not sure there is currently a way to "attach" recognizers to a given Node, at least not one I could find. And the reason that your Gestures library is incompatible is that the objC calls that it is based on are dependent on a UI object to attach them to, of which there are natively none in a Scene application.
          I thought about wrapping an entire scene application in a ui.sceneview object, but decided that I really didn't want to do that, so I wrote this bit of code to streamline the analysis of the touches in the scene. My intention / ambition is not so much to attach recognizers to an individual Node (although that could be useful, I suppose) as it is to categorize the touches and their interaction with the scene into recognizable gestures, i.e. a consistent set of metadata about the entire interaction, and to act on that accordingly. This is in contrast to the process today, which amounts to writing custom code to track when and where each touch began, what the touches did while they were on screen, and when and where they ended, and only then deciding whether that interaction is something my scene should pay attention to and coding the response accordingly.

          A more summarized goal would be "to write code that is reusable between Scene applications, takes touches as input, and gives the user an easy way to react to the entire touch interaction as a single entity".
          This could be accomplished (as I have started) with a system of gesture recognizers that call a callback function when a gesture is recognized, or by handing the user an object containing the metadata (start location, duration, etc.) and letting them handle it themselves.

          This may be born out of my ignorance of the best way to handle touch interactions given the current Scene API, but I personally would like to be able to write functions for my Scene application that handle certain gestures (call Func_A() if the user swipes across the screen, Func_B() if the user pinches in/out, Func_C() if the user taps, etc.) and let some other bit of code tell me when a gesture occurred, instead of writing a massive if...then block inside the touch_began(), touch_moved(), and touch_ended() functions to test whether the touches did something I am interested in (a rough sketch of that dispatch style follows below). Notably, my Scene application is a bit more complex than the included alien-platformer, match-3, or bricks examples, where the user interaction is fairly simple and well defined: the nodes in my scene react differently to different interactions (zoom in/out, move all nodes as a unit, move one, place / remove one, etc.)
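
          A rough, hypothetical sketch of that dispatch style (the gesture names and the recognizer entry point are made up for illustration):

          # One registration per gesture type instead of a large if/then block
          # inside touch_began() / touch_moved() / touch_ended()
          def func_a(gesture):   # swipe
              print('swipe', gesture)

          def func_b(gesture):   # pinch in/out
              print('pinch', gesture)

          def func_c(gesture):   # tap
              print('tap', gesture)

          handlers = {'swipe': func_a, 'pinch': func_b, 'tap': func_c}

          def on_gesture_recognized(kind, data):
              # called by the recognizer once an entire interaction is classified
              handler = handlers.get(kind)
              if handler:
                  handler(data)

          on_gesture_recognized('pinch', {'scale': 1.2})  # example dispatch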

          • mikael
            mikael @dcl last edited by

            @dcl: ”And the reason that your Gestures library is incompatible is that the objC calls that it is based on are dependent on a UI object to attach them to, of which there are natively none in a Scene application.”

            There is scene.view. I think you should be able to attach ObjC gestures to that, but I have not tried it.

            • mikael
              mikael last edited by mikael

              from scene import *
              import Gestures
              
              class MyScene (Scene):
                  def setup(self):
                      self.ship = SpriteNode('spc:PlayerShip1Orange')
                      self.ship.position = self.size / 2
                      self.add_child(self.ship)
                      Gestures.Gestures().add_pinch(self.view, self.handle_zoom)
              
                  def handle_zoom(self, data):
                      self.scale = data.scale
              
              run(MyScene())
              
              • dcl
                dcl last edited by

                @mikael, You’re great. Goes to show what I know... haha.

                I'm going to play with this some to see what all I can do with it. Knowing this earlier (and shame on me for missing the fact that Scene has a Scene.view that could be attached to) would have saved me a great deal of work on what I made to solve this issue, although writing that code also taught me a lot, so I guess that's a plus.

                I'll leave the code on GitHub if anyone is interested in looking at it. I feel like there are some good parts to it, and certainly some parts I am not happy with. Hopefully someone can find some use for it, and I may end up combining parts of it with Gestures to get a final result I am happy using in a Scene.

                —

                @mikael Notably, for whatever reason, Pythonista keeps giving me NameError: ObjCInstance is not defined when I run the scene script, but not when I run your demo code directly from Gestures. The code still runs; Pythonista just complains about the NameError the whole time. I'm going to look into this, but any advice is welcome.

                Thanks for your input and your help!

                • cvp
                  cvp @dcl last edited by

                  @dcl It's like your "from objc_util import *" was after your use of ObjCInstance...

                  • JonB
                    JonB last edited by

                    You might try instantiating the Gestures object in __init__ rather than setup. Setup happens in a strange thread land, though the view likely does not yet exist during __init__... It might be cleaner to use a SceneView, since that is what run() does under the hood -- it creates a SceneView and then presents it.
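
                    A rough, untested sketch of that SceneView route, assuming mikael's Gestures module is happy with any ui.View subclass (which SceneView is):

                    from scene import Scene, SceneView, SpriteNode
                    import Gestures

                    class MyScene(Scene):
                        def setup(self):
                            self.ship = SpriteNode('spc:PlayerShip1Orange')
                            self.ship.position = self.size / 2
                            self.add_child(self.ship)

                        def handle_zoom(self, data):
                            self.scale = data.scale

                    my_scene = MyScene()
                    scene_view = SceneView()
                    scene_view.scene = my_scene
                    # The view exists up front here, so the recognizer can be
                    # attached before the view is presented.
                    Gestures.Gestures().add_pinch(scene_view, my_scene.handle_zoom)
                    scene_view.present()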

                    • dcl
                      dcl last edited by dcl

                      @cvp the from objc_util import * is part of the Gestures.py file, which was obtained as-is from GitHub, and the import is at the top of the file, before any of the code. So unless the files are getting imported in some odd way as part of the Scene setup, I am not sure what might cause that behavior.

                      @JonB Thanks for the tip! I tried instantiating the gestures object in the __init__() function of the Scene rather than in setup(), and that seems to work. The key is that you MUST call Scene.__init__() inside MyScene.__init__(), or the script will throw exceptions about missing members.

                      The revised code is below:

                      from scene import *
                      import Gestures
                      
                      class MyScene (Scene):
                          def __init__(self, **kwargs):
                              Scene.__init__(self, **kwargs)
                              self.GO = Gestures.Gestures()
                      
                          def setup(self):
                              self.ship = SpriteNode('spc:PlayerShip1Orange')
                              self.ship.last_scale = self.ship.scale
                              self.ship.position = self.size / 2
                              self.add_child(self.ship)
                              self.GO.add_pinch(self.view, self.handle_zoom)
                      
                          def handle_zoom(self, data):
                              self.ship.scale = self.ship.last_scale * data.scale
                              # assumes the Gestures class exposes an ENDED state constant
                              if data.state == self.GO.ENDED:
                                  self.ship.last_scale = self.ship.scale
                      
                      run(MyScene())
                      
                      • dcl
                        dcl last edited by

                        Having now seen the light by testing the Gestures code with a Scene, I am convinced that this is the best way to implement what I am trying to do in my Scene applications / scripts. However, I am curious whether more advanced gesture recognition is possible.

                        I read here about subclassing the objective C UIGestureRecognizer class to implement more advanced or customized gesture recognition.

                        @mikael or @JonB, is it possible to subclass UIGestureRecognizer inside Pythonista using objc_util? I read the Pythonista documentation, and from my understanding it is possible to subclass Objective-C classes in Pythonista, but since the Apple help page specifically mentions having to include #import "UIGestureRecognizerSubclass.h" before doing the subclassing, I'm skeptical it can be done in Pythonista. And if it is possible, I am still unclear on how to properly subclass an Objective-C class with Python syntax.

                        • JonB
                          JonB last edited by

                          Yes, you can subclass using objc_util.

                          Basically, you create the method names as defs (take the ObjC selector and replace the colons with underscores), then use create_objc_class. You tell it the superclass, any protocols, and the list of methods. Sometimes you must specify the argtypes and restype, but when overriding existing methods or implementing a protocol, that is not needed, because objc_util can figure it out from the name and runtime introspection.

                          There seem to be a lot of methods you need to implement. The main ones, of course, are the equivalents of touch_began/moved/ended.

                          If you need gestures that play nicely with other recognizers like pan, pinch, etc., this might make sense. But I'm not sure there would be much of a performance boost compared to implementing your own with the scene touch methods. Read up on the basic state machine idea; it gives a good indication of how you could frame such a state machine in Python as well...
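
                          A very rough sketch of that state machine idea in plain Python (the thresholds and structure are illustrative only):

                          from enum import Enum, auto

                          class State(Enum):
                              # mirrors the UIGestureRecognizer states
                              POSSIBLE = auto()
                              BEGAN = auto()
                              CHANGED = auto()
                              ENDED = auto()
                              CANCELLED = auto()
                              FAILED = auto()

                          class PanLikeRecognizer:
                              MIN_DISTANCE = 10  # points of movement before the gesture 'begins'

                              def __init__(self):
                                  self.state = State.POSSIBLE
                                  self.start = None

                              def touch_began(self, location):
                                  self.start = location

                              def touch_moved(self, location):
                                  dx = location[0] - self.start[0]
                                  dy = location[1] - self.start[1]
                                  moved = (dx * dx + dy * dy) ** 0.5
                                  if self.state is State.POSSIBLE and moved >= self.MIN_DISTANCE:
                                      self.state = State.BEGAN
                                  elif self.state in (State.BEGAN, State.CHANGED):
                                      self.state = State.CHANGED

                              def touch_ended(self, location):
                                  if self.state in (State.BEGAN, State.CHANGED):
                                      self.state = State.ENDED    # recognized
                                  else:
                                      self.state = State.FAILED   # never moved far enough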

                          • JonB
                            JonB last edited by

                            from objc_util import *
                            import ui

                            # Override point for the ObjC selector touchesBegan:withEvent:
                            # (colons become underscores); the first argument is an NSSet of touches.
                            def touchesBegan_withEvent_(_self, _cmd, touches, event):
                                print('TOUCHES ********\n', ObjCInstance(touches))
                                print('EVENT ********\n', ObjCInstance(event))

                            # Create a UIGestureRecognizer subclass with the overridden method.
                            MyGestureRecognizer = create_objc_class(
                                'MyGestureRecognizer',
                                ObjCClass('UIGestureRecognizer'),
                                methods=[touchesBegan_withEvent_])

                            g = MyGestureRecognizer.new()

                            # Attach the recognizer to a plain ui.View and present it.
                            v = ui.View()
                            v.objc_instance.gestureRecognizers = ns([g])
                            v.touch_enabled = True
                            v.present('sheet')

                            Here is how you would get started... you then have to figure out which methods you want to implement, and implement those.
