How do I make a full-screen button and handle button-down and button-up events?
I want to be able to enter data in a Morse-code-like language while my phone is in my pocket by pushing anywhere on the screen.
I've examined the UI examples, but I am confused as to how to separate button-down events and button-up events into separate handlers. Also, since this will be a full-screen button, I might not need to use a "button" at all.
How could this be done?
@technoway, subclass ui.View and implement the touch_began and touch_ended methods. These methods get a ui.Touch object; there's more information there if you need finer control.
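A minimal sketch of that advice. The ui.View wiring is shown in comments because it only runs inside Pythonista; KeyTimer and its method names are my own illustration, not an actual library API:

```python
import time

class KeyTimer:
    """Plain-Python timing logic: record press/release timestamps so the
    "down" (dot/dash) and "up" (space) durations can be decoded later."""
    def __init__(self):
        self.events = []  # list of ('down'|'up', timestamp) pairs

    def down(self, t=None):
        self.events.append(('down', time.perf_counter() if t is None else t))

    def up(self, t=None):
        self.events.append(('up', time.perf_counter() if t is None else t))

    def durations(self):
        """Return (down_intervals, up_intervals) in seconds."""
        downs, ups = [], []
        for (s0, t0), (_s1, t1) in zip(self.events, self.events[1:]):
            (downs if s0 == 'down' else ups).append(t1 - t0)
        return downs, ups

# Pythonista wiring (untested sketch):
# import ui
# class FullScreenKey(ui.View):
#     def __init__(self):
#         self.timer = KeyTimer()
#     def touch_began(self, touch):   # button-down event
#         self.timer.down()
#     def touch_ended(self, touch):   # button-up event
#         self.timer.up()
# FullScreenKey().present(hide_title_bar=True)
```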
Thank you mikael and ccc. I am an experienced Python programmer and I've written a lot of Python code that uses Tk, so both your posts helped me immensely.
ccc, I do appreciate the code, that saved me a lot of time looking up details.
I really appreciated it! This is a great forum.
By the way, I used time.perf_counter() to get high resolution times, so I could enter the Morse code at about 20 words-per-minute.
I store "up" and "down" intervals in parallel lists, and process them once there is at least one interval in the "up" list and the update handler has been in the "down" state for at least 2 seconds, so that when I stop entering Morse code, the message is processed roughly 2 seconds later.
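For concreteness, the rule described above might be sketched like this (the 0.2 s dot/dash threshold and all function names are assumptions for illustration, not the poster's actual code):

```python
import time

DOT_DASH_THRESHOLD = 0.2   # assumed: presses shorter than this are dots
IDLE_FLUSH_SECONDS = 2.0   # process the message after 2 s of inactivity

def classify_presses(down_intervals, threshold=DOT_DASH_THRESHOLD):
    """Turn a list of key-down durations into a dot/dash string."""
    return ''.join('.' if d < threshold else '-' for d in down_intervals)

def ready_to_process(up_intervals, last_event_time, now=None):
    """Mirror the rule in the post: at least one "up" interval recorded,
    and no activity for IDLE_FLUSH_SECONDS."""
    if now is None:
        now = time.perf_counter()
    return bool(up_intervals) and (now - last_event_time) >= IDLE_FLUSH_SECONDS
```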
Thanks again for saving me so much time.
Cool!! And you saw https://forum.omz-software.com/topic/1554/accessing-the-led-flashlight ?
May be a bit easier with scene.
import scene

class MyScene(scene.Scene):
    def setup(self):
        self.name = scene.LabelNode('', position=self.size/2,
                                    font=('courier', 60), parent=self)

    def touch_began(self, touch):
        self.tap_time = self.t

    def touch_ended(self, touch):
        self.name.text += '.' if (self.t - self.tap_time) < .2 else '-'

scene.run(MyScene())
import scene

class MyScene(scene.Scene):
    def setup(self):
        self.name = scene.SpriteNode(color=(1, 1, 1), position=self.size/2,
                                     size=self.size, parent=self)
        self.name.alpha = 0
        A = scene.Action
        self.dot_action = A.sequence(A.fade_to(1, .2), A.fade_to(0, .2))
        self.dash_action = A.sequence(A.fade_to(1, .5), A.fade_to(0, .5))

    def touch_began(self, touch):
        self.tap_time = self.t

    def touch_ended(self, touch):
        if (self.t - self.tap_time) < .2:
            self.name.run_action(self.dot_action)
        else:
            self.name.run_action(self.dash_action)

scene.run(MyScene())
@ccc - Yes, I saw that post about controlling the light. I saw the part about reading Morse Code.
My goal was different: it's to covertly enter information. I really only need to enter two characters at a time for my application. And the output is done using text-to-speech.
The output is not what I entered - my input just controls what the spoken text is.
While I have successfully entered a sentence, the current decoder is very touchy. I build a histogram of "up" times and one of "down" times. The "up" histogram is used to determine the threshold between dots and dashes. That works very well. The "down" histogram has to determine thresholds for the regular space between symbols, the space between letters, and the space that separates words. That doesn't work well. The problem is that people drift in their sending speed, so these thresholds really need to be dynamically updated, and my program doesn't do that. I don't think the code would help the guy in the "light" topic, particularly for his application. It messes up word spaces right now. Since I'm just entering two letters, that's not an issue.
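For what it's worth, one common way to pick such a threshold automatically is to split the sorted durations at their largest gap; a sketch (my own simplification, not the poster's actual histogram code):

```python
def split_threshold(intervals):
    """Estimate a threshold separating two duration clusters (e.g. dots
    vs dashes) by placing it in the middle of the largest gap between
    sorted values. Returns None if there are fewer than two samples."""
    if len(intervals) < 2:
        return None
    xs = sorted(intervals)
    gaps = [(b - a, a, b) for a, b in zip(xs, xs[1:])]
    _gap, lo, hi = max(gaps)       # widest gap between adjacent samples
    return (lo + hi) / 2.0
```

This works well for the two-cluster dot/dash case; the three-cluster space case (symbol/letter/word) is exactly where it starts to fail when the clusters drift together, as described above.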
I'd really like to be able to use a hidden switch that plugs into the iPhone USB port. That could be made smaller than the screen. I'm looking at:
However, the application I have now using a full-screen button is sufficient for my needs.
I've hidden the title (status?) bar; I found out how to do that in another post here. I saw something about adding an image to a button (or "view" in this case) in another post, but I'm having difficulty finding it now. If I can add a full-screen image, I'll do that. That would just be icing on the cake.
I just looked at the "Scene" code above, but I seem to recall just being able to add an image to a regular view.
Eventually, I'll post some code here, although probably not this application.
Pythonista is awesome.
@technoway, my 2 cents worth. Sorry, it's not technical help, but it seems to me you are going to great lengths to determine a dash/dot. It would seem to me that if you had a self-rotating view with 2 buttons that each consumed 50% of the screen, it would be quite easy to learn how to touch each button with a high degree of reliability (100%) for a dot and a dash, regardless of how the phone was positioned in your pocket. I can't remember exactly, but I think using objc you can also get tactile feedback (depending on the phone model). So on a long press, for example, it could perhaps give a vibration to confirm the orientation you are in.
I also could imagine that this approach would, over time, allow for faster input, as you are not concerned about delays and could use two-finger input.
Don't mean to waste your time, just sometimes we can try to get too technical. Muscle memory is pretty amazing.
I considered two buttons early on, however, that's not necessary. The issue isn't separating dots from dashes. That part works very well - in fact, I don't recall the decoder making an error between a dot and a dash in a very long time.
The code even handles when there are only dots, or only dashes now, such as:
. . . .
which is "se" and:
- - - -
which is "ot".
The issue is the spaces between symbols, letters, and words. These are all multiples of the dot length. If sending a lot of text, and the sending speed changes over time, then the symbol space, the letter space, and the word space can become ambiguous across the entire message. (Usually, the symbol space is fine, but the letter space and word space can drift close together.)
There really needs to be a mix of short-term and long-term tracking of space lengths, so if the sending speed drifts, the code adapts in real time.
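One simple form of the short-term tracking described above is an exponential moving average of the dot length, so the space thresholds ride along with the sender's speed. This is a sketch under assumed standard Morse timing (symbol, letter, and word spaces of 1, 3, and 7 dot units), not the poster's implementation:

```python
class AdaptiveSpacing:
    """Track the current dot length with an exponential moving average
    and derive space thresholds from it. Standard Morse timing makes
    the symbol, letter, and word spaces 1, 3, and 7 dot units long."""
    def __init__(self, initial_dot=0.1, alpha=0.2):
        self.dot = initial_dot   # current dot-length estimate (seconds)
        self.alpha = alpha       # EMA weight: higher adapts faster

    def observe_dot(self, duration):
        """Fold a newly classified dot duration into the estimate."""
        self.dot = (1 - self.alpha) * self.dot + self.alpha * duration

    def classify_space(self, duration):
        """Classify an "up" interval using thresholds midway between
        the nominal 1-, 3-, and 7-dot space lengths."""
        if duration < 2 * self.dot:
            return 'symbol'
        elif duration < 5 * self.dot:
            return 'letter'
        return 'word'
```

A long-term estimate could be kept alongside it (a second EMA with a smaller alpha) and the two blended, so a momentary hesitation doesn't drag the thresholds around.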
The downside of the poor space tracking is that sentences can end up as:
"T h e dow nsi deof thep o o r sp ace t ra ckin g th at s en t en ces ca n endup as:"
It's not usually that bad, though; the typical case is that a few extra spaces, due to word separations, are thrown into the middle of the text. Usually all the letters are correct, because the symbol space is the same as a dot, and that's usually about a third of the letter spacing. When sending very fast, even the symbol space can get messed up, which results in combining letters, sometimes changing a letter, and sometimes producing an invalid character. Invalid characters are currently silently discarded. That could be improved too, by having the code try to parse the combined invalid symbols into one or more valid characters. (I probably will never do this, though, as that's compute-intensive, requires a dictionary, and could even require natural-language analysis.)
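The re-parsing idea in that last parenthetical can at least be started without a dictionary: a merged, invalid symbol run can be split into candidate letter sequences with a small recursive search (a sketch; the partial MORSE table and the name split_run are illustrative):

```python
# Partial Morse table, enough to demonstrate the idea.
MORSE = {'.': 'e', '-': 't', '..': 'i', '.-': 'a', '-.': 'n', '--': 'm',
         '...': 's', '---': 'o', '.-.': 'r', '-..': 'd'}

def split_run(symbols):
    """Return every way to split a dot/dash run into valid letters,
    e.g. a run produced by two letters merging because their separating
    space was misread as a symbol space."""
    if not symbols:
        return ['']
    results = []
    for i in range(1, len(symbols) + 1):
        head = symbols[:i]
        if head in MORSE:
            for rest in split_run(symbols[i:]):
                results.append(MORSE[head] + rest)
    return results
```

Ranking the candidates (the dictionary / language-model part) is where it gets compute-intensive, as noted above.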
The space threshold problem can be alleviated by exaggerating the separation between letters and words when sending the text, but this sounds unnatural to someone familiar with Morse code. It also feels very unnatural to change the proper flow of Morse code. Imagine reciting letters, and doing really long pauses between letters and really, really long pauses between spelled words.
Also, if I work very hard at having a consistent sending speed, the program does much better. But human beings typically allow their speed to vary, just as they do when speaking, particularly if there are distractions, so that's not a good long-term solution.
The human brain has an easy time doing real-time tracking of the spaces between symbols, letters, and words in Morse code, and adjusting based on sending speed and content.
I think if I work on it long enough, I can get the space tracking to work much better. I have a number of ideas that I think will improve it.
Also, right now, I enter Morse code data on the screen, stop entering data, and then wait two seconds for the system to recognize I've stopped and process all the data. A better system would do real-time decoding as the data flows in, with some buffering to allow estimation of thresholds in real time.
Another completely different idea I want to try is to draw individual characters on the screen and have them recognized, i.e., crude OCR of large hand-drawn characters. That would be a fun and challenging project. I saw the Sketch.py sample program. That's a good starting point for the input part of that program. For now, though, I want to improve the Morse code decoder.
@technoway, OK, interesting reading. Unfortunately, I am still no help. It did get me thinking, though. E.g., is Morse code only for translation to English? It's hard to imagine you could do a Thai translation using Morse code. They rarely use spaces at all. You have to understand the language rules to be able to discern words in a stream of Thai text. LOL, it's not easy.
Anyway, these are things I will look up myself. Your answer just prompted me to think of these questions.
@technoway, I was listening to this podcast yesterday and thought about this thread. I am not sure it can be helpful, but I have a feeling it could spark some ideas for you. The title of the podcast is “parsing-horrible-things-with-python”. I feel there is an answer lurking somewhere in this podcast for you.
Thanks. I do already know the proper way to solve this problem, though. My undergraduate degree is in electrical engineering, and my graduate work is in a field called digital signal processing. I did not implement a more complicated streaming solution because I don't want to load the iPhone down; this is just to control a program that does something else, and currently I only need to enter two sequential characters for all control options, although I have decoded sentences with my current Morse code decoder.
If I were to implement a full streaming solution, I'd create a separate processing thread to decode the Morse code (or the up and down button events with times). The UI would write the up and down events to a thread-safe queue, and the decoder thread would read from that queue. The decoder would write characters to a thread-safe output queue, which would be read by the UI thread.
This would allow decoding while doing other processing. Perhaps I'll implement that someday.
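The queue-based architecture described above might look roughly like this, using queue.Queue for the thread-safe queues (the event format and the decode callback are placeholders; a real decoder would apply the timing logic discussed earlier):

```python
import queue
import threading

def decoder_worker(events_in, chars_out, decode):
    """Decoder thread: read ('down'|'up', timestamp) events from a
    thread-safe input queue, feed them to a decode function, and write
    any decoded characters to the output queue. None shuts it down."""
    pending = []
    while True:
        event = events_in.get()
        if event is None:
            break
        pending.append(event)
        # decode() returns (decoded_characters, events_still_pending)
        chars, pending = decode(pending)
        for ch in chars:
            chars_out.put(ch)

# The UI thread would put ('down', t) from touch_began and ('up', t)
# from touch_ended, and poll chars_out.get_nowait() in its update handler.
```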
Currently there is no point in doing a dynamic adaptation to sending speed variations when I am only decoding very short blocks - currently only two characters.
I did find that podcast interesting and entertaining though. Also, I had purchased the "Natural Language" Python book he mentioned over a year ago. It's a great book. The best part of that book is that it provides sources to various online resources, including a word corpus.