Posts made by AlanE
-
RE: Take photo without 'Use' step, join ObjC with Photos module
@cvp Easier in hindsight, yes. But there will always be people faster than me, stronger than me, and in your case clearer-thinking than me, if not also faster and stronger. I now recall disliking the need for global in another program wot I wrote. In this case I think I was also frightened by the complexity of the ObjC.
-
RE: Take photo without 'Use' step, join ObjC with Photos module
@cvp Oh yes! that edit with the return line works. Completely over the top of my head but thankfully there are people who can. Thank you again.
-
RE: Take photo without 'Use' step, join ObjC with Photos module
@cvp Oh crikey. I fear I am lost. Does that mean that ui_image cannot come out of the ObjC for use elsewhere?
-
RE: Take photo without 'Use' step, join ObjC with Photos module
Thank you, but I am afraid I need my hand held some more. The block of four lines, which I presume replaces the block of four lines in the handler function, does indeed show the image. But the same, apparently identical, show() repeated outside of the ObjC def fails. I have put comments at the end of the troublesome line.
from objc_util import *
import time
import threading
import ui
import photos

C = ObjCClass

def take_photo_now(filename='photo.jpg'):
    session = C('AVCaptureSession').new().autorelease()
    session.sessionPreset = 'AVCaptureSessionPresetPhoto'
    device = C('AVCaptureDevice').defaultDeviceWithMediaType_('vide')
    device_input = C('AVCaptureDeviceInput').deviceInputWithDevice_error_(device, None)
    session.addInput_(device_input)
    image_output = C('AVCaptureStillImageOutput').new().autorelease()
    session.addOutput_(image_output)
    session.startRunning()
    # NOTE: You may need to adjust this to wait for the camera to be ready
    # (use a higher number if you see black photos):
    time.sleep(0.1)
    def handler_func(_block, _buffer, _err):
        buffer = ObjCInstance(_buffer)
        img_data = C('AVCaptureStillImageOutput').jpegStillImageNSDataRepresentation_(buffer)
        ui_image = ui.Image.from_data(nsdata_to_bytes(img_data))
        ui_image.show()  # this show() works
        #img_data.writeToFile_atomically_(filename, True)
        e.set()
    video_connection = None
    for connection in image_output.connections():
        for port in connection.inputPorts():
            if str(port.mediaType()) == 'vide':
                video_connection = connection
                break
        if video_connection:
            break
    e = threading.Event()
    handler = ObjCBlock(handler_func, restype=None, argtypes=[c_void_p, c_void_p, c_void_p])
    retain_global(handler)
    image_output.captureStillImageAsynchronouslyFromConnection_completionHandler_(video_connection, handler)
    e.wait()

# my code calls the take_photo_now def above then wishes to go on to ImageDraw on top of it
take_photo_now('photo.jpg')
ui_image.show()  # this apparently identical show() outside of the subroutine does not work:
                 # "name 'ui_image' is not defined", or if I declare "ui_image = None" at the
                 # top, it errors with "'NoneType' object has no attribute 'show'". The show()
                 # in the ObjC continues to work in both variations.
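The name ui_image is local to handler_func, so it vanishes as soon as the handler returns; the fix that is usually applied here is to have the callback write into a container the enclosing scope owns (or return the value from take_photo_now). A minimal sketch of that pattern, with a plain thread standing in for the camera callback, since objc_util and ui only exist inside Pythonista:

```python
import threading

# A mutable container owned by the enclosing scope; the callback writes
# into it instead of binding a local name that dies with the handler.
result = {}
done = threading.Event()

def handler_func():
    # Stand-in for ui.Image.from_data(nsdata_to_bytes(img_data))
    # inside the real capture handler.
    result['image'] = 'captured-image'
    done.set()

threading.Thread(target=handler_func).start()
done.wait()

# The captured value is now visible outside the handler.
print(result['image'])
```

In the real script, take_photo_now could then end with `return result['image']` so the caller receives the ui.Image directly instead of reaching for a global.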
-
Take photo without 'Use' step, join ObjC with Photos module
I seek help on merging an ObjC bridge program, which Ole offered some time ago, with the standard photos.capture_image.
These three lines of my code work as I want, but iOS insists on me pressing the camera button and then touching 'Use Photo'. I can then work with the output it gives.
import photos

theImage = photos.capture_image()
theImage.show()
So I found Ole's excellent Objective-C bridge program, which takes a photo when told to by the program, quoted below:
from objc_util import *
import time
import threading

C = ObjCClass

def take_photo_now(filename='photo.jpg'):
    session = C('AVCaptureSession').new().autorelease()
    session.sessionPreset = 'AVCaptureSessionPresetPhoto'
    device = C('AVCaptureDevice').defaultDeviceWithMediaType_('vide')
    device_input = C('AVCaptureDeviceInput').deviceInputWithDevice_error_(device, None)
    session.addInput_(device_input)
    image_output = C('AVCaptureStillImageOutput').new().autorelease()
    session.addOutput_(image_output)
    session.startRunning()
    # NOTE: You may need to adjust this to wait for the camera to be ready
    # (use a higher number if you see black photos):
    time.sleep(0.1)
    def handler_func(_block, _buffer, _err):
        buffer = ObjCInstance(_buffer)
        img_data = C('AVCaptureStillImageOutput').jpegStillImageNSDataRepresentation_(buffer)
        img_data.writeToFile_atomically_(filename, True)
        e.set()
    video_connection = None
    for connection in image_output.connections():
        for port in connection.inputPorts():
            if str(port.mediaType()) == 'vide':
                video_connection = connection
                break
        if video_connection:
            break
    e = threading.Event()
    handler = ObjCBlock(handler_func, restype=None, argtypes=[c_void_p, c_void_p, c_void_p])
    retain_global(handler)
    image_output.captureStillImageAsynchronouslyFromConnection_completionHandler_(video_connection, handler)
    e.wait()

take_photo_now('photo.jpg')

import console
console.quicklook('photo.jpg')
But, hopelessly, I cannot work out how to change the ObjC code to return a reference to the photo that my code can handle. That is to say, I don't know how to get 'photo.jpg' into my theImage variable so I can show() it and use it with the Image and ImageDraw modules. I want this to happen quickly, so I guess the data needs to stay in a buffer rather than be delayed by saving to the camera roll.
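One route, assuming the handler hands back the raw JPEG bytes (in Pythonista, nsdata_to_bytes(img_data)), is to decode them in memory with PIL rather than round-tripping through a file. A sketch, with a tiny generated JPEG standing in for the camera data so the snippet is self-contained:

```python
import io
from PIL import Image, ImageDraw

# Stand-in for the JPEG bytes produced by the capture handler
# (in Pythonista these would come from nsdata_to_bytes(img_data)).
buf = io.BytesIO()
Image.new('RGB', (320, 240), 'white').save(buf, format='JPEG')
jpeg_bytes = buf.getvalue()

# Decode straight from memory, no file on disk, then annotate
# with ImageDraw as the follow-on code intends.
the_image = Image.open(io.BytesIO(jpeg_bytes))
draw = ImageDraw.Draw(the_image)
draw.rectangle([10, 10, 100, 60], outline='black')
```

Because nothing is written to the camera roll, the capture-to-draw path stays in memory and avoids the save/reload delay.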
Many thanks.
-
Very happy. On the App Store.
Not a question but praise for the Xcode template for Pythonista 3. I had my (quite specialised) app accepted by Apple without much difficulty. A learning curve with satisfying outcome.
Great work from OMZ.
-
RE: A Working Directory in Xcode
I solved it. Create a folder in the same location as the script. Write the files into that. Works in Pythonista and more importantly in apps (USB installed) as a test from Xcode.
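That arrangement can be sketched as below (the folder and file names are just illustrative); building the path from the script's own location is what makes it resolve the same way in Pythonista and in an Xcode build:

```python
import os

# Resolve a data folder next to the script itself; fall back to the
# current directory when __file__ is unavailable (e.g. interactive use).
base = os.path.dirname(os.path.abspath(__file__)) if '__file__' in globals() else os.getcwd()
data_dir = os.path.join(base, 'data')
os.makedirs(data_dir, exist_ok=True)

# Write a text file into the folder, then read it back.
path = os.path.join(data_dir, 'notes.txt')
with open(path, 'w') as f:
    f.write('persisted')

with open(path) as f:
    restored = f.read()
```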
-
A Working Directory in Xcode
Firstly. Pythonista: what a fantastic app!
And now the new Xcode template for version 3 is superb too... but I am struggling to write persistent text files from my Pythonista3 code when in Xcode simulator or when I test it after building it onto an attached (iPad) device.
In PY3 the code works as I would expect - writing .txt files to the same location as the code, which persist for reading back after PY3 is closed and reopened. But from Xcode the files read OK yet apparently do not write/update (no error is reported). When the Xcode test app on the iPad is reopened, the text file does not reflect the earlier .write() changes either.
I am wondering if it is to do with having no working directory option in Xcode:Edit Scheme:Options, which I see in other Xcode examples.
Any help much appreciated.
Alan
-
RE: Help with moving button to subview
Splendid. Yes indeed, you understood enough of what I sought to be able to demonstrate that add_subview was the answer. I thought that was reserved for initialisation only. Great, thanks for taking the trouble to code that for me.
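For anyone landing here later, the reparenting move can be sketched as below; ui only exists inside Pythonista, so minimal stand-in classes model the superview relationship here, but the method names match Pythonista's ui API (remove_subview / add_subview):

```python
class View:
    """Minimal stand-in for ui.View, just enough to show reparenting."""
    def __init__(self, name):
        self.name = name
        self.subviews = []
        self.superview = None
        self.frame = (0, 0, 0, 0)

    def add_subview(self, v):
        self.subviews.append(v)
        v.superview = self

    def remove_subview(self, v):
        self.subviews.remove(v)
        v.superview = None

main_view = View('main')
scroll_view = View('scroll')
button = View('button')
main_view.add_subview(button)

# Move the button: detach it from its current superview, attach it to the
# scroll view, then place it at the target x, y inside the scroll view.
main_view.remove_subview(button)
scroll_view.add_subview(button)
button.frame = (20, 40, 80, 32)
```

Once the button is a subview of the scroll view, it scrolls with the other buttons automatically, which is exactly the behaviour asked for in the thread.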
-
RE: Help with moving button to subview
I should clarify. The view button is to become a button in the scroll view so that it scrolls with them.
Thanks
-
Help with moving button to subview
Might I ask for help please in moving a button to a scroll view?
On the iPhone I have a standard View which contains buttons, and a scroll view also containing buttons.
I wish to touch the button on the view, then touch the button in the scroll so that the first button moves to the x,y of the button in the scroll area.
I am not using a .pyui file - it is all done programmatically, which I can achieve until the scroll view is introduced.
I can remove the button using view.remove_subview(buttonVariable) but cannot move it.
Help appreciated