Welcome!
This is the community forum for my apps Pythonista and Editorial.
For individual support questions, you can also send an email. If you have a very short question or just want to say hello — I'm @olemoritz on Twitter.
Capturing still images within my scene
-
Hi,
I'm trying to make a small application that captures an image, processes it through the Microsoft Azure Vision API, and then returns the recognized text.
I'm struggling to get my LiveCameraView to take a picture and save it.
My references:
AVFoundationPG - Media Capture
AVCapturePhotoOutput
Original code:

```python
from objc_util import *
import console
import ui

class LiveCameraView(ui.View):
    def __init__(self, device=0, *args, **kwargs):
        self.background_color = 'white'
        ui.View.__init__(self, *args, **kwargs)
        # Define a session, used to serve as a placeholder for the input and output + definition of quality
        self._session = ObjCClass('AVCaptureSession').alloc().init()
        # Set the quality of the capture
        self._session.setSessionPreset_('AVCaptureSessionPresetMedium')
        # Select an input device
        inputDevices = ObjCClass('AVCaptureDevice').devices()
        self._inputDevice = inputDevices[device]
        #self._inputDevice.unlockForConfiguration()
        #self._inputDevice.setFocusMode_(ns('AVCaptureFocusModeContinuousAutoFocus'))
        # Enable autofocus
        if self._inputDevice.isFocusModeSupported_(2):
            if self._inputDevice.lockForConfiguration_(None):
                self._inputDevice.focusMode = 2
                self._inputDevice.unlockForConfiguration()
        # Add the device to your session
        deviceInput = ObjCClass('AVCaptureDeviceInput').deviceInputWithDevice_error_(self._inputDevice, None)
        # Configure device output
        deviceOutput = ObjCClass('AVCapturePhotoOutput').alloc().init()
        photoSettings = ObjCClass('AVCapturePhotoSettings')
        # FROM HERE ON, I'M LOST
        deviceConnection = ObjCClass('AVCaptureConnection')
        if self._session.canAddInput_(deviceInput):
            self._session.addInput_(deviceInput)
        if self._session.canAddOutput_(deviceOutput):
            self._session.addOutput_(deviceOutput)
        self._previewLayer = ObjCClass('AVCaptureVideoPreviewLayer').alloc().initWithSession_(self._session)
        self._previewLayer.setVideoGravity_('AVLayerVideoGravityResizeAspectFill')
        rootLayer = ObjCInstance(self).layer()
        rootLayer.setMasksToBounds_(True)
        self._previewLayer.setFrame_(CGRect(CGPoint(-70, 0), CGSize(self.height, 100)))
        rootLayer.insertSublayer_atIndex_(self._previewLayer, 0)
        self._session.startRunning()
        b = ui.Button(title='Scan code')
        b.action = self.snap
        b.center = (self.width * 0.5, self.height * 0.5)
        b.flex = 'LRTB'
        self.add_subview(b)

    def snap(self, sender):
        console.alert('snap', 'snap')

    def will_close(self):
        self._session.stopRunning()

    def layout(self):
        if not self._session.isRunning():
            self._session.startRunning()

rootview = LiveCameraView(frame=(0, 0, 200, 500))
rootview.present('popover')
```
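One possible way to fill in the missing part is an `AVCapturePhotoCaptureDelegate` built with objc_util's `create_objc_class`. This is only a sketch and untested: the class name `CapturePhotoDelegate` and the `captured_data` list are my own inventions, and it assumes you keep the photo output as `self._output` in `__init__` (the original stores it in a local `deviceOutput`). It can only run on-device in Pythonista.

```python
from objc_util import ObjCClass, ObjCInstance, create_objc_class, nsdata_to_bytes

# Captured JPEG bytes are appended here (module-level for simplicity)
captured_data = []

def captureOutput_didFinishProcessingPhoto_error_(_self, _cmd, _output, photo, error):
    # Delegate callback invoked when the photo has been processed
    if error:
        return
    photo = ObjCInstance(photo)
    # fileDataRepresentation() returns an NSData with the encoded image (iOS 11+)
    data = photo.fileDataRepresentation()
    if data:
        captured_data.append(nsdata_to_bytes(data))

CapturePhotoDelegate = create_objc_class(
    'CapturePhotoDelegate',
    methods=[captureOutput_didFinishProcessingPhoto_error_],
    protocols=['AVCapturePhotoCaptureDelegate'])
```

`snap` would then trigger the capture, keeping a reference to the delegate so it isn't garbage-collected before the callback fires:

```python
def snap(self, sender):
    settings = ObjCClass('AVCapturePhotoSettings').photoSettings()
    self._delegate = CapturePhotoDelegate.alloc().init()  # keep a reference
    self._output.capturePhotoWithSettings_delegate_(settings, self._delegate)
```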
Can someone help me capture the image and process it so that I'm able to send it using `requests`?

Thank you in advance,
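For the upload step, something along these lines should work with `requests` once you have the JPEG bytes (a sketch: the endpoint URL and subscription key are placeholders you must replace with your own Azure resource values, and the helper name `build_ocr_request` is my own):

```python
import requests

# Placeholder -- replace with your Azure resource's endpoint
AZURE_ENDPOINT = 'https://YOUR_REGION.api.cognitive.microsoft.com'

def build_ocr_request(subscription_key, endpoint=AZURE_ENDPOINT):
    """Build the URL and headers for posting raw image bytes to the Azure OCR API."""
    url = endpoint + '/vision/v3.2/ocr'
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        # Raw image bytes go in the request body, so octet-stream is used
        'Content-Type': 'application/octet-stream',
    }
    return url, headers

def recognize_text(image_bytes, subscription_key):
    """POST the captured image and return the parsed JSON OCR result."""
    url, headers = build_ocr_request(subscription_key)
    resp = requests.post(url, headers=headers, data=image_bytes)
    resp.raise_for_status()
    return resp.json()
```

The recognized words would then be in the `regions`/`lines`/`words` structure of the returned JSON.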
Kind regards,
Gilles
-
@b0hr see @JonB's real original code, which is the first line of the script linked by your "original code".