Advice - Process audio as it comes in from mic
I'm just looking for some general direction here: is it possible to process incoming audio (say, for speech-to-text, pitch recognition, or any other kind of audio analysis) as the data comes in?
My first thought is to look at AVFoundation; looking around, there are many Objective-C examples to run with as far as capturing audio and detecting silence go (both of which are needed for the examples above, and more).
Just wondering if there is a better approach or if I am just missing some built-in functionality that omz has already provided.
In the end, I want to enable listening, process the audio through a callback or something similar, and auto-stop after some number of seconds of silence to prevent battery drain and/or resource over-utilization.
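For what it's worth, the silence-based auto-stop doesn't need anything iOS-specific: once you have sample buffers from the mic (however you end up capturing them), you can track the RMS level of each chunk and stop after enough silent chunks in a row. A minimal sketch in plain Python/numpy; `SilenceStopper` and its threshold values are made-up names for illustration, not anything built into Pythonista or AVFoundation:

```python
import numpy as np

class SilenceStopper:
    """Decides when to stop listening, based on consecutive silent chunks.

    Hypothetical helper: feed it fixed-size chunks of float samples as they
    arrive from your capture callback; it returns True once the last
    `max_silence` seconds were all below the RMS threshold.
    """

    def __init__(self, threshold=0.01, chunk_duration=0.1, max_silence=3.0):
        self.threshold = threshold  # RMS level below which a chunk counts as silence
        self.chunks_needed = int(round(max_silence / chunk_duration))
        self.silent_chunks = 0

    def feed(self, samples):
        """Feed one chunk of samples; return True when it is time to stop."""
        rms = float(np.sqrt(np.mean(np.square(samples))))
        if rms < self.threshold:
            self.silent_chunks += 1
        else:
            self.silent_chunks = 0  # any sound resets the silence counter
        return self.silent_chunks >= self.chunks_needed
```

You would call `feed()` once per captured buffer and tear down the recorder when it returns True. The threshold is the fiddly part in practice; background noise means "silence" is never exactly zero, so you may want to calibrate it from the first second or two of audio.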
I imagine the speech-to-text would be quite difficult to implement, assuming I am capturing audio without a finite end. TBH, that is just an example and a far-fetched idea of voice commands in an app, not my first goal.
I'm off to give it a shot; I'll post whatever I have, working or not, once I get somewhere.
Any advice would be grand; I've already found some great examples in this forum of most of what I need to get started.
A while back, I mocked up a little audio visualizer using the scene module.
Basically it sets up multiple recorders so that you can get gap-free processing. Set dofft=True to see some actual processing.
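For the curious, the kind of per-buffer FFT processing a dofft flag suggests boils down to taking a magnitude spectrum of each buffer. Here's a toy sketch in plain numpy (not the visualizer's actual code) that picks out the dominant frequency of a buffer, which is a crude starting point for pitch detection; `dominant_frequency` is a name I made up for this example:

```python
import numpy as np

def dominant_frequency(samples, sample_rate):
    """Estimate the strongest frequency in a buffer via a magnitude FFT.

    A toy estimator: fine for clean tones, too naive for real voices
    (no windowing, no harmonic handling).
    """
    spectrum = np.abs(np.fft.rfft(samples))
    spectrum[0] = 0.0  # ignore the DC component
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[int(np.argmax(spectrum))]

# Sanity check with one second of a synthetic 440 Hz sine tone:
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
print(round(dominant_frequency(tone, sr)))  # 440
```

Real pitch detection on voice usually wants a window function and something like autocorrelation or harmonic-product-spectrum on top, but argmax-of-FFT is enough to see that the buffers you're capturing contain what you expect.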
The real way would be to use Audio Units... I think I played around with those and could never get them to work, but that was also probably 2+ years ago.
There are some Google voice APIs.
Thank you, I will look at that.
TBH I'm trying to avoid APIs, because for many years I've used helpers/hacks/APIs for everything and I am no better off.
This isn't to release on the app store, it's nothing more than a challenge to hopefully share code.
What you have there is a great start; maybe I will read up on Audio Units during my work commutes, yay.
Btw, if you are on iOS 10 or later, you get access to speech recognition APIs directly on the phone.
Out of curiosity, are you thinking mostly speech recognition? Or some other sort of sound based triggering?
If you do get something working, do post back... I have been wanting to tackle Audio Units, AVAudioEngine nodes, or audio queues for a while...
Speech recognition would be for an in-app mini-Siri concept.
I am on iOS 11, so that's a great tip, thank you.
If anything, I am going to start implementing a class to let others have access to whatever functionality I get going; as I said, I am weak in Objective-C, so this is a really good excuse to learn more.