Can you include a snippet of your code, and the way you attempt to view that path? Do you use `os.path.expanduser`, for example? Based on the image, I would say you're trying to access a path relative to a directory, but when running from the shortcut it might not know how to find it. It's safer to make the path relative to the user directory, then use `os.path.expanduser` to resolve it.
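To illustrate the suggestion above, here is a minimal sketch (the file name is made up for the example):

```python
import os

# Hypothetical file path for illustration; substitute your own.
relative_path = '~/Documents/data.txt'

# expanduser replaces the leading '~' with the absolute home directory,
# so the result no longer depends on the current working directory
# (which may differ when launched from a shortcut).
absolute_path = os.path.expanduser(relative_path)
print(absolute_path)
```

Because the result is an absolute path, it works the same regardless of where the script was launched from.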
It's still a WIP, and I can't guarantee the interface won't change; consider it alpha at this point. Segfaults with "logical" operations have been fixed (do not attempt to access a class from `ObjCClass`, because it will segfault; and on that same note, do not attempt to write with the `objc_util` module when tired), but any testing beyond that has still to be done. Documentation is incomplete too. Only `AVSpeechSynthesisVoice` is pretty complete at this point, but I'm considering a couple of method name changes in a more Pythonic style. Who knows... Either way, have fun in case you want to try it: https://gist.github.com/boisei0/08e6f9f619e8c045ba08dae100e63b17 Dump all those files in a folder named `av_speech` in your site-packages, then import it like the sample above.
I am currently working on a high-level Python interface to the AVSpeechUtterance/AVSpeechSynthesizer/AVSpeechSynthesisVoice triad. To do so, I'm creating Python classes that bridge access to the Objective-C classes, following the Apple documentation. The low-level classes just about directly bridge the Objective-C classes to Python, with a method on each of them to convert the Python instance back to an `ObjCInstance`. On top of that, a high-level interface will be created that works similarly to `speech.say`, but runs directly on `objc_util`. This means you can use the top-level interface without worrying about the Objective-C classes in the backend.
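The wrapping pattern described above can be sketched roughly as follows. This is not the author's actual code: `objc_util` only exists inside Pythonista, so a plain Python object stands in for the real `ObjCInstance` here, and the names `VoiceBridge` and `to_objc` are invented for illustration:

```python
class FakeObjCInstance:
    """Stand-in for objc_util.ObjCInstance in this illustration only."""
    def __init__(self, identifier):
        self.identifier = identifier


class VoiceBridge:
    """Low-level wrapper: holds the underlying objc object, exposes
    Pythonic attributes, and can hand the raw instance back."""
    def __init__(self, objc_instance):
        self._objc = objc_instance

    @property
    def identifier(self):
        # Delegate attribute access to the wrapped instance.
        return self._objc.identifier

    def to_objc(self):
        # Convert the Python wrapper back to the underlying instance,
        # mirroring the conversion method described for the low-level classes.
        return self._objc


voice = VoiceBridge(FakeObjCInstance('com.apple.ttsbundle.siri_male_en-AU_compact'))
print(voice.identifier)
```

The high-level layer would then only ever hand out and accept these wrappers, keeping the Objective-C objects out of sight.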
I'm currently halfway through the low level Python classes, hoping to release to PyPI (Python 3 only though) later this week.
@cvp thanks for the research you've already done on this; it's from that snippet that I've been able to figure the rest out :)
Edit: A quick prototype of the high-level interface works nicely; here's a sample of what the code above would look like (though it will get a couple of naming upgrades and more configuration options):
```python
import av_speech

speech = av_speech.AVSpeech()
voices = av_speech.AVSpeechSynthesisVoice.get_speech_voices()
for voice in voices:
    print(voice)
speech.set_voice(av_speech.AVSpeechSynthesisVoice.voice_with_identifier('com.apple.ttsbundle.siri_male_en-AU_compact'))
speech.say("Hello, I'm the voice of the Australian male of Siri")
```
An alternative for the `set_voice` line is `speech.set_voice(av_speech.AVSpeechSynthesisVoice.voice_with_language('en-AU', av_speech.AVSpeechSynthesisVoiceGender.MALE))`, so it is easy to configure.