Playing dynamically created wave audio files.
I am running the Pythonista 3 beta.
I am using the "wave" module to dynamically create a wave audio file. The file has the ".wav" extension.
The code creates a wave file, plays it, and then deletes the file. The delete code loops until the file is deleted to prevent any issues if the file is locked while playing.
If I play the audio with the "play_effect" method shown below, the first wave file plays completely and is then deleted. However, when the code then generates a different audio file with the same name and "play_effect" is called again, I hear the first audio play again instead of the new audio.
from sound import play_effect
play_effect(audio_file_name)
All subsequent calls to the "play_effect" method play the same audio. The only way I can change the sound when playing a new wave file with the same name is to exit Pythonista and run it again.
Using the code below to try to play the file, I get silence.
import sound
player = sound.Player(audio_file_name)
player.play()
Is there a way to dynamically generate audio and play it and reuse the same audio file name and get different audio? I could keep changing the audio file name, but if the sounds are being cached somewhere, that would soon fill up memory.
sound.Player does not cache effects. See an example of real time audio generated in numpy + wave.
Thank you for the response. Those examples are similar to what I wrote. I generate audio data into a bytearray, I call a similar method to create a wave file, and then I play the data.
Also, I left out a potentially important detail in my last message.
My iPhone 6s is running iOS 11.0.2. (I did mention I was running the Pythonista 3 beta.)
The issue is that the following code doesn't work; it produces silence, and it is the only way I know of to play wave files without caching the data.
If I switch to "play_effect", I hear the audio that was generated, but play_effect caches the data.
import sound
player = sound.Player(audio_file_name)
player.play()
The audio data that is generated changes over time depending on other input, so I need to be able to play the audio without caching.
The theramin example is pretty cool. I might download that and play with it.
Did you try theraminsim? (Note: I just pushed a change that fixed a crash.) There are some tricks to getting a wav file to play with Player:
A few issues you might notice:
- IIRC, compressed wav is not supported, and I think only certain sample frequencies are supported.
wf = wave.open(file, 'w')
wf.setparams((1, 1, self.fs, 0, 'NONE', 'not compressed'))
wf.writeframes(tone)
wf.close()
- Be sure to close the file, and don't write to it while Player is playing. Writing to the file cancels playback, IIRC. Hence the ping-pong file scheme in theraminsim: while file 1 is playing, file 2 gets written to, then it swaps.
I probably should try porting theraminsim over to scene -- since scene supports very precise timing, it might be possible to have a more continuous tone.
Thank you for the help.
The theraminsim program plays audio on my device.
I am closing the wave file before playing it, and not writing to the file while it plays. I got the first wave file to play using a sound.Player instance; however, I still have some bugs. At least I got the Player.play method to play.
I did not create a separate sound thread; that might be the issue. I expect the "play_effect" method blocks until the sound has played, and Player.play does not. (I'll know shortly.) I hope I am all set from here on, and that this is not a beta issue and/or an iOS 11.0.2 issue.
If I still have any issues playing audio, I'll post back, and try to create a simple program to show the issues. My current program has multiple modules and even the main program is much larger than the theramin program. Most of the code is not for playing audio.
Update: I got it to work. I think one code path was prematurely deleting the wave file while playing it. (I guess that's kind of like writing to it!).
Thanks for the help.
By the way, I'm using a sample rate of 11,025 Hz, and 16-bit (little-endian) samples. I saw the theraminsim program used 8-bit samples, and I almost rewrote my audio generator because of that. I'm glad I found the real bug first!
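For reference, the two sample formats differ in packing as well as width: WAV 8-bit samples are unsigned and centered at 128, while 16-bit samples are signed little-endian ('<h' in struct notation). A quick sketch of the difference; function names and the amplitude values are illustrative, not from either program:

```python
import math
import struct

FS = 11025  # sample rate mentioned above

def samples_16bit(freq, n, amp=0.5):
    # Signed 16-bit little-endian samples ('<h' = little-endian short).
    peak = int(amp * 32767)
    return b''.join(
        struct.pack('<h', int(peak * math.sin(2 * math.pi * freq * i / FS)))
        for i in range(n))

def samples_8bit(freq, n, amp=0.5):
    # Unsigned 8-bit samples, centered at 128 (the 8-bit WAV zero point).
    peak = int(amp * 127)
    return bytes(
        128 + int(peak * math.sin(2 * math.pi * freq * i / FS))
        for i in range(n))
```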
Thank you again.
P.S. I found another post you made about the:
Thanks for that too!
Glad that worked out. If your program is shareable, I hope you consider sharing it via git or github. It is always useful to see what others are doing pushing the envelope of pythonista.
JonB - This code defines the MorseCodeSender class, which can be used to send Morse code.
This will also run as a standalone program, sending either the default text "Lorum Ipsum" or text passed on the command line.
This could be plugged into an amateur radio SSB transmitter and used to send Morse code over the air. Morse code has the advantage that it gets through with low transmit power when there is so much noise on the frequency that speech would be unintelligible. This is because it can be filtered to a very narrow bandwidth to minimize noise, and because our brains are good at picking out a tone in noise.
There are technical requirements on the energy that falls outside of the narrow transmit bandwidth, and 8-bit samples won't meet that requirement. With 16-bit samples, side-channel noise can theoretically be around 90 dB down, which far exceeds what's required legally.
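The roughly 90 dB figure is just the ratio between a full-scale signed 16-bit sample (2^15) and one quantization step, expressed in decibels. A quick check:

```python
import math

def dynamic_range_db(bits):
    # Ratio of the largest signed sample (2**(bits-1)) to one
    # quantization step, expressed in decibels.
    return 20 * math.log10(2 ** (bits - 1))

# 16-bit samples give about 90.3 dB; 8-bit samples only about 42.1 dB,
# which is why 8-bit audio can't meet a tight spurious-emission spec.
```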
A look at the "send" method and the private "run" method, both defined towards the bottom of the class definition, will make the high-level design clear.
I uploaded a newer version that cleans up some cosmetic issues. The earlier version also had some unnecessary code, but it did work.
I uploaded a new version of the code.
The "stop" method was a kludge before, and had some race conditions with the "run" method. While these never happened, they could happen if the timing was just right. Now the method is fully synchronized. All audio control, including stopping audio, is now handled in the "run" thread and the "stop" method became much simpler.
Update: 11/10/2017 - I uploaded again. I made numerous numerical optimizations to make the code more efficient. Most significantly, the rise-fall envelope samples are now cached in a list so they don't have to be recomputed for every tone pulse. I also polished some names, methods, and comments. The code is now a bit shorter.
Update: 11/11/2017 - I uploaded again. Now the tone generator uses raised cosine pulse-shaping.
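Raised-cosine shaping replaces hard keying edges with half-cosine ramps, which is what keeps energy from splattering outside the transmit bandwidth. A sketch of such an envelope; the edge length in samples is a free parameter here, not a value taken from the uploaded code:

```python
import math

def raised_cosine_envelope(n_total, n_edge):
    """Return an amplitude envelope (0.0-1.0) for one tone pulse:
    half-cosine rise, flat top, half-cosine fall."""
    env = []
    for i in range(n_total):
        if i < n_edge:                    # rising edge
            env.append(0.5 - 0.5 * math.cos(math.pi * i / n_edge))
        elif i >= n_total - n_edge:       # falling edge (mirror of rise)
            j = n_total - 1 - i
            env.append(0.5 - 0.5 * math.cos(math.pi * j / n_edge))
        else:                             # flat top
            env.append(1.0)
    return env
```

Multiplying each tone sample by the matching envelope value gives the shaped pulse; because the envelope depends only on the pulse and edge lengths, it is a natural candidate for the caching described in the previous update.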
Update: 11/15/2017 - I updated the code yet again yesterday. I realized that the dot and dash tone pulses could be cached for a specific sending speed and a specific tone frequency, and then just used over and over again, as opposed to synthesizing the tone pulses repeatedly. I also synthesized some fixed-interval silences, which depend only on the sending speed. This greatly reduces the loading on the processor, at the expense of using some memory.
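The caching idea above can be sketched like this: for a fixed speed and tone frequency, the dot pulse, dash pulse, and inter-element gap are constant byte strings, so they can be synthesized once and concatenated thereafter. This is a hypothetical illustration assuming the standard PARIS timing convention (a dot lasts 1.2/WPM seconds, a dash is three dots), which may differ from the uploaded code:

```python
import math
import struct

FS = 11025  # sample rate from earlier in the thread

def tone_bytes(freq, seconds, fs=FS, amp=0.6):
    # Synthesize a plain (unshaped) 16-bit little-endian tone pulse.
    peak = int(amp * 32767)
    return b''.join(
        struct.pack('<h', int(peak * math.sin(2 * math.pi * freq * i / fs)))
        for i in range(int(seconds * fs)))

def build_cache(wpm, freq, fs=FS):
    # One-time synthesis: dot = 1.2/WPM seconds (PARIS convention),
    # dash = 3 dots, inter-element gap = 1 dot of silence.
    dot = 1.2 / wpm
    return {
        '.': tone_bytes(freq, dot, fs),
        '-': tone_bytes(freq, 3 * dot, fs),
        ' ': b'\x00\x00' * int(dot * fs),
    }

def render(symbols, cache):
    # Reuse cached pulses instead of re-synthesizing each one.
    return b''.join(cache[s] for s in symbols)
```

Rendering then becomes pure byte concatenation, which trades a small amount of memory for far less per-pulse CPU work, as described above.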