Binary files read and write
Do the array approach and the struct approach generate the same list?
What is printed if you add
print(floats_in_the_file) when you run the script against a 5,529,600 byte file? I would expect
You could try removing bogus values by post-processing the list with:
my_list = [(fast, slow) for (fast, slow) in my_list if fast > 0 and slow > 0]  # remove invalid pairs
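As a runnable sketch of that filter (the sample values are made up for illustration; the real list would come from the binary file):

```python
# Made-up (fast, slow) sample pairs; a pair is invalid if either value is <= 0.
my_list = [(55.2, 54.8), (0.0, 0.0), (61.0, -1.0), (48.3, 47.9)]
cleaned = [(fast, slow) for (fast, slow) in my_list if fast > 0 and slow > 0]
print(cleaned)  # [(55.2, 54.8), (48.3, 47.9)]
```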
You are right, the maximum number of data points in the binary file is 1,382,400. I'm new to Pythonista and I'll have to read up on how to detect and remove NaN and Infinite values in Python and what array functions are available. I have an academic Apple Developer license and I'm exploring all the available options to process the noise data within the iOS environment with a universal standalone app. As far as I know, Pythonista seems to be the only one that can import SPL data with a script from Dropbox into its sandbox, avoiding the cumbersome iTunes File Sharing. The project is part of an epidemiological investigation on Environmental Noise and Health which includes, among other challenges, the simultaneous recording of an ECG.
Thanks for your valuable help.
OK... In just over 1 second,
SPLnFFT_Reader.py reads 1,382,400 floats out of the binary file, converts that into a 2d list of 691,200
fast_slow pairs, cleans that down to a 2d list of 2,786 valid
fast_slow pairs, and prints out the first 50 pairs.
My cleansing step might not be right for your purposes. You can use
math.isinf() to find those values, but I do not believe that it is required anymore because the author of the SPLnFFT app told me in an email that "In the matlab [example] script there is some processing to get rid of NaN data. But I thought I had solved this in latest release of SPLnFFT".
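For reference, a minimal sketch of detecting those special values with the standard library (the sample values are made up): `math.isnan()` and `math.isinf()` test each condition individually, while `math.isfinite()` rejects both at once.

```python
import math

# Made-up readings containing the special IEEE 754 values.
values = [52.1, float('nan'), float('inf'), 47.6, float('-inf')]
finite_values = [v for v in values if math.isfinite(v)]
print(finite_values)  # [52.1, 47.6]
```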
numpy gurus... Why does this not work as expected?
import numpy

data = numpy.fromfile('SPLnFFT_2015_07_21.bin', dtype=float)
print(len(data))  # 691200 :-( this is half of the expected number
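The halved count comes from element size: `dtype=float` means a C double (8 bytes), but the file stores 4-byte floats, so each value read consumes two of the file's floats. The arithmetic can be checked with the standard library:

```python
import struct

# A C float is 4 bytes; a C double is 8 bytes.  Reading a file of 4-byte
# floats as doubles consumes two file values per element, halving the count.
print(struct.calcsize('f'))  # 4
print(struct.calcsize('d'))  # 8
print(5529600 // struct.calcsize('f'))  # 1382400 floats actually in the file
print(5529600 // struct.calcsize('d'))  # 691200 elements when misread as doubles
```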
omz last edited by
You can try this:
data = numpy.fromfile('SPLnFFT_2015_07_21.bin', dtype=numpy.dtype('f4'))
Python's float data type is usually implemented as a
double (8 bytes), so this specifies the number of bytes explicitly.
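The `'f4'` dtype string is just an explicit spelling of a 4-byte (32-bit) float; it is the same dtype as `numpy.float32`:

```python
import numpy

# 'f4' means a 4-byte float, identical to numpy.float32.
print(numpy.dtype('f4') == numpy.dtype(numpy.float32))  # True
print(numpy.dtype('f4').itemsize)  # 4
print(numpy.dtype(float).itemsize)  # 8 -- the double that caused the halved count
```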
Phuket2 last edited by
Do issues still exist with byte order on different platforms? I really don't know. A long time ago, we used to have to consider this: big- and little-endian, when reading binary/memory files without an API that took care of the translation.
Yes. Complexity is preserved but it is better hidden. The fortunate thing here is that the file in question was written out by one iOS app (SPLnFFT) and read in by another iOS app (Pythonista) so byte order is not an issue.
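When byte order does matter, it can be made explicit rather than left to the platform default. A small sketch with the `struct` module (`<` forces little-endian, `>` big-endian; numpy accepts the same prefixes in dtype strings such as `'<f4'`):

```python
import struct

value = 1.0
little = struct.pack('<f', value)  # explicit little-endian (iOS/ARM, x86)
big = struct.pack('>f', value)     # explicit big-endian (network order)
print(little.hex(), big.hex())     # same four bytes, reversed order
print(little == big[::-1])         # True
```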
Phuket2 last edited by
@ccc OK, understood. Honestly, I was not even sure these issues still existed. Regardless, normally they have no impact as long as you are making API calls; it's when we decide to get tricky and implement our own functions/methods for reading so-called cross-platform files. But in this environment, I think it's food for thought. As you say, in this case both files were written from iOS, so it's not a problem.
Now I understand why numpy is all the rage with data scientists!!!
3 lines of numpy do the whole thing!! Import, read, transform, and cleanse. Much faster execution time too.
import numpy

data = numpy.fromfile('SPLnFFT_2015_07_21.bin', dtype=numpy.float32).reshape(-1, 2)
data = data[numpy.all(data > 0, axis=1)]  # cleanse
print(type(data), len(data))  # numpy.ndarray, 2786
print(data[:20])  # print first 20 fast, slow pairs
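The same reshape-and-mask idiom can be tried without the binary file, on a made-up in-memory array (the sample values are invented for illustration):

```python
import numpy

# Flat stream of fast, slow, fast, slow, ... values (made-up sample data).
flat = numpy.array([55.2, 54.8, 0.0, 0.0, 61.0, -1.0, 48.3, 47.9],
                   dtype=numpy.float32)
pairs = flat.reshape(-1, 2)                   # one (fast, slow) pair per row
valid = pairs[numpy.all(pairs > 0, axis=1)]   # keep rows where both values > 0
print(valid)
```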
Hi ccc. I tried your script with an edited version of a SPLnFFT binary file from before the last two iOS updates. Last night I made some random noise measurements. To my surprise, the exported files had many chunks of zeroes alternating with chunks of normal SPL values. That is not normal behavior. No NaN or Infinite values were detected this time. If you give me a mail address, I can send you a link to some test files in my Dropbox account. The struct approach and the array approach render the same results. You just gave another present to SPLnFFT users with your SPLnFFT_Reader.py. I'll download and try it right away. Best regards
I suggest you use the numpy version instead. It is simpler, faster, and easier to mess around with. If you have a computer with an IPython notebook, that would be a great environment for exploring the dataset.
To send a Dropbox file, you can check it directly into the Github repo above via a pull request or you can go into your Dropbox client and tap once on the file to select it and then tap the share icon (a box with an arrow pointing up out of it) and share as email. Cut the URL out of that draft email, and paste it into a comment on the repo or here.
Does the SPLnFFT guy have Matlab scripts that read and plot the data? The screenshots show such an .m file. If you have a copy of that, it would explain how to parse and interpret the data.
Yes, there is a Matlab script that you can use with Octave with no changes. There is also an Excel macro available that allows you to process the whole file in one-hour chunks. An attractive feature of Pythonista for me is the possibility to import the SPLnFFT bin files, or any other file type, directly from Dropbox into its sandbox. You don't need a desktop computer, and it avoids the cumbersome process of iTunes File Sharing. SPLnFFT is linked to another app by the same author, SPLnWATCH, which can record in the background, an excellent battery and screen saver option.
Can you post a link to the Matlab script?
Is Octave this app http://octilab.com ? How did you get the .bin file into that app?
You can get it at the SPLnFFT Noise Meter developer web page. He currently uses a Facebook account. If he sent you an email, I think you may ask him for a copy and he will be happy to send it to you. You have to use it on a desktop computer because the online iOS apps Octilab and Octave Pro don't have file I/O support. I know nothing about copyright, but as a user I have a copy stored in my Dropbox account. It's in fact a Matlab script but it works in Octave. Of the scripts available, I only used the Excel macro. You need Microsoft Office 10 or above. I had it installed on a PC with Windows XP Pro, but they stopped the OS support some months ago. With the excellent scripts you supplied and my iPad Air 2, I don't need it at all to import and process the binary data. I also have an iOS BASIC interpreter with a powerful graphics class that has an option to compile the source code with Xcode. I'm still struggling with the Python code to plot the data imported with your Pythonista scripts. By the way, can Pythonista scripts be compiled with Apple's Xcode? I use it with a Mac Mini. I'll do anything needed to avoid iTunes File Sharing in the standalone iOS app I'm developing for my noise project.
There is an Xcode template that allows you to compile your Pythonista scripts into standalone iOS apps that you can put into the Apple App Store.
See the changes made to SPLnFFT_Reader_numpy.py. I added a matplotlib scatter chart of the data to show you the graphics capabilities of Pythonista. I could really use the help of someone who knows matplotlib to make the graphic more relevant to this dataset (x=fastFFTs, y=slowFFTs).
ManuelU last edited by ccc
Thanks ccc. The only thing needed is an x_time vector, depending on the total number of data points. This plot shows the correlation between SLOW and FAST values.
In the Matlab script, the count for a 24-hour record is created as:
count = 24 * 3600 * 8 * 2;
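Reading that count as seconds × samples per second × two floats per sample, a time axis for the pairs could be built like this (the 8 pairs-per-second rate is an assumption taken from the Matlab formula above):

```python
import numpy

# count = 24 * 3600 * 8 * 2 floats = seconds * samples/sec * (fast, slow).
# Assuming 8 (fast, slow) pairs per second, the time axis in seconds is:
n_pairs = 24 * 3600 * 8
x_time = numpy.arange(n_pairs) / 8.0   # seconds since start of recording
print(x_time[:4])    # first four timestamps: 0.0, 0.125, 0.25, 0.375
print(x_time[-1])    # 86399.875, just under 24 hours
```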
The Xcode template sounds interesting. Unfortunately, I have an academic license and you need to register all devices where the app will be used. I only have an iPad 3 and an iPad Air 2, and Xcode only allows 64-bit devices, from the iPad 3 and above. This issue could be solved by buying a commercial license, but my intentions are only academic.
In the Matlab script he also filters Inf values, and plots them in a third color with a value of 1 plus the max value in the dataset.
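That recoloring trick can be sketched in numpy before handing the values to a plotting library (the sample readings are made up; the replaced values would then be drawn in a third color):

```python
import numpy

# Made-up readings containing a couple of Inf values.
data = numpy.array([52.0, numpy.inf, 48.5, 61.2, numpy.inf],
                   dtype=numpy.float32)
finite = data[numpy.isfinite(data)]
cap = finite.max() + 1                         # 1 + max of the real data
plotted = numpy.where(numpy.isinf(data), cap, data)
print(plotted)  # Infs replaced by the cap value, ready to plot separately
```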
My sense from my emails from the author of SPLnFFT is that the newer files have NO Infs and NO NaNs. I do not find either in the files that I generate with SPLnFFT. I will verify this with the author.
If you know how to create a matplotlib plot that looks like what Matlab generates, I would be happy to accept the pull request ;-).