new file/project couldn't be created
-
@pavlinb you said that memory was about 150MB after a test, and that test was not trying to write to memory?
I don't understand the difference between the two mlmodel tests, one writing and the other not.
Memory was about 150MB when the test finished without crashing - I checked the memory regularly.
When Pythonista crashed, I checked the memory again - it was 3GB.
I tried to OCR an image in which the algorithm recognizes a lot of characters. Maybe this is the main reason.
Do you want a copy of that image to try?
Now I can't edit scripts and can't create new ones in Pythonista.
Some scripts still work.
But those that use mlmodels crash when the script tries to load the model.
-
@pavlinb I don't want to take the risk of needing to reinstall, sorry.
If you remove the Pythonista app from the active apps list and wait some time, does the memory vary?
-
@pavlinb I think it is now time to ask for help from our big gurus...
-
I wonder, can you check the Pythonista temp directory?
Are you using a local file when you load your model? Or an internet URL?
-
Also... what is your model_url? We need to check that it is actually valid.
-
@JonB said:
I wonder, can you check the Pythonista temp directory?
Sure, what do you need?
Are you using a local file when you load your model? Or an internet URL?
I use local files.
-
I mean check it for excessively large temp files.
What is your exact model_url?
Can you get a small model to work?
Have you already pre-compiled the model?
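One quick way to do that check: walk the app's tmp directory and list anything suspiciously large. This is just a sketch; `find_large_files` and the 50 MB threshold are my own choices, not part of omz's code.

```python
import os
import tempfile

def find_large_files(root, min_bytes=50 * 1024 * 1024):
    """Walk a directory tree and return (path, size) for files above a threshold."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if size >= min_bytes:
                hits.append((path, size))
    return sorted(hits, key=lambda t: -t[1])  # biggest first

# On Pythonista, tempfile.gettempdir() points at the app's own tmp directory.
for path, size in find_large_files(tempfile.gettempdir()):
    print('%8.1f MB  %s' % (size / 1e6, path))
```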
-
https://alexsosn.github.io/ml/2017/06/09/Core-ML-will-not-Work-for-Your-App.html
This mentions that complex models can turn into several GB on the device. You might also try backing up afterwards and looking at what files get created and where. Also, I'm thinking that if you put your file into a folder whose name starts with a period, that keeps Pythonista from showing/indexing it.
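Creating such a hidden location is a one-liner; the folder name here is just an example, and the indexing behavior is the assumption stated above, not something I've verified in the Pythonista source.

```python
import os

# A folder whose name starts with a period; Pythonista reportedly skips
# such folders when showing/indexing files, so a multi-GB compiled model
# stored here should not bog down the file browser.
cache_dir = os.path.join(os.path.expanduser('~/Documents'), '.mlmodel_cache')
os.makedirs(cache_dir, exist_ok=True)
print(cache_dir)
```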
-
If he uses the mlmodel I found, it is here
but I also used it without any problems, with images of short texts
-
Yes, I used the mentioned model.
And with clean images with few symbols, all is OK.
The problem probably occurs with complex images.
-
@JonB said:
Have you already pre-compiled the model?
The omz code compiles the model before using it.
-
For each character, the compiled model is generated at
file:///private/var/mobile/Containers/Data/Application/C285FD04-6489-45E5-A6C5-D4A44D300BBC/tmp/(A%20Document%20Being%20Saved%20By%20Pythonista3%20643)/OCR.mlmodelc/
Omz code needs to be improved so this compilation is done only once, not for each character...
-
It might be a good idea to compile it once, copy it to a regular location, then delete the tmp file. That's what Apple recommends.
If you run code that does a fresh compile every time, you will quickly fill up the tmp storage. I'm not sure how reliable the cleaning of tmp files is.
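The compile-once pattern can be sketched like this. Note that `_compile_model` here is only a placeholder: on the device the real call would be Core ML's `MLModel.compileModelAtURL:error:` (reachable from Pythonista via `objc_util`), which writes a compiled `.mlmodelc` directory into tmp. The caching logic around it is the part Apple recommends.

```python
import os
import shutil
import tempfile

def _compile_model(model_path):
    """Placeholder for the real Core ML compile step
    (MLModel.compileModelAtURL:error: on iOS), which produces a
    compiled .mlmodelc directory in the app's tmp storage."""
    return tempfile.mkdtemp(suffix='.mlmodelc')

def get_compiled_model(model_path, cache_dir):
    """Return the path to a cached compiled model, compiling at most once."""
    name = os.path.splitext(os.path.basename(model_path))[0]
    cached = os.path.join(cache_dir, name + '.mlmodelc')
    if not os.path.isdir(cached):
        os.makedirs(cache_dir, exist_ok=True)
        tmp_compiled = _compile_model(model_path)  # lands in tmp
        shutil.move(tmp_compiled, cached)          # persist outside tmp
    return cached
```

On later runs the cached `.mlmodelc` is found and the expensive compile (and the tmp litter it leaves behind) is skipped entirely.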
-
Agreed.
-
Set vn_model as a global, call load_model at the beginning of main, and remove the load_model call in _classify_img_data.
The script is quicker and does not generate a compiled model for each character.
-
After the run, the tmp folder is empty.
-
```python
def load_model():
    global vn_model
    # ...

def _classify_img_data(img_data):
    '''The main image classification method, used by `classify_image`
    (for camera images) and `classify_asset` (for photo library assets).'''
    global vn_model
    # vn_model = load_model()  # removed: the model is now loaded once in main()
    # ...

def main():
    global vn_model
    vn_model = load_model()
```
-
After some tests with small texts, Pythonista storage has increased by 1GB, but I hope this will decrease in a few hours...