Will pandas and scipy be available on Pythonista in the near future?
ccc last edited by
As you can see, Pandas/SciPy are requested very often.
@jserenson Hi, to perform calculations with scipy or pandas from Pythonista, you could use a script ('sage_interface.py') that talks to a remote Python environment designed for math with Python (SageMathCell). One limitation: your device needs to be online, and if many people are using the server at the same time, even a simple calculation can take a long time (the server is busy). The core code that passes a Python string to SageMathCell is by Andrey Novoseltsev et al. and can be found here.
If you are interested, try the simple procedure here to use this script with Pythonista (or any Python environment with the websocket-client library installed).
A full scipy script can generally not be executed directly this way, because plots cannot be shown easily in the Pythonista console. But with some code adjustments you can use all the scipy libraries provided by SageMath.
Some interesting resources are:
As a test, if you want, you could try the wrench version of 'sage_interface.py' written by JonB (see the external thread indicated above), or the function version of 'sage_interface.py', on the scipy example found at https://www.scipy.org/getting-started.html.
The example code from the scipy Getting Started page needs to be adapted a bit if you want to plot the solution in Pythonista, but if you work mainly with arrays and numbers, the above solution ('sage_interface.py', wrench or function version) works well enough.
Analyser last edited by
This post is deleted!
Phuket2 last edited by
@Analyser, this post seems a little open ended. E.g., I am left wondering whether it is the price or a maintenance issue. Even if you had offered them for free, he still might not have been willing to include them, due to the extra responsibilities he would have to take on.
@Analyser Hi, thank you for the info. Your app seems very good for doing math with Python, but in my opinion it is a bit expensive, sorry for the observation. Consider that even if you added scipy and other powerful math libraries to Analyser, users can't use/install other non-pure-Python libraries that depend on scipy and/or pandas. This is, in my opinion, the biggest problem with Apple's policy on development apps (IDEs) in the store: most of the development apps in the store are "static", i.e. there is no way for the user to extend the app's capabilities (for example by adding external libraries when needed) unless the developer decides to add them.
If you allowed users to test your app for a short time, maybe they could decide whether or not to buy it, but that is just my opinion (no offense intended; I know you have to pay Apple to develop and publish your apps. In fact, I dislike Apple's choice to force users to pay if they want to develop an app, even as a hobby, with Xcode, which is only available for Mac; I much prefer an "open" approach, that is, a full IDE on the device itself with which I can write scripts for my own purposes).
However, thanks again for your work on Analyser!
Analyser last edited by Analyser
This post is deleted!
ihf last edited by
Analyser looks quite interesting. Personally, I would not want to give up the (many) superb features of Pythonista. I hope that @omz will (someday) find a way to add these much requested modules. (no drink for this one :-)
Phuket2 last edited by
@Analyser, OK, cool. I think my post may have come across as a bit mean or cold; it was not meant to be. I think I was rushing. I was just trying to point out, as a developer (though I have not been one for so many years that it hardly matters), that on the surface some decisions look like they should be no-brainers. But I am sure you are aware that when you add any third-party dependency to your app/development, you have to ask yourself many questions about the present and the future. I don't pretend to understand what would be involved here. I just know that sometimes the smallest things can have a huge domino effect. And to be fair, that could be in a positive way as well.
Anyway, for me I have only a single motivation. That is that Pythonista is the best it can be for all its users.
bpunktm last edited by
Add pandas and scipy and I will instantly buy the app :D
ericbaranowski last edited by
With pandas not being integrated, I was always interested in using a remote Python environment. Running and editing code with Pythonista is so much more fun than any other method; I just wish I could use it more.
It is possible to use the sagemath servers for doing remote work. @Matteo has some scripts for interacting with sagecell
Matteo last edited by Matteo
@ericbaranowski Hi, I have run out of ideas for improving sage_interface, but if you have any hints for improving it, post them here!
sage_interface has the following characteristics and limits:
- You must be online.
- Max 64 KB for input scripts.
- Max 2 minutes of computation per script (this needs testing; I'm not sure. Andrey, the author, told me that a user could also have up to 2 hours for a full math session, but these limits could change if Andrey decides to change them).
- About 4-5 GB of RAM available at most (info from Andrey).
- No way to install/compile your own libraries (non-pure-Python ones, like the latest version of scipy, for example, or any other library you want).
- sage_interface: a) sends a script to the server, b) waits for the server output, c) processes the output string, saving each piece into a Pythonista variable of your choice, with image and numpy array capabilities. Steps a) and b) come from the original script by the SageMathCell authors (see here), while c) was written by me with the help of @JonB and @ccc. I also use the wrench version by JonB a lot, because when I want to test a script that I will use often in the future, I find it very useful to run it with a single tap, the same way I tap the built-in run button of Pythonista (Pythonista is very customizable if you know how to customize it ;-)). The wrench version has a powerful error-checking feature for the script sent to the server, so you can easily find any error returned by the server. My function version has only very basic error checking, but I use it because Pythonista lets you open several files and switch easily among them.
Regarding the second limit above (the input-size cap): if you need to solve a very big linear system, for example, you could create a text file with the big system as a sequence of coefficients (A and b for the classical Ax = b), save it to your Dropbox (or any online storage service), have the server open your data file remotely and load its contents into variables of the server's Python environment, and finally send the solver script to the server, using the previously created variables as input data.
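To make the idea concrete, here is a self-contained sketch of that round trip. The one-row-per-line text format and the tiny pure-Python Gaussian elimination are just stand-ins of mine for the real uploaded file and the server-side scipy/numpy solver:

```python
# Sketch: serialize A and b for upload, parse them back (as a server-side
# script could), and solve Ax = b. The file format is illustrative only.

def serialize(A, b):
    # One row per line: the coefficients of A, then the entry of b.
    return "\n".join(" ".join(str(v) for v in row) + " " + str(rhs)
                     for row, rhs in zip(A, b))

def parse(text):
    rows = [[float(v) for v in line.split()] for line in text.splitlines()]
    return [r[:-1] for r in rows], [r[-1] for r in rows]

def solve(A, b):
    # Naive Gaussian elimination with partial pivoting (small dense systems);
    # on the server you would call scipy/numpy instead.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
x = solve(*parse(serialize(A, b)))
print(x)  # solution of 2x+y=3, x+3y=5
```

The same split (serialize locally, parse and solve remotely) works for any data too big to inline in the 64 KB script.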
My only goal now for sage_interface is to write a series of standard scripts (standard for me, but adaptable by users) that send to the SageMathCell server scripts for calculations in the following numerical math fields (which I use often, for work or for hobby):
- PDE solvers
- ODE solvers
- Large linear system solvers (sparse, dense)
- Nonlinear system solvers
- Optimization (linear or nonlinear, with/without constraints, data fitting with plots).
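As a flavor of what such a script could compute, here is a minimal explicit-Euler integrator for the test problem y' = -y. This toy stand-in is mine; a real script sent to the server would use scipy's solvers instead:

```python
import math

def euler(f, y0, t0, t1, n):
    """Explicit Euler: integrate y' = f(t, y) from t0 to t1 in n steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = -y, y(0) = 1  ->  exact solution y(t) = exp(-t)
y = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 10000)
print(y, math.exp(-1.0))  # the two values should agree to ~4 decimal places
```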
Another thing: with sage_interface you can import a remote pure-Python library into the server's Python environment with httpimport by John Torakis, but it obviously works only with pure-Python libraries; for now there is no way to use it with C/Fortran-based Python libraries. See here.
I asked John Torakis whether he knows of any Python library that lets the user import any other library (pure Python or not) as a full folder with compiled C/Fortran code (for example the latest scipy, compiled for the same OS environment the server uses; for SageMathCell that is Linux), but he doesn't. httpimport is great for pure-Python libraries, and you can use it if you want to keep your site-packages folder in Pythonista clean (though if you use a library often, you would prefer to install it instead of remotely importing it every time you need it).
Does anyone know whether a native (Linux/Windows/Mac) linker-loader using HTTP/S (or networking in general) exists (quoting John's answer to my question)?
Sorry to insist, but I'm still interested in the question I posted a few days ago:
does anyone know whether a native (Linux/Windows/Mac) linker-loader using HTTP/S (or networking in general) exists?
In other words:
- Let's say I have two computers (A and B) with the same OS;
- on computer A I have a fully working Python distribution with a properly installed scipy version;
- on computer B I have another fully working Python distribution (same Python version as computer A), but I don't have the scipy library and I cannot/don't want to install it;
- I copy the full scipy folder from computer A to a remote service (GitHub, Dropbox, iCloud, etc.); the full scipy folder contains not only pure-Python scripts but also compiled files such as .dll or .lib, or any other kind of file scipy requires. Let's suppose the full scipy folder is fully portable (it should be: some time ago I compiled a scipy version on my PC, copied the scipy folder created in site-packages, and pasted it into a different Python distribution on a different PC with the same OS; without recompiling on the second PC, I was able to use the scipy compiled on the first one);
- I know the HTTPS link (shared link) of the full scipy folder stored in the remote service.
Question: does there exist today a way to use the existing Python environment of computer B (with an Internet connection) to import, only into B's RAM, the full scipy folder via the HTTPS link from the last point above, in order to use it with Python as if the full scipy folder were stored on B's hard drive?
I'm only asking whether any of you know of the existence of such a tool and where I can find it.
Thank you for any hint
ccc last edited by
This post is deleted!
@Matteo For pure Python modules that wouldn't be too difficult, you could write an import hook that tries to download the requested module from a server and then loads that. With native modules that's not as easy: I don't know of any OS that lets you load a native library from RAM. For example, the Unix dlopen function only takes a file path. You could download the library to a local file and load that, but if you can do that, you could install the library permanently as well.
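A minimal sketch of such an import hook for pure-Python modules; here the "downloaded" source is just an in-memory string standing in for what a real hook would fetch over HTTP/S:

```python
import importlib.abc
import importlib.util
import sys

# Stand-in for what a real hook would download from a server.
REMOTE_SOURCES = {
    "remote_mod": "def greet(name):\n    return 'hello ' + name\n",
}

class RemoteFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, fullname, path=None, target=None):
        if fullname in REMOTE_SOURCES:
            return importlib.util.spec_from_loader(fullname, self)
        return None  # let the normal import machinery handle everything else

    def create_module(self, spec):
        return None  # use the default module creation

    def exec_module(self, module):
        # Execute the (notionally downloaded) source in the module's namespace.
        exec(REMOTE_SOURCES[module.__name__], module.__dict__)

sys.meta_path.insert(0, RemoteFinder())

import remote_mod
print(remote_mod.greet("Pythonista"))
```

This is essentially what httpimport does, and it also shows why the trick stops at pure Python: exec works on source text, while a native extension would need the OS loader (dlopen and friends).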
In any case, this would be of no use on iOS anyway. Because of iOS app sandboxing, an app is only allowed to load native libraries that are code-signed by Apple or the app developer. This is why you cannot, for example, compile SciPy for iOS on a Mac and copy over the compiled libraries. Even if you code-signed them with your own iOS development certificate, iOS would refuse to let Pythonista load them, because the library was signed by a different developer than the app.
@dgelessus I have been thinking about this recently... Does iOS actually prevent remapping RW memory to RX?
If so, how do CFUNCTYPE callbacks work under the hood?
I have not tried this yet, but it looks interesting..
Of course, even if that works, the tricky bit would be writing one's own loader in python.
Thinking about this more (though without looking at the source), CFUNCTYPE almost certainly works by having an executable stack, or else a WX page full of trampoline functions. So it seems like it should be possible to use the same method to call object code in memory... I can see it maybe working for statically linked code, but a loader would be ugly for something like pandas.
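For reference, this is the machinery under discussion: CFUNCTYPE wraps a Python callable in a real C function pointer, so some executable thunk has to exist somewhere in memory. A tiny demonstration:

```python
import ctypes

# CFUNCTYPE(restype, *argtypes) builds a C-callable function pointer
# around a Python callable.
ADDFUNC = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)

add = ADDFUNC(lambda a, b: a + b)

# Calling the ctypes object goes through the generated C thunk
# and back into the interpreter.
print(add(2, 3))

# The thunk has a real machine address, like any C function pointer,
# which is what could be handed to C code expecting a callback.
print(hex(ctypes.cast(add, ctypes.c_void_p).value))
```

How that thunk's memory gets its execute permission on iOS is exactly the question.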
Hi, thanks for the replies, guys. I'd like to clarify that I don't want to import any precompiled non-pure-Python libraries into Pythonista (that is impossible for now), but into a computer (call it "X") with a working Python environment; and I'm interested in doing that with the remote computer provided by SageCellServer, which is a virtual machine with a Linux OS. If a user can set up their own server on a home computer connected to the Internet, they can use it instead of the SageMathCell one, removing the limits related to the impossibility of installing one's own libraries.
Sorry, but let me try to explain what I have in mind in other words below.
I’d like to use Pythonista with my idevice:
to send some scripts (like the library "httpimport" by John Torakis, which however works only for pure-Python libraries, without any .dll, .lib, etc. files) to the remote server (computer "X"). These scripts are the main point of my question: do they exist? Has someone created them? What do you know?
after the remote server has received these scripts, it should run them in order to import, only into the RAM (not onto the hard disk) of computer "X", a full non-pure-Python library properly compiled on another computer (with suitable compilers, the same OS, etc. as the remote one, "X"); let's suppose this library I want to import only into the RAM of computer "X" is stored in a folder of my Dropbox and I have its shared link;
after the import, I'd like to use the remote Python interpreter of SageMathCell (or another available remote computer), using Pythonista as a tool that sends Python code to the server and receives the output from it.
I know that an OS (Windows, Linux, Mac, other...) needs a hard disk in order to work (is that true for all of them? I'm not sure). I think that nowadays no tool exists that lets the user choose whether to install external software (a Python library, a fully portable executable program, a full IDE with any kind of compiler):
on the hard disk, or
directly in RAM, as if a portion of RAM, set aside by the OS, behaved the same way as a hard disk (so all files are stored in RAM with the same folder hierarchy as on a hard disk).
As an example, let's suppose I have on my PC (call it "Z") a Python environment installed on the hard disk at the path "C:\Python27\" under Windows (as an example, but I'm asking whether it is possible also for Linux, Mac, others...).
Now suppose that
- I want to import a Python library on my PC (without storing it on the hard disk via "pip install"). With the httpimport library by Torakis it is possible to temporarily import a pure-Python library only into RAM; but unfortunately, suppose I want to import into RAM a non-pure-Python library, like scipy, so I can't use httpimport.
- Suppose also that the library I want to import only into RAM needs some libraries already present in my OS, stored in some folders on the hard disk.
Well, can I:
a) compile the non-pure library with a suitable compiler in the usual way, i.e. by installing the compiler on the hard disk of a PC and using it to compile the Python library,
b) copy the compiled Python library, named "Y" and stored in a cloud service accessible via an Internet connection, only into a portion of my PC's RAM (as if it were in "C:\Python27\Lib\site-packages\my-compiled-python-lib") with a Python command/function like the ones used for downloading, e.g. the "urllib" module,
c) tell the Python interpreter of PC "Z" to import the library "Y" in order to use it?
In my idea, the library "Y", which needs some external libraries from my OS (see the second point above), would search for those external libraries stored on PC "Z"'s hard disk even though "Y" itself is only in RAM.
So, is it possible?
Thank you guys
@JonB I don't know exactly how ctypes creates callback functions in Pythonista. Normally this is handled by libffi, for which you can find the source at https://github.com/libffi/libffi. However, I don't know if @omz had to modify libffi to make it work on iOS, so the official source code may be inaccurate.
What I did find, however, are the internals of the Objective-C runtime's imp_implementationWithBlock, which effectively lets you create an IMP that calls a given block. See a1a2-blocktramps-arm64.s in https://opensource.apple.com/source/objc4/objc4-723/runtime/. The way it works is that the assembly code defines a page full of identical trampolines. The trampoline code invokes a block located exactly one page before the entry point of the trampoline. That way all trampolines use the same code, but end up invoking different blocks.
Now, to make use of these trampolines, imp_implementationWithBlock allocates two pages of read-write memory. Using the Mach vm_remap function, the page of trampolines from the assembly code is mapped into the second newly allocated page, which also changes the permissions of that page to read-execute. The first page is still read-write and is used to store the blocks, which can then be called by the corresponding trampoline in the next page.
None of this requires an executable stack or any other sort of write-execute memory, since the executable code is loaded unmodified from the library and then mapped into memory multiple times without modification. I don't think iOS allows write-execute memory; that would make it very easy to circumvent the code-signing restrictions.