Posts made by paultopia
-
RE: Sharing a script: solve arbitrary blocks of text as math.
@mikael That would be cool! I can stick this into a proper repo if you want ...
-
Sharing a script: solve arbitrary blocks of text as math.
A common task I have: I want to write down some numbers or a simple formula of some kind, and then go ahead and do something with the numbers or solve a simple equation with them. Examples: writing down how much I paid for each bill in a note or drafts, then summing them; working out a simple algebra equation for something like a mildly complicated tax/tip situation and then solving it.
There are lots of calculators in the built-in Pythonista examples, but none that quite met my needs for dealing with pre-written text. There are also special-purpose apps for things like ticker-tape calculators, but I don't think any of them do algebra, and, besides, why pay for a separate app when Pythonista is right there and has the full power of sympy available?
So I wrote my own little script. It takes a string from the share sheet or clipboard. If it's just a list of numbers, it sums them. If there are arithmetic operators, it evaluates the expression. If it's a simple one-variable expression, it assumes it's an equation with the other side equal to zero and feeds it to sympy to solve. If it's a one-variable equation, it also goes to sympy. Should handle most basic day-to-day ticker-tape math needs, in only 75-ish lines of actual code.
https://gist.github.com/paultopia/16e69f9f72ad0a5000fa5b4575ddeee2
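For the curious, the dispatch works roughly like this. This is a minimal sketch with a hypothetical `classify_and_solve` helper, not the gist's actual code, and the sympy branch is omitted:

```python
import re

def classify_and_solve(text):
    # Hypothetical sketch of the dispatch; the real script is in the gist.
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    # Case 1: just a list of numbers -> sum them.
    if lines and all(re.fullmatch(r"-?\d+(\.\d+)?", ln) for ln in lines):
        return sum(float(ln) for ln in lines)
    expr = " ".join(lines)
    # Case 2: plain arithmetic -> evaluate it.
    if re.fullmatch(r"[\d\.\s+\-*/()]+", expr):
        return eval(expr, {"__builtins__": {}})  # digits/operators only
    # Case 3: a variable is present -> the real script hands off to sympy.
    return None

print(classify_and_solve("12.50\n3\n4"))  # 19.5
print(classify_and_solve("15.5 * 2"))     # 31.0
```

The `eval` is tolerable here only because the regex has already restricted the expression to digits, dots, and arithmetic operators; sympy's `sympify`/`solve` handle the variable cases.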
-
appex.get_url only works when url is on clipboard
From the Safari share sheet, I have the following snippet of code:

import appex, clipboard

url = appex.get_url()
print(url)
if url is None:
    url = clipboard.get()
print(url)
If I just run it straight from a web page in mobile Safari, it prints None and None.
But if I first copy the url to the clipboard, it prints the URL both times.
Is there some weird multithreading thing happening here? Can't figure out why else this would be happening.
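One workaround guess, not an explanation: the share sheet sometimes delivers a page address as shared text rather than as a URL item, so falling through appex.get_url(), appex.get_text(), and clipboard.get() in order may help. `resolve_url` below is a hypothetical helper with the three sources passed in as plain arguments so the logic is easy to check off-device:

```python
def resolve_url(shared_url, shared_text, clipboard_text):
    # Try each candidate source in order, keeping the first
    # value that actually looks like a URL.
    for candidate in (shared_url, shared_text, clipboard_text):
        if candidate and candidate.startswith(("http://", "https://")):
            return candidate
    return None
```

In the extension itself you'd call it as `resolve_url(appex.get_url(), appex.get_text(), clipboard.get())`.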
-
Inconsistent behavior when sharing photos into pythonista?
I've been doing some experiments with sharing image files into pythonista from the built-in photos app, to do things like zip up and share elsewhere. But I seem to get inconsistent data.
Here's my very simple experimental code:
import appex

def test_save_images():
    images = appex.get_images_data()
    for idx, image in enumerate(images):
        filename = "zfile-" + str(idx) + ".jpg"
        with open(filename, "wb") as ot:
            ot.write(image)

if __name__ == '__main__':
    test_save_images()
Then I go into the built-in Photos app, select two photos, and share them into Pythonista via "Run Pythonista Script."
So far, I've done this twice, and neither time has it worked properly.
The first time, instead of saving both photos into the pythonista filesystem, it saved one photo twice.
The second time, it saved three files: one photo once, and the other photo twice.
I obviously find this a little confusing. Why is get_images_data not passing the number of photos that I would expect into the script?
-
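I haven't found the cause, but as a workaround, deduplicating the incoming blobs by content hash at least stops the same photo being written twice. A sketch, with the `save` callback injectable so the logic can be tested off-device:

```python
import hashlib

def save_unique_images(blobs, save=None):
    # Skip any blob whose bytes we've already seen, so a photo the
    # share sheet hands over twice is only written once.
    seen, saved = set(), []
    for idx, blob in enumerate(blobs):
        digest = hashlib.sha256(blob).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        filename = "zfile-" + str(idx) + ".jpg"
        if save is not None:
            save(filename, blob)
        saved.append(filename)
    return saved
```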
RE: Twitter module with Pythonista 3 on iOS 11
Dear kind leader @omz --- now that this has happened, is there any chance you could bundle tweepy with the next version of Pythonista?
-
Simple weight tracker
Hey y'all. I've banged together a simple Dropbox-syncing weight tracker with moving averages (like the old "Hacker's Diet"). The idea is to just stick it in your phone in the long-press menu and then it should be minimal friction to track weight.
https://gist.github.com/paultopia/95a4c659f2d1971e2e08711d807065d2
(Though come to think of it, I haven't tested that the dialog menu works right on long-press, or that it sizes right on the phone screen---I just banged it out and tested on the iPad... but I imagine it should work fine on the phone.)
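The trend line is the classic Hacker's Diet exponentially smoothed moving average; a sketch of the idea, simplified from what's in the gist:

```python
def trend_line(weights, smoothing=0.9):
    # Each day's trend value moves 10% of the way (with the default
    # smoothing of 0.9) from yesterday's trend toward today's weight,
    # which damps out day-to-day water-weight noise.
    trend = []
    for w in weights:
        if not trend:
            trend.append(float(w))
        else:
            trend.append(trend[-1] * smoothing + w * (1 - smoothing))
    return trend
```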
-
Tweetstorm script
Simple tweetstorm poster (break up long posts into numbered tweets), making use of the lovely api wrapper and ui wrapper omz built into the app. https://gist.github.com/paultopia/13263b605862121e11a737155b2c7779
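The core splitting step is just greedy word-packing with a numbered prefix; something like this sketch (hypothetical names; the gist handles the actual posting via the twitter and ui modules omz bundled):

```python
def make_tweetstorm(text, limit=280):
    # Pack words greedily into chunks, reserving a few characters
    # for the "N/ " numbering prefix.
    words = text.split()
    chunks, current = [], ""
    budget = limit - 5
    for word in words:
        candidate = (current + " " + word).strip()
        if len(candidate) > budget and current:
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return ["%d/ %s" % (i + 1, chunk) for i, chunk in enumerate(chunks)]
```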
-
twitter jerk remover lazyscript
Super-annoying behavior on Twitter: the sort of person who follows you to get you to follow them back and inflate their follower count/promote their junk, then silently unfollows you.
Problem: Twitter's terms of service forbid automated unfollowing. While it would be pretty easy to do anyway, why draw an account ban unnecessarily?
Solution: find people whom you follow but who don't follow you back, open their Twitter pages in a browser tab, and decide individually whether to keep or get rid of them, in a loop. Complies with the letter, if not the spirit, of the Twitter terms.
Could use a real UI or at least a button or something, but I haven't yet learned how to use the UI module, so.
https://gist.github.com/paultopia/236bfe61782cd7e5ad7f0a4f00edd202
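The find-the-non-followbacks part is just a set difference over the two id lists the Twitter API gives you; a sketch with plain iterables (the review loop then opens each profile in the browser and asks keep-or-go by hand):

```python
def non_followbacks(friend_ids, follower_ids):
    # People you follow (your "friends") minus people who follow you back,
    # sorted so the output is deterministic.
    return sorted(set(friend_ids) - set(follower_ids))
```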
-
RE: Sync files with Dropbox
If you just want to grab a script from your Dropbox, here's a simple 8-liner that does it via the share sheet. Put it in your Pythonista share-sheet extensions; then, from the Dropbox app, run the script, and it will land in your Pythonista file system with no muss or fuss.
https://gist.github.com/paultopia/23703b934c442a54808e245d9418545a
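The gist of the gist, so to speak: take the shared file's path and copy it into the local filesystem. A sketch with the path passed in as an argument (in the extension it would come from appex):

```python
import os
import shutil

def import_shared_file(shared_path, dest_dir="."):
    # Copy the shared file into dest_dir, keeping its original name.
    dest = os.path.join(dest_dir, os.path.basename(shared_path))
    shutil.copy(shared_path, dest)
    return dest
```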
-
RE: Dropbox in script executed as a sharing extension
Why is this, anyway? Can this be a feature request: just as Pythonista can access its own filesystem from the share sheet, could it also access its own keychain from the share sheet?
It's very weird to create a passwords.json (!!) to get around this...
-
RE: Quick hackish html-book documentation scraper for offline reading
FYI, I've tossed up a quick Python 3-compatible version in a gist for the latest version of Pythonista. (Next steps: a proper repo with a version that can handle 2 or 3, plus hopefully/maybe/one day a way to grab images and include them in the resulting html.)
https://gist.github.com/paultopia/39cb21e080b4abe24de8056e92a40ed2
-
RE: Quick hackish html-book documentation scraper for offline reading
Absolutely @ccc -- here's a full-fledged repo: https://github.com/paultopia/spideyscrape
PRs welcome!
Also, I've refactored a little to make the code a bit more modular, and also to produce technically valid html.
-
RE: Quick hackish html-book documentation scraper for offline reading
Improved! There's a new and much more effective version of the script that:
- Confirms links come from same domain
- Better handles URLs relative to the site root rather than to the ToC folder.
Gist: https://gist.github.com/paultopia/02ca124a111a70faf174
-
RE: Quick hackish html-book documentation scraper for offline reading
Heh yeah I'm about to add a little validation just for sanity-preserving purposes.
I also just updated so it can handle ToC pages other than index.html or equivalent.
-
Quick hackish html-book documentation scraper for offline reading
You know what's annoying? When people post stuff online as html-books, like with a table of contents page and then a bunch of linked sub-pages with all the content. Lots of documentation, in particular, is organized that way (example: http://tldp.org/LDP/abs/html/ ) and it drives me nuts because I like to read stuff offline, like on my iPad on airplanes.
Solution: a script to crawl such pages starting from the ToC and scrape unique URLs linked therein, one level deep, and append them to one big file that can then be read offline.
Gist: https://gist.github.com/paultopia/460acfda07f9ca7314e5
Takes the URL of the ToC page from raw_input and deposits an html file in Pythonista's internal file system. From there, do with it what you will---pass it to a Dropbox upload script, pass it to docverter or something to make it into a PDF, whev. You can also use Pythonista's export function to get it into Dropbox or another app (like a PDF converter) the easy way, but, for some odd reason, the export function only works when the filename ends with .py (why is this, anyway?), so you'll have to edit the filename and then edit it back.
Caveats: it assumes all links are relative URLs on the same server, and it has exactly zero validation to check that (easy to add, I just haven't bothered); it will probably crash if that assumption is violated. It also produces invalid html, but not in any way that will bother any browser or converter. Finally, it assumes the documents to scrape are vanilla html, with no Ajax calls or the like.
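The link-collection step works roughly like this sketch: regex extraction plus same-domain filtering, which would also cover the first caveat. (Python 3 here, matching the updated gist; the real script could lean on bs4 instead of a regex, since Pythonista bundles it.)

```python
import re
from urllib.parse import urljoin

def collect_links(toc_html, toc_url):
    # Pull href targets from the ToC page, resolve them against the
    # ToC URL, and keep unique same-domain links in order of appearance.
    domain = toc_url.split("/")[2]
    seen, links = set(), []
    for href in re.findall(r'href="([^"#]+)"', toc_html):
        url = urljoin(toc_url, href)
        if url.split("/")[2] == domain and url not in seen:
            seen.add(url)
            links.append(url)
    return links
```

The scraper then fetches each collected URL once, one level deep, and appends the bodies to the big output file.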