So, I am fascinated by the async stuff, and by Trio in particular, as it seems very thoroughly thought out and much less prone to random exceptions than asyncio.
So here's the traditional experiment: parallel loading of web pages. I have a list of the 56 highest-traffic sites. A straightforward site-after-site fetch takes around a minute and a half on my network.
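For reference, the sequential baseline can be sketched with nothing but the standard library. This is just an illustration of the site-after-site approach, not my exact script; the name fetch_all_sequential is mine, and sites is assumed to be the list of hostnames:

```python
import time
import urllib.request

def fetch_all_sequential(sites, timeout=5):
    """Fetch each site one after the other; return elapsed seconds."""
    start = time.perf_counter()
    for site in sites:
        try:
            with urllib.request.urlopen('https://' + site, timeout=timeout) as r:
                content = r.read()
            print(site)
        except Exception as e:
            # A slow or unreachable site just gets reported and skipped.
            print(site, 'failed:', e)
    return time.perf_counter() - start
```

With ~56 sites, each request waiting its turn, the total time is the sum of all the individual latencies, which is where the minute and a half comes from.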
To use Trio for fetching web pages on Pythonista, you need to pip install:
trio
asks
contextvars
Also, you need to include the following, to circumvent or silence errors and warnings due to the way Pythonista has set up signal and sys.excepthook handling:
import warnings, signal

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    import trio

signal.signal(signal.SIGINT, signal.SIG_DFL)
The reward is of course that the same list can then be fetched in parallel in 4-6 seconds, i.e. in about 1/20th of the time that the sequential approach takes.
import asks
import trio

s = asks.Session(connections=100)

async def grabber(site):
    r = await s.get('https://' + site, timeout=5)
    content = r.text
    print(site)

async def main():
    async with trio.open_nursery() as n:
        for site in sites:
            n.start_soon(grabber, site)

trio.run(main)
Ok, so then I thought about Scripter, the generator-based asynchronous 'language' originally designed to run off the Pythonista UI loop, driving UI animations.
I changed Scripter to use Trio as the underlying loop, and to support calling async def functions as part of the scripts. Here is the Scripter version of the Trio code above:
@script
def retrieve_all():
    for site in sites:
        worker(site)
        print(site)

@script
def worker(site):
    result = get('https://' + site, timeout=5)
    yield
    content = result.value.text
I have gotten so used to Scripter that I much prefer the "parallel by default" way it operates. This is of course somewhat in opposition to the "explicit parallelism" philosophy of the asyncio/trio crowd.