slow autocomplete for complex objects
since we are getting some console autocomplete improvements, might be worth asking again...
Certain objects, namely
requests responses, take a LONG time to autocomplete, and autocomplete seems to have to start from scratch on each character:
>>> import requests
>>> r = requests.get('https://google.com')
Typing r. takes like three seconds before another character can be typed. This is also bad in objc_util.
Ideally, the autocomplete would run in a cancellable thread so that typing on the keyboard stays responsive.
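A minimal sketch of what such a cancellable completion worker could look like. All names here (CompletionWorker, fetch, callback) are illustrative, not Pythonista's actual API: the slow inspection runs on a background thread, and a cancel flag makes stale results get dropped instead of blocking the keyboard.

```python
import threading

class CompletionWorker:
    """Hypothetical sketch: run slow completion fetches off the UI thread."""

    def __init__(self):
        self._cancel = threading.Event()
        self._thread = None

    def request(self, fetch, callback):
        # Cancel any in-flight completion before starting a new one.
        self.cancel()
        self._cancel = threading.Event()
        cancel = self._cancel

        def run():
            result = fetch()          # potentially slow dir()/callable checks
            if not cancel.is_set():   # drop stale results after cancellation
                callback(result)

        self._thread = threading.Thread(target=run, daemon=True)
        self._thread.start()

    def cancel(self):
        self._cancel.set()
```

Each keystroke would call request() again; the previous fetch may still finish, but its result is simply discarded, so the editor never waits on it.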
This may be related to another bug in recent versions of the beta: with the cursor at the end of a line ending in (), backspacing two characters to delete the () results in a pause, then deletion of three characters. Basically, deleting a left parenthesis often results in two characters being removed. This happens often on complex objects like requests responses and ObjCInstance objects, but perhaps not on simpler ones.
I have that on my radar.
From @omz on Twitter:
Yeah, I saw that. It doesn't bother me that much on a fast device tbh, but I definitely see the problem. Python 3 is generally a bit slower than Python 2 (not that much though, so I'm not sure if it's actually noticeable there).
Yeah, there's something about requests specifically that makes code completion slow, not completely sure what it is, but it's the same for Python 2, I think.
I have the suspicion that it's doing something funky in
__dir__() or something like that. The way the console's code completion works, there is a chance for it to trigger arbitrary code execution (which makes your/JonB's point all the more valid, of course).
Just to provide users some more information and context on the subject.
The next build should actually improve the situation quite a bit (while not completely eliminating the problem). It may still take a few seconds for some objects until completions show up, but there shouldn't be a delay anymore when you continue to type or delete characters.
It'll basically just use a cache for this, because the completions for
r.iter_... can simply be determined by filtering the previously fetched completions for
r. The cache obviously has to be cleared after each statement, so you may still see a delay when starting to type the next line, but it shouldn't be nearly as bad as it is now.
@omz In case you haven't figured it out already - this seems to be part of the issue:
Response objects have a property
apparent_encoding, which internally calls
chardet.detect to guess which encoding the response data uses. Getting that property probably takes a while. (I assume Pythonista reads every attribute once to check whether it's callable, so it can add an opening paren at the end.)
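The mechanism can be demonstrated with a toy class (this is a stand-in, not requests itself): a property that does heavy work, like apparent_encoding calling chardet.detect, runs the moment a naive completer reads the attribute to test whether it is callable.

```python
import time

class FakeResponse:
    """Toy model of requests.Response for illustration only."""
    @property
    def apparent_encoding(self):
        time.sleep(0.05)  # stands in for chardet.detect on the body
        return 'utf-8'

def naive_complete(obj):
    out = []
    for a in dir(obj):
        value = getattr(obj, a)  # this line evaluates every property
        out.append(a + '()' if callable(value) else a)
    return out
```

Running naive_complete on a FakeResponse pays the property's cost even though the user never asked for apparent_encoding.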
Maybe the "callable check" could be done only for attributes present in the instance's
__dict__? I think those are guaranteed to be a normal dictionary lookup unless a custom
__getattribute__ gets in the way.
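A rough sketch of this suggestion (function name assumed for illustration): only attributes found in the instance's __dict__ get the callable check, and everything else, properties included, is listed without parens, so no descriptor code ever runs.

```python
def safe_complete(obj):
    # Callable check restricted to the instance __dict__, per the
    # suggestion above; class-level properties are never evaluated.
    inst = getattr(obj, '__dict__', {})
    out = []
    for a in dir(obj):
        if a in inst and callable(inst[a]):
            out.append(a + '()')
        else:
            out.append(a)
    return out
```

The trade-off discussed below applies: ordinary methods live on the class, not the instance __dict__, so they would not get parens appended either.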
@dgelessus That's interesting, I wasn't sure which attribute it was, but I suspected something similar. I think one problem with your proposed solution is that it wouldn't work for
objc_util, where all the ObjC methods are callable, but not present in
__dict__. The __dict__ check is of course not perfect; as you said, it won't detect "non-standard" attributes as callable even if they are. And it just occurred to me that some objects may not have a
__dict__ at all (instances of many built-in classes, or user classes with
__slots__), so that's another issue.
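The __slots__ caveat is easy to verify: an instance of a class that defines __slots__ has no __dict__ at all, so any __dict__-based check needs a fallback.

```python
class Slotted:
    __slots__ = ('x',)   # instances get no per-instance __dict__

class Plain:
    pass                 # instances get a normal __dict__

has_dict_slotted = hasattr(Slotted(), '__dict__')  # False
has_dict_plain = hasattr(Plain(), '__dict__')      # True
```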
Another option might be to have an optional, user-settable autocompletion delay when doing the long dir parsing. Caching will be nice though; I think my main complaint is the per-character reparsing.
@JonB I don't think a delay would help very much if the completion itself isn't done asynchronously because it would just block a little later... It would certainly be better if the completion was actually asynchronous, but it's a little tricky to get this right... I'm actually uploading a new build right now (mostly because of the iOS 8 crashes), so you can see how much the cache helps in a couple of hours.
This version is definitely an improvement. ObjC autocomplete is quick, and I love the fuzzy suggestions for ObjC. I wonder if fuzzy suggestions for attributes starting with underscores should be ordered at the end, though?
requests objects are still a bugaboo, since the first time still takes about 3-4 seconds on my ancient device. Strange though: dir runs in less than a millisecond (and detect only runs once), so I don't get what is happening (settrace doesn't show any other code being run, though I think you've got jedi running in a separate interpreter now?). Perhaps it is a recursion thing, making static analysis in jedi difficult. Gittle objects are also painfully slow; these are full of recursive and functional-programming decorators galore.
@JonB The delay the first time is normal, it has to check whether every attribute is callable once, and then it caches that. There's no way around the first check without workarounds like I discussed above.
Also, the interactive console's autocomplete doesn't use jedi; it does normal object inspection with dir().
@dgelessus Well, it could be sped up if there is a recursion bug that could be fixed. If not, it could still be run asynchronously
I see... adding those ()'s requires actually getting the attribute (which technically could have side effects in a class with a very poor design).
Not sure how robust this is, but perhaps it would be worth checking what is in type(r) first? For dynamic properties which return callables, this wouldn't append the ()'s... but it avoids most side effects.
def getattribstrings(o):
    A = []
    itstype = type(o)
    for a in dir(o):
        if hasattr(itstype, a):
            # Attribute exists on the class: check callability there, so
            # instance-level properties are never evaluated.
            if callable(getattr(itstype, a)):
                A.append(a + '()')
            else:
                A.append(a)
        else:
            # Instance-only attribute: fall back to checking the object.
            if callable(getattr(o, a)):
                A.append(a + '()')
            else:
                A.append(a)
    return A
For a Response object, this runs in under 1 msec and produces results identical to simply running the code in the else branch on the object.