copy.deepcopy or copy.copy with classes using ui.View
I know I have asked this before in various ways. I hope I am in a better position to understand the answers now, and to express what I am trying to achieve.
At the end of the post there is some simple code that creates a custom ui class and then adds it as a subview to a ui.View. I have tried to keep it as simple as possible.
What I would love to be able to do is create that class once and assign it to a variable. Thereafter, when I want an instance of that class, I would like to be able to copy the memory object to get a new instance.
The code below shows a very simple example and probably does not warrant speeding up. But what I really want to do is have the 'Cell' class read multiple .pyui files to composite/render a view. So let's say 5 or 6 or more .pyui files loaded to compose my view, and I need this composite view built as fast as possible. It makes sense to me that if I composite the view once, then just copy memory bytes, there would be a huge difference in speed. I can also appreciate there are some file buffering issues I am unaware of. But still, the execution of the Python code in ui.py must be hefty and slow compared to just copying some memory bytes.
In the code below is the version that works. Just normal: create a class inheriting from ui.View, then add it as a subview to another view. But the lines above it, commented out, try to use copy.copy and copy.deepcopy; neither results in the view being shown.
I have tried to comprehend the copy module the best I can, it seems to me it should be possible. But maybe there is something under Pythonista's hood that makes this impossible.
I would like to try and find out once and for all whether I am on a fool's quest or not.
Any help really appreciated. Again, sorry I have asked this before. If there was a good answer, I was not able to comprehend it. I am hoping I can now!
import ui
import copy

class Cell(ui.View):
    def __init__(self):
        self.width = 100
        self.height = 20
        lb = ui.Label(frame=self.frame)
        lb.text = 'Hello World'
        self.add_subview(lb)

if __name__ == '__main__':
    cell_template = Cell()
    f = (0, 0, 500, 400)
    v = ui.View(frame=f)
    #c = copy.deepcopy(cell_template)
    #c = copy.copy(cell_template)
    c = cell_template
    v.add_subview(c)
    v.present('sheet')
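For what it's worth, the same limitation shows up with any Python object that wraps a native resource. As a rough analogy (using a threading.Lock rather than a ui.View, since a lock is also backed by a C-level object that the copy module cannot duplicate; the class name here is invented):

```python
import copy
import threading

class NativeBacked:
    """Stand-in for a class wrapping a native resource (analogy only)."""
    def __init__(self):
        # A lock is backed by a C-level object, a bit like a ui.View
        # being backed by an Objective-C view.
        self._handle = threading.Lock()

obj = NativeBacked()
try:
    clone = copy.deepcopy(obj)
except TypeError as e:
    # deepcopy cannot duplicate the underlying native handle
    print('deepcopy failed:', e)
```

With ui.View the copy may not raise at all, but the Objective-C side of the object is still not duplicated, which is why the copied view never shows up.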
Ok, no answer yet. I hope I get a good one. But in the interim I have thought about another possible approach, like a pool of connections. I will try to make x objects and buffer them, so to speak, then reuse them over and over again, never deleting them. The virtual view I was working on would benefit greatly from this approach, I think. I am still working on a virtual view, but it's different in the sense that I am trying to create a Facebook-style news feed: variable-height rows with a variety of compound views. LOL, I don't think it will be easy, but the more I try, the more refined the ideas become, at least I hope they are :)
Take a look back at omz's original gridcellview that he showed when you started working on the virtual cell class. The idea was that you had a pool of objects already created, and you just call configure when the cell comes into view, to add whatever text is needed, etc. When objects scrolled offscreen, they were simply returned to that pool. Don't try to copy: you are dealing with objects managed by Objective-C, and just copying memory will not do what you want.
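A minimal sketch of that reuse pattern with plain Python objects (the names PooledCell, configure, checkout, and checkin are my own, not from omz's code):

```python
class PooledCell:
    """Stand-in for an expensive-to-create view (hypothetical class)."""
    def __init__(self):
        self.text = ''

    def configure(self, text):
        # Called when the cell scrolls into view.
        self.text = text

    def reset(self):
        # Called when the cell scrolls offscreen.
        self.text = ''

class CellPool:
    def __init__(self, size):
        # Create the expensive objects once, up front.
        self._free = [PooledCell() for _ in range(size)]

    def checkout(self, text):
        # Reuse a free cell if one exists, otherwise grow the pool.
        cell = self._free.pop() if self._free else PooledCell()
        cell.configure(text)
        return cell

    def checkin(self, cell):
        cell.reset()
        self._free.append(cell)

pool = CellPool(2)
a = pool.checkout('row 1')
pool.checkin(a)
b = pool.checkout('row 2')
print(a is b)  # prints: True -- the same object was reused
```

The expensive part (construction) happens once; scrolling only ever pays for configure/reset.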
@JonB, thanks, now you mention it, I can remember it. I put it aside because I couldn't understand it. I was trying to simplify the code for myself so I could get my head around it. Lol, so full circle I go ;)
I found this code for a very simple object pool.
It's implemented as a singleton. Hmmm, I didn't know about singletons. But I did some reading and it seems the Python community is split about whether they are evil or not. I have seen more comments about them being evil, so I will just change it to a conventional class. It appears singletons have their purpose, but I am still crawling along, so will avoid them for the moment. I shouldn't need global access for my purposes. The class seems very simple; I hope I have understood it correctly.
Ok, I made these small mods... I am still testing, but it seems ok. I guess I could also implement it with thread-safe queues. But as long as the implementation is encapsulated, I hope that as I learn more I can update it without too much trouble. I know it's trivial, but when you are not doing it all the time, not so trivial.
class ObjectPool(object):
    '''
    Resource manager. Handles checking out and returning
    resources from clients.
    '''
    def __init__(self, Resource):
        self.__resources = list()
        self.Resource = Resource

    def getResource(self, *args):
        if len(self.__resources) > 0:
            print('Using existing resource.')
            return self.__resources.pop(0)
        else:
            print('Creating new resource.')
            return self.Resource(*args)

    def returnResource(self, resource):
        print('resource returned')
        if hasattr(resource, 'reset'):
            resource.reset()
            print('reset method called on resource')
        self.__resources.append(resource)

    def addResource(self, resource):
        self.__resources.append(resource)
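On the thread-safe-queue idea: queue.Queue gives you most of a thread-safe pool for free. A sketch (the class and method names are my own, not a drop-in replacement for the pool above):

```python
import queue

class ThreadSafeObjectPool:
    """Object pool backed by queue.Queue; safe to share across threads."""
    def __init__(self, factory, size):
        self._factory = factory
        self._pool = queue.Queue()
        # Pre-populate the pool up front.
        for _ in range(size):
            self._pool.put(factory())

    def get_resource(self):
        try:
            # Non-blocking: fall back to creating a new object if empty.
            return self._pool.get_nowait()
        except queue.Empty:
            return self._factory()

    def return_resource(self, resource):
        # Call reset() if the resource supports it, then recycle it.
        if hasattr(resource, 'reset'):
            resource.reset()
        self._pool.put(resource)

pool = ThreadSafeObjectPool(list, 2)
r = pool.get_resource()
r.append('data')
del r[:]                  # list has no reset(), so clear it manually
pool.return_resource(r)
```

Queue handles all the locking internally, so get_resource and return_resource can be called from any thread.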
btw, you might be interested in this article from a few years back:
Although this is about HTML5, in some ways it is really about the poor design of the original Facebook native app. Many of the same ideas would apply to what you are doing, such as managing when/how you download and render images vs. text, etc.
@JonB, thanks. I did read the article. Interesting. Shame the app they made has since been taken down; it would have been fun to try it. I would say for the most part that FB must be doing a pretty good job these days. Love or hate FB, I think the app's pretty amazing.
My Python experience is still so lame. I just know enough to complicate the hell out of things without getting a result. I thought it was going to be easy to work with a series of binary files (fixed length) with pointers into other files. This does not seem to make sense to do in Python as it does in C. So I have moved on to using SQLite. It's simple enough, but I want to understand it, so I will spend some time with it. Story of my life :)
But it would be very interesting to see how the FB app/servers work and cache different types of data. You would imagine that the first time an article is ever fully rendered on a device, a bunch of information is calculated and sent back to their servers, and forever more passed to the app when the same story is requested. I can only assume they cache the articles themselves and are also serving them up from their own servers rather than pointing to the source (the inline articles). I still have a problem understanding why they need huge server farms around the world. I am probably being naive. Maybe it's just about the volume of requests they are handling on databases rather than data storage.
I thought it was going to be easy to work with a series of binary files (fixed length) with pointers into other files. This does not seem to make sense to do in Python
File formats are something that I love hacking with (see the work on SPLnFTT). If you are serious about this one and you post one of your binary files to a GitHub repo and write up what you already know about the file format and what you are after, I will take a look. I have a loooong train ride on Sunday and need a coding challenge.
@ccc, sorry I didn't answer earlier. I have so many problems with Safari. I could not use the forum properly and I didn't understand why. Then today, I remembered I am using a widget, speedafri, as an ad blocker, and it has broken in iOS 9.1.
Never mind. But with the binary files, I was just talking about making binary files easily as you would in C. They can be written and read efficiently, as well as jumping around in the file using pointers and offsets. I hadn't tried to do that in Python before; I just assumed it would be a trivial exercise as it is in C. But of course you need a struct/object of a fixed size for this to work easily and efficiently. I wasn't thinking of a specific format, other than a few roll-your-own indexes pointing to some structs in other files. We always did this in the old days because the size of our data didn't fit any desktop solutions. Intranet solutions were too costly, too hard for our customers to maintain, and still too slow. Sure, you take a development cost hit when you roll your own solution rather than using a commercial db, but it paid off for us. Example: even when CD drives were x2 or x4 we still got spectacular performance leaving all our indexes on the CD. Ok, I'll stop :) just to give you an idea behind my thinking.
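Fixed-size records actually work much the same way in Python via the struct module; a sketch, with a record layout I made up for illustration (4-byte little-endian int id plus a 16-byte name, written to an in-memory buffer rather than a real file):

```python
import io
import struct

# Hypothetical fixed-length record: 4-byte int id, 16-byte name.
RECORD = struct.Struct('<i16s')

buf = io.BytesIO()  # stands in for open('data.bin', 'wb')
for rec_id, name in [(1, b'alpha'), (2, b'beta'), (3, b'gamma')]:
    buf.write(RECORD.pack(rec_id, name))

# Random access by record number, just like fseek in C:
# offset = record_index * sizeof(record).
buf.seek(2 * RECORD.size)
rec_id, name = RECORD.unpack(buf.read(RECORD.size))
print(rec_id, name.rstrip(b'\x00'))  # prints: 3 b'gamma'
```

Because every record is RECORD.size bytes, an index in another file only needs to store record numbers (or byte offsets), exactly like the C approach.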
File reading in Python is basically the same as in C, though there is a little more verbosity to, say, just read a binary integer. Note you need the 'b' mode in your open call. You can use seek just like you would use fseek in C.
You will want to use either struct.unpack operating on the bytes returned by read, or you can use the file's readinto pointing to a ctypes structure.
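A quick sketch of the readinto-with-ctypes approach (the struct layout is invented for illustration, and an in-memory buffer stands in for a real binary file):

```python
import ctypes
import io

class Record(ctypes.Structure):
    # Equivalent of a packed C struct; fields invented for illustration.
    _pack_ = 1
    _fields_ = [('rec_id', ctypes.c_int32),
                ('value', ctypes.c_double)]

data = io.BytesIO(bytes(Record(7, 3.5)))  # serialize one record

rec = Record()
data.readinto(rec)   # fill the struct straight from the file object
print(rec.rec_id, rec.value)  # prints: 7 3.5
```

readinto writes directly into the structure's memory, so there is no intermediate bytes-to-fields conversion step as there is with struct.unpack.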
Of course, the Python paradigm would be to use pickle or marshal when reading stuff created in and intended to be used by Python, or json for a more human-readable format.
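For Python-to-Python data the pickle round trip is one line each way (the record contents here are just example data):

```python
import pickle

record = {'id': 42, 'name': 'alpha', 'tags': ['a', 'b']}
blob = pickle.dumps(record)           # arbitrary Python objects -> bytes
restored = pickle.loads(blob)         # and back again, unchanged
print(restored == record)             # prints: True
```

You give up the fixed record size and random access of the struct approach, but you can serialize nested structures without designing a format at all.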