My only other question is, how do I know when I'm pulling files from the server to my device and when I'm pushing files from my device to the server?
Gerzer last edited by Gerzer
If the command is RETR, the script is retrieving a file from the server; STOR means the script is storing a file on the server. Also, I was incorrect about the existence of a "walk"-type command in the FTP protocol, so I updated my previous post.
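For reference, in Python's ftplib those two protocol commands map directly onto storbinary and retrbinary. A minimal sketch (the helper names and file paths are just for illustration):

```python
from ftplib import FTP

def push(ftp, local_path, remote_name):
    # STOR: upload a file from the device to the server.
    with open(local_path, "rb") as f:
        ftp.storbinary("STOR " + remote_name, f)

def pull(ftp, remote_name, local_path):
    # RETR: download a file from the server to the device.
    with open(local_path, "wb") as f:
        ftp.retrbinary("RETR " + remote_name, f.write)
```

So when you see "STOR" in the script's traffic you're pushing, and "RETR" means you're pulling.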
Webmaster4o last edited by Webmaster4o
Ok. I'm working on a github-based alternative, because that will have built-in functionality to view older versions of code. I'm thinking something like a multi-device version of Time Machine on a Mac.
@Gerzer My idea for an improvement to FTP sync is that you could compress all files first (as in @omz's pythonista-backup.py) before uploading to the FTP server. This would prevent having to upload files one by one, and also reduce file size on the server. Then the script could simply retrieve and decompress files on the other end.
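Roughly, the compress-then-upload idea could look like this; the host, credentials, and remote filename here are hypothetical placeholders, not part of any existing script:

```python
import os
import shutil
import tempfile
from ftplib import FTP

def backup_to_ftp(source_dir, host, user, password):
    # Compress the whole directory into a single zip in a temp
    # location, then upload that one archive instead of many files.
    tmp = tempfile.mkdtemp()
    archive = shutil.make_archive(os.path.join(tmp, "backup"), "zip", source_dir)
    ftp = FTP(host)
    ftp.login(user, password)
    with open(archive, "rb") as f:
        ftp.storbinary("STOR backup.zip", f)
    ftp.quit()
    os.remove(archive)  # clean up the local temp archive
```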
Gerzer last edited by
That sounds interesting. How much value is there in editing files on the FTP server directly? If that is an important feature, I could create a desktop app to access the files. Perhaps that's not necessary, however. What do you think?
ccc last edited by ccc
It might be cool @Webmaster4o to also have a standalone script that walks the Pythonista directory structure creating a tarball (.gz, .zip, whatever) as it goes. This single file archive could then be transferred via FTP, Webdav, Dropbox, email, etc. I know that the Pythonista zipfile module has some upper limit on filesize.
It might also be cool to have a script that shows the 10 largest files in the Pythonista directory structure.
@omz's pythonista backup script creates a zip file of the entire structure with shutil.make_archive. He stores it in a temp file (if the zip were created inside the directory being archived, it would try to include itself, growing forever and never finishing) and then moves it to the main directory. I'm actively working on an FTP sync that compresses first; I've just written a file browser with a UI for viewing the contents of a zip file without extracting it first. The service will upload the entire zip, then let the user choose individually which files to extract from the archive. I'm also hoping to have a system in which multiple (5?) backups are kept on the server and you can choose which one to restore a file from.
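The browse-without-extracting part is straightforward with the standard zipfile module; a minimal sketch (function names are mine, not from the script being described):

```python
import zipfile

def list_archive(zip_path):
    # Read the archive's table of contents without extracting anything.
    with zipfile.ZipFile(zip_path) as zf:
        return [(info.filename, info.file_size) for info in zf.infolist()]

def extract_one(zip_path, member, dest="."):
    # Pull a single chosen file out of the archive.
    with zipfile.ZipFile(zip_path) as zf:
        zf.extract(member, path=dest)
```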
MartinPacker last edited by
I think the limit for a zip file is 2GB (though there might be a version that allows for more) @ccc.
Compressing my entire library yields about 70MB… I don't think 2GB in compressed form is realistic.
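On the limit question: the classic zip format caps sizes at 4 GiB (2 GiB in some implementations), and the ZIP64 extension lifts that. Python's zipfile exposes it via the allowZip64 flag, so if the limit ever mattered it could be enabled explicitly:

```python
import os
import tempfile
import zipfile

# allowZip64 lets zipfile write archives/members beyond the classic
# ~2-4 GiB zip limits (it is on by default in recent Python versions).
path = os.path.join(tempfile.mkdtemp(), "backup.zip")
with zipfile.ZipFile(path, "w", allowZip64=True) as zf:
    zf.writestr("note.txt", "ZIP64-capable archive")
```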