Sun Jan 05, 2020 2:34 pm
So, given these clever utilities, what's the best backup workflow for me? I wonder if I'm currently doing it the worst possible way...
I have a dozen Raspberry Pis of many kinds - a 256MB 1B, a 2, a 3, a 3A+, a 4, two Zeros, and five Zero Ws. These have a mix of storage - mostly 16GB cards, a couple of 32GB, while the 3 boots over USB from a 500GB hard drive and the 4 boots from a 32GB card with root on a 120GB SSD (courtesy of RonR's other scripts). (NOTE: I didn't try the 3 and 4 this time - they have too much data.)
Doing full backups obviously took quite a while, but incremental backups (thanks to rsync) were super fast. I told the backup script to create the image at /mnt/pixxx.img. Maybe the intent was an external drive, but I just created them locally. I'm used to compressed backups, so I gzipped the images locally and then scp'd them to my Mac, where they'll get backed up, along with everything else there, via Time Machine. So at least they're safe.
But this was all very slow - the initial creation, the gzipping, the transfer. I tried NOT gzipping one of them to save the compression time, but the transfer then took even longer, and I wasn't recording times carefully enough to tell which case was slower overall. Both were annoying. Gzipping is CPU- and I/O-intensive, but so is the transfer, at least with scp. Maybe plain ftp would be less intensive? The gzipping (per the "progress" utility) was often slower than a megabyte per second, though it spiked up to 10MB/sec at the end. The scp transfers (per scp's own progress report) were mostly about 2MB/sec, as low as 1, rarely as high as 4. I ran several at a time, for part of the time.
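One idea I've been toying with to cut the total time: compress and transfer in a single pipeline instead of two passes, so the image is read once, compressed on the fly, and the .gz only ever lands on the receiving machine, never on the SD card. The hostname and paths here are made-up placeholders for my setup; the second half just demonstrates the same pipeline locally with a small dummy file:

```shell
# One-pass variant (placeholder host/paths - adjust to your own setup):
#   gzip -c /mnt/pizero.img | ssh user@mymac.local 'cat > Backups/pizero.img.gz'

# Local demonstration of the same stream-compression pipeline:
IMG=/tmp/pidemo.img
dd if=/dev/zero of="$IMG" bs=1M count=4 2>/dev/null     # stand-in for /mnt/pixxx.img
gzip -c "$IMG" > /tmp/pidemo.img.gz                     # compress as a stream, one read pass
gunzip -c /tmp/pidemo.img.gz | cmp - "$IMG" && echo OK  # confirm it round-trips intact
```

No idea yet whether that actually helps on a Zero, since the CPU is still doing both the compression and the ssh encryption at once - but at least the card is only read once.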
So what's the bottleneck? The SD cards (mostly SanDisk Ultra+), the slow processors (on the Zeros and the 1B, not so much on the 2 or 3A+), or my slow wireless? Or all of the above...
Leaving the images uncompressed is the only way to allow fast incremental backups, but they would keep taking up a lot of my limited local storage, and I'd have to transfer each one off again in full after every incremental run.
RonR, were you assuming an external medium - a fast USB stick, an external hard drive, or even network space mounted over NFS or Samba? Then the images could be created there and left there uncompressed, facilitating later incremental backups and avoiding both the compression and the multiple transfers. Maybe this is a "duh!" - go ahead, I can take it...
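For what it's worth, here's roughly what the NFS flavor of that idea would look like on my end. This is just a sketch under assumptions - that the Mac (or a NAS) actually exports a share, and "mymac.local" and the paths are invented names:

```
# /etc/fstab on the Pi - hypothetical NFS export; host and paths are placeholders
mymac.local:/Users/me/PiBackups  /mnt/backup  nfs  defaults,noauto  0  0
```

Then a "mount /mnt/backup" and pointing the backup script at /mnt/backup/pixxx.img would mean the image is created and updated in place on the network share, stays uncompressed so rsync can keep doing fast incrementals, and never needs a separate gzip or scp pass at all.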