I’m planning on setting up a NAS/home server (primarily storage, with some Jellyfin and Nextcloud and such mixed in), and since it’s primarily for data storage I’d like to follow the 3-2-1 backup rule: 3 copies, on 2 different media, with 1 offsite. Well, actually I’m more going for a 2-1, with 2 copies and one offsite, but that’s beside the point. Now I’m wondering how to do the offsite backup properly.
My main goal would be to have an automatic system that does full system backups at a reasonable rate (I assume daily would be a bit much considering it’s gonna be a few TB worth of HDDs which aren’t exactly fast, but maybe weekly?) and then have 2-3 of those backups offsite at once as a sort of version control, if possible.
This has two components, the local upload system and the offsite storage provider. First the local system:
What is good software to encrypt the data before/while it’s uploaded?
While I’d preferably upload the data to a provider I trust, accidents happen, and since they don’t need to access the data, I’d prefer that they not be able to, maliciously or otherwise. So what is a good way to encrypt the data before it leaves my system?
What is a good way to upload the data?
After it has been encrypted, it needs to be sent. Is there any good software that can upload backups automatically on regular intervals? Maybe something that also handles the encryption part on the way?
Then there’s the offsite storage provider. Personally I’d appreciate as many suggestions as possible, as there is of course no one-size-fits-all, so if you’ve got good experiences with any, please do send their names. I’m basically just looking for network-attached drives: I send my data to them, I leave it there and trust it stays there, and in case more drives fail than my RAID-Z can handle (so two, in my setup), I’d like to be able to get the data back after I’ve replaced my drives. That’s all I really need from them.
For reference, this is gonna be my first NAS/Server/Anything of this sort. I realize it’s mostly a regular computer and am familiar enough with Linux, so I can handle that basic stuff, but for the things you wouldn’t do with a normal computer I am quite unfamiliar, so if any questions here seem dumb, I apologize. Thank you in advance for any information!
Syncthing to a pi at my parents place.
Low power server in a friends basement running syncthing
But doesn’t that sync in real-time? Making it not a true backup?
Agreed. I have it configured on a delay and with multiple file versions. I also have another Pi running rsnapshot (an rsync-based snapshot tool).
How’d you do that?
For the delay, I just reduce how often it checks for new files instead of having it sync instantaneously.
In theory you could set up a cron job with Docker Compose to fire up a container, sync, and shut down once all endpoint jobs have synced.
As it seemingly has an API, it should be possible. You could use snapshots to provide the backup portion.
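Sketched as plain cron, that could be as small as two crontab entries (the compose-file location and schedule here are made up; a fancier version would poll Syncthing’s REST API to confirm the folders are in sync before shutting down):

```
# Hypothetical crontab: start the Syncthing container Sunday night,
# give it a few hours to sync, then stop it.
0 1 * * 0  cd /opt/syncthing && docker compose up -d
0 6 * * 0  cd /opt/syncthing && docker compose down
```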
Have it sync the backup files from the “2” part of 3-2-1. You can then copy them out of the Syncthing folder to a local one with a cron job to rotate them. That way you get the sync offsite, and you can keep them out of the rotation as long as you want.
Huh, that’s a pretty good idea. I already have a Raspberry Pi set up at home, and it wouldn’t be hard to duplicate in another location.
I use borg backup. It, and another tool called restic, are meant for creating encrypted backups. Further, it can create backups regularly and only back up differences, meaning you could take a daily backup without making new copies of your entire library. They also allow you, as part of compressing and encrypting, to make a backup to a remote machine over SSH. I think you should start with either of those.
One provider that’s built for cloud backup is BorgBase. It can be a location you back up a borg (or restic, I think) repository to. There are others that are made to be easily accessed with these backup tools.
Lastly, I’ll mention that borg handles making a backup but doesn’t handle the scheduling. Borgmatic is another tool that, given a YAML configuration file, will run the borg commands on a schedule with the defined arguments. You could also use something like systemd timers or cron to run a schedule.
Personally, I use borgbackup configured in NixOS (which makes the systemd units for making daily backups) and I back up to a different computer in my house and to borgbase. I have 3 copies, 1 cloud and 2 in my home.
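For reference, a minimal borgmatic config in that spirit might look like this (repo URL, paths, and retention are placeholders I made up, not anyone’s actual setup; borgmatic can generate a full commented sample for you):

```yaml
# Hypothetical /etc/borgmatic/config.yaml
source_directories:
    - /srv/data
    - /home

repositories:
    - path: ssh://abc123@abc123.repo.borgbase.com/./repo
      label: borgbase

# What `borgmatic prune` keeps.
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```

A systemd timer or cron entry that runs `borgmatic` daily then handles the scheduling part.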
I’m just skipping that. How am I going to backup 48TB on an off-site backup?!
Only back up the essentials like photos and documents or rare media.
Don’t care about stuff like Avengers 4K that can easily be reacquired

This is what I’m currently doing. I use Backblaze B2 for basically everything that’s not movies/shows/music/ROMs, along with backing up my Docker stacks etc. to the same external drive my media’s currently on.
I’m looking at a few good steps to upgrade this but nothing excessive:
- NAS for media and storing local backups
- Regular backups of everything but media to a small USB drive
- Get a big ass external HDD that I’ll update once a month with everything and keep in my storage unit and ditch backblaze
Not the cleanest setup but it’ll do the job. The media backup is definitely gonna be more of a 2-1-Pray system LMAO but at least the important things will be regularly taken care of
I don’t have Avengers 4K. It’s all just Linux ISOs.
NAS at the parents’ house. Restic nightly job, with some plumbing scripts to automate it sensibly.
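A hedged sketch of what such a nightly job can look like (the repo address, paths, and retention below are invented, not the commenter’s actual setup; the plumbing scripts would be the error handling and health-check pings around this):

```shell
#!/bin/sh
# Nightly restic push to a NAS at another house (all names are assumptions).
REPO="sftp:backup@nas.parents.example:/srv/restic-repo"
SRC="/srv/data"
export RESTIC_PASSWORD_FILE="/root/.restic-password"

if command -v restic >/dev/null 2>&1; then
    # Upload only new/changed blobs, then thin out old snapshots.
    restic -r "$REPO" backup "$SRC" --exclude-caches || echo "backup failed"
    restic -r "$REPO" forget --prune --keep-daily 7 --keep-weekly 5 || echo "prune failed"
else
    echo "restic not installed; skipping"
fi
```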
This is mine exactly. Mine sends to Backblaze B2.
I don’t 🙃
Rsync to a Hetzner storage box. I don’t do ALL my data, just the Nextcloud data. The rest is… Linux ISOs… so I can redownload at my convenience.
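That can be as small as one rsync invocation; everything below (account name, paths) is a placeholder, and the port-23 detail comes from Hetzner’s storage box documentation:

```shell
#!/bin/sh
# Sketch: push the Nextcloud data dir to a Hetzner storage box over SSH.
SRC="/srv/nextcloud/data/"
DEST="u123456@u123456.your-storagebox.de:nextcloud/"

if [ -d "$SRC" ] && command -v rsync >/dev/null 2>&1; then
    rsync -az --delete \
        -e "ssh -p 23 -o BatchMode=yes -o ConnectTimeout=10" \
        "$SRC" "$DEST" || echo "rsync failed"
else
    echo "nothing to sync from $SRC on this machine"
fi
```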
I have a job, and the office is 35 km away. I have a locker at my office.
I have two backup drives, and every month or so, I will rotate them by taking one into the office and bringing the other home. I do this immediately after running a backup.
The drives are LUKS encrypted btrfs. Btrfs allows snapshots and compression. LUKS enables me to securely password protect the drive. My backup job is just a btrfs snapshot followed by an rsync command.
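As a sketch (mountpoints invented; the drive is assumed to be already unlocked with cryptsetup and mounted), the monthly job boils down to:

```shell
#!/bin/sh
# Snapshot the previous backup state read-only, then rsync current data in.
SRC="/srv/data/"
DRIVE="/mnt/backup"                           # LUKS-unlocked btrfs drive
SNAP="$DRIVE/snapshots/$(date +%Y-%m-%d)"

if [ -d "$DRIVE/current" ]; then
    btrfs subvolume snapshot -r "$DRIVE/current" "$SNAP" || echo "snapshot failed"
    rsync -a --delete "$SRC" "$DRIVE/current/" || echo "rsync failed"
else
    echo "backup drive not mounted at $DRIVE"
fi
```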
I don’t trust cloud backups. There was an event at work where Google Cloud accidentally deleted an entire company just as I was about to start a project there.
The easiest offsite backup would be any cloud platform. Downside is that you aren’t gonna own your own data like if you deployed your own system.
Next option is an external SSD that you leave at your work desk and take home once a week or so to update.
The most robust solution would be to find a friend or relative willing to let you set up a server in their house. Might need to cover part of their electric bill if your machine is hungry.
Hetzner Storagebox
Just recently moved from an S3 cloud provider to a storagebox. Prices are ok and sub accounts help clean things up.
My ratchet way of doing it is Backblaze. There is a docker container that lets you run the unlimited personal plan on Linux by emulating a windows environment. They let you set an encryption key so that they can’t access your data.
I’m sure there are a lot more professional and secure ways to do it, but my way is cheap, easy, and works.
What’s the container’s name? I was about to get backblaze and then was frustrated at the cost difference between the desktop personal plan and the one for deploying on my server
I used to say restic and b2; lately, the b2 part has become more iffy, because of scuttlebutt, but for now it’s still my offsite and will remain so until and unless the situation resolves unfavorably.
Restic is the core. It supports multiple cloud providers, making configuration and use trivial. It encrypts before sending, so the destination never has access to unencrypted blobs. It does incremental backups, and supports FUSE vfs mounting of backups, making accessing historical versions of individual files extremely easy. It’s OSS, and a single binary executable; IMHO it’s at the top of its class, commercial or OSS.
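The FUSE part in practice (repo path and mountpoint are invented):

```shell
#!/bin/sh
# Mount all snapshots read-only; historical file versions then appear
# under $MNT/snapshots/<timestamp>/... and can be copied out with cp.
REPO="/srv/restic-repo"
MNT="/mnt/restic"

if command -v restic >/dev/null 2>&1 && [ -d "$REPO" ]; then
    restic -r "$REPO" mount "$MNT"    # blocks until unmounted
else
    echo "restic repo not present on this machine; skipping"
fi
```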
B2 has been very good to me, and is a clear winner for this use case: writes and storage are pennies a month, and it only gets more expensive if you’re doing a lot of reads. The UI is straightforward and easy to use, the API is good; if it weren’t for their recent legal and financial drama, I’d still unreservedly recommend them. As it is, you’d have to evaluate it yourself.
I’m running the same setup, restic -> b2. Offsite I have a daily rclone job to pull (the diffs) from b2. Works perfectly, cost is < 1€ per month.
so if any questions here seem dumb
Not dumb. I say the same, but I have a severe inferiority complex and imposter syndrome. Most artists do.
1 local backup, 1 cloud backup, and 1 offsite backup to my tiny house at the lake.
I use Syncthing.
I just use restic. I’m pretty sure it uses checksums to verify data on the backup target, so it doesn’t need to copy all of the data there.
- wireguard
- rsync
- zfs
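A minimal version of that stack, assuming the WireGuard tunnel is already up and the peer answers at 10.0.0.2 (pool name, address, and paths are all invented):

```shell
#!/bin/sh
# Take a local ZFS snapshot for a consistent state, then rsync the live
# data to the peer through the WireGuard tunnel.
SNAP="tank/data@offsite-$(date +%Y%m%d)"
PEER="backup@10.0.0.2"

if command -v zfs >/dev/null 2>&1; then
    zfs snapshot "$SNAP" || echo "snapshot failed"
    rsync -az --delete -e "ssh -o BatchMode=yes -o ConnectTimeout=10" \
        /tank/data/ "$PEER:/backup/data/" || echo "rsync failed"
else
    echo "no zfs on this machine; skipping"
fi
```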
I use rsync.net
It’s not the lowest price, but I like the flexibility of access.
For instance, I was able to run rclone on their servers to do a direct copy from OneDrive to rsync.net, 400 GB, without having to go through my connection.
I can mount backups with sshfs if I want to, including the daily zfs snapshots.
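The sshfs mount is one command (account name and mountpoint invented; as I understand it, rsync.net exposes the ZFS snapshots as plain directories under .zfs/snapshot in the account root):

```shell
#!/bin/sh
# Mount the remote account locally; snapshots then browse like normal dirs.
REMOTE="user123@user123.rsync.net:"
MNT="$HOME/rsyncnet"

if command -v sshfs >/dev/null 2>&1; then
    mkdir -p "$MNT"
    sshfs -o BatchMode=yes,ConnectTimeout=10 "$REMOTE" "$MNT" \
        || echo "mount failed (no real account behind this sketch)"
else
    echo "sshfs not installed; skipping"
fi
```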