I’m trying to find a good method of making periodic, incremental backups. I assume the most minimal approach would be to have a cron job run rsync periodically, but I’m curious what other solutions may exist.

I’m interested in both command-line and GUI solutions.
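For reference, the minimal cron+rsync approach mentioned above can be made incremental with rsync’s `--link-dest` option, which hard-links unchanged files against the previous snapshot so each run only stores what changed. A rough sketch (paths are placeholders, not a tested setup):

```shell
#!/bin/sh
# backup.sh -- incremental rsync snapshots; unchanged files are hard-linked
# against the previous run, so every snapshot is fully browsable but cheap.
SRC="$HOME/"               # what to back up (placeholder)
DEST="/mnt/backup"         # external drive or mounted remote (placeholder)
STAMP=$(date +%Y-%m-%d_%H%M)

mkdir -p "$DEST"
# On the first run "latest" doesn't exist yet; rsync warns and does a full copy.
rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$STAMP"
# Point "latest" at the newest snapshot for the next run's --link-dest.
ln -sfn "$DEST/$STAMP" "$DEST/latest"
```

A crontab entry like `0 3 * * * /home/you/backup.sh` would run it nightly; pruning old snapshots is left out of this sketch.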

  • mariom@lemmy.world · 2 years ago

    Is it just me, or does the backup topic come up every few days on !linux@lemmy.ml and !selfhosted@lemmy.world?

    To stay on topic as well: I use a restic+autorestic combo. Pretty simple; I made a repo with a small script to generate the config for different machines, and that’s it. Backups are stored between machines and on B2.
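For anyone curious, an autorestic setup is mostly a single YAML file mapping locations to backends. A minimal sketch (the backend name, bucket, and paths here are invented for illustration, not the poster’s actual config):

```yaml
# .autorestic.yml -- hypothetical example
version: 2

backends:
  b2-offsite:
    type: b2
    path: "my-bucket:machines/laptop"

locations:
  home:
    from: /home/user
    to:
      - b2-offsite
```

`autorestic backup -a` then backs up every configured location; B2 credentials are supplied via restic’s usual environment variables.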

  • elscallr@lemmy.world · 2 years ago

    Exactly like you think. A cron job runs a periodic rsync of a handful of directories under /home. My OS is on a different drive that doesn’t get backed up. My configs are in an Ansible repository hosted on my home server and backed up the same way.

  • HughJanus@lemmy.ml · 2 years ago

    I don’t, really. I don’t have much data that is irreplaceable.

    The files that are get backed up manually to Proton Drive and to my NAS (via SMB).

  • KitchenNo2246@lemmy.world · 2 years ago

    All my devices use Syncthing via Tailscale to get my data to my server.

    From there, my server backs up nightly to rsync.net via BorgBackup.

    I then have Zabbix monitoring my backups to make sure a daily is always uploaded.
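The BorgBackup leg of a setup like this is typically a nightly cron script along these lines (the repo URL, data path, and retention policy below are placeholders; rsync.net simply exposes the repo over SSH):

```shell
#!/bin/sh
# Nightly Borg backup to an SSH-reachable repo (e.g. rsync.net).
# Repo URL, passphrase handling, and retention are illustrative only.
export BORG_REPO="ssh://user@user.rsync.net/./borg-repo"
export BORG_PASSPHRASE="$(cat /root/.borg-passphrase)"

# Create a deduplicated, compressed archive named after host and date.
borg create --stats --compression zstd \
    ::'{hostname}-{now:%Y-%m-%d}' /srv/data

# Thin out old archives so the repo doesn't grow without bound.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```

A monitoring hook (for Zabbix or similar) would usually check the script’s exit status or the timestamp of the newest archive.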

  • donio@lemmy.world · 2 years ago

    Restic since 2018, both to locally hosted storage and to remote storage over SSH. I have “stuff I care about” and “stuff that can be relatively easily replaced” fairly well separated, so my filtering rules are not too complicated. I used duplicity for many years before that, and afbackup to DLT IV tapes prior to that.

  • rodbiren@midwest.social · 2 years ago

    I use Syncthing on several devices to replicate the data I want to keep backups of: family photos, journals, important docs, etc. Works perfectly, and I run a relay node to give back to the community, given that I’m on an unlimited data connection.

    • stewsters@lemmy.world · 2 years ago

      I use Syncthing for my documents as well. My source code is on GitHub if it’s important, and I can reinstall everything else if I need to.

  • to_urcite_ty_kokos@lemmy.world · 2 years ago

    Git projects and system configs are on GitHub (see etckeeper); the rest is synced to my self-hosted Nextcloud instance using their desktop client. There I have periodic backups using Borg for both the files and the Nextcloud database.

  • okda@lemmy.ml · 2 years ago

    Check out Pika Backup. It’s a beautiful frontend for Borg. And Borg is the shit.

  • akash_rawal@lemmy.world · 2 years ago

    I use an rsync+btrfs snapshot solution.

    1. Use rsync to incrementally collect all data into a btrfs subvolume
    2. Deduplicate using duperemove
    3. Create a read-only snapshot of the subvolume

    I don’t have a backup server, just an external drive that I only connect during backup.

    Deduplication is mediocre; I am still looking for a snapshot-aware duperemove replacement.
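The three numbered steps above can be sketched as commands (mount points and paths are placeholders; the external drive is assumed to be btrfs-formatted with a subvolume at `$VOL`):

```shell
#!/bin/sh
# Sketch of the rsync + duperemove + btrfs snapshot workflow.
SRC="/home/user/"
VOL="/mnt/ext/backup"        # btrfs subvolume receiving the data

# 1. Incrementally mirror the data into the subvolume.
rsync -a --delete "$SRC" "$VOL/"

# 2. Deduplicate identical extents (-r recurse, -d actually dedupe);
#    a hashfile avoids rehashing unchanged files on later runs.
duperemove -rdh --hashfile=/mnt/ext/dedup.hash "$VOL"

# 3. Keep a read-only snapshot as the actual backup point.
btrfs subvolume snapshot -r "$VOL" "/mnt/ext/snap-$(date +%Y-%m-%d)"
```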

    • Jo Miran@lemmy.ml · 2 years ago

      I’m not trying to start a flame war, but I’m genuinely curious: why do people like btrfs over ZFS? Btrfs seems very much “not ready for prime time.”

      • Rockslide0482@discuss.tchncs.de · 2 years ago

        I’ve only ever run ZFS on a Proxmox server system, but doesn’t it require a not-insignificant amount of resources to run? Btrfs is not flawless, but it does have a pretty good feature set.

      • akash_rawal@lemmy.world · 2 years ago

        The features necessary for most btrfs use cases are all stable, and btrfs is readily available in the Linux kernel, whereas for ZFS you need an additional kernel module. The availability advantage of btrfs is a big plus in case of a disaster, i.e. no additional work is required to recover your files.

        (All of the above only applies if your primary OS is Linux; if you use Solaris, then ZFS might be better.)

  • HarriPotero@lemmy.world · 2 years ago (edited)

    I rotate between a few computers. Everything is synced between them with Syncthing, and they all have automatic btrfs snapshots, so I have several physical points to roll back from.

    For a worst-case scenario, everything is also synced offsite weekly to a pCloud share. I have a little script that mounts it with pcloudfs and encfs and then rsyncs any updates.
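A script like that might look roughly as follows. This is a guess at the shape, not the poster’s actual script: the pCloud mount step depends on which FUSE client is used (assumed already mounted here), and all paths are invented for illustration. encfs itself presents an encrypted directory as a normal plaintext mount, so only ciphertext ever reaches the cloud:

```shell
#!/bin/sh
# Sketch: encrypted offsite sync onto an already-mounted pCloud share.
CLOUD="$HOME/pCloudDrive"     # ciphertext lives here (FUSE mount, assumed)
PLAIN="/tmp/backup-plain"     # temporary decrypted view

mkdir -p "$PLAIN"
# Mount the decrypted view; encfs prompts for the password interactively
# (or use --extpass to fetch it from a keyring).
encfs "$CLOUD/encrypted" "$PLAIN"

# Push only changed files into the encrypted view, then unmount.
rsync -a --delete "$HOME/important/" "$PLAIN/"
fusermount -u "$PLAIN"
```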