I recognize this will vary depending on how much you self-host, so I’m curious about the range of experiences, from people hosting just a few things to those running many.

Also, how would you compare it to the maintenance of your other systems (e.g. personal computer, phone, etc.)?

  • henfredemars@infosec.pub · 2 years ago

    Huge amounts of daily maintenance because I lack self control and keep changing things that were previously working.

  • CarbonatedPastaSauce@lemmy.world · 2 years ago

    It’s bursty; I tend to do a lot of work on things when I do a hardware upgrade, but otherwise it’s mostly set-and-forget. The only servers that get frequent maintenance and security checks are the MTAs in the DMZ for my email. Nothing else is exposed to the internet for inbound traffic except a game server VM that’s segregated (credential-wise and network-wise) from everything else, so if it were compromised it would pose very little danger to the rest of my network. Everything either has automated updates, or, for the servers I want more control over, I update manually when the mood strikes or when a big vulnerability affecting my software hits the news.

    TL;DR Averaged over a year, I maybe spend 30–60 minutes a week on self-hosting maintenance tasks for 4 physical servers and about 20 VMs.
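
    For reference, the “automated updates” part on a Debian or Ubuntu box is often nothing more than unattended-upgrades. A minimal sketch, assuming a Debian-family distro:

    ```bash
    # Minimal sketch: enable automatic security updates on Debian/Ubuntu.
    sudo apt install unattended-upgrades
    # Writes the APT::Periodic settings that turn on daily package-list
    # refreshes and unattended upgrade runs:
    sudo dpkg-reconfigure -plow unattended-upgrades
    ```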

  • 0110010001100010@lemmy.world · 2 years ago

    Typically, very little. I have ~40 containers in my Docker stack and by and large it just works. I upgrade things here and there as needed. I’m getting ready to do a hardware refresh, but again, with Docker that’s pretty painless.

    Most of the time spent in my lab is trying out new things. I’ll find a new something that looks cool and go down the rabbit hole with it for a while. Then back to the status quo.

  • Opisek@lemmy.world · 2 years ago

    As others said, the initial setup may consume some time, but once it’s running, it just works. I dockerize almost everything and have automatic backups set up.

  • dlundh@lemmy.world · 2 years ago

    A lot less since I started using Docker instead of running separate VMs for everything. Fewer systems to update is bliss.

  • mikyopii@programming.dev · 2 years ago

    For some reason my DNS tends to break the most. I have to reinstall my Pi-hole semi-regularly.

    NixOS plus Docker is my preferred setup for hosting applications. Sometimes it’s a pain to get something running, but once it does, it tends to keep running. If a container doesn’t work, restart it. If the OS doesn’t work, roll it back.
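
    That rollback is built into NixOS, which keeps previous system generations around. A minimal sketch of what recovery looks like:

    ```bash
    # Revert to the previous NixOS system generation after a bad update:
    sudo nixos-rebuild switch --rollback
    # Or select an older generation from the boot menu at startup.
    ```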

  • smileyhead@discuss.tchncs.de · 2 years ago

    I spend a huge amount of time configuring and setting things up, as it’s my biggest hobby. But I’ve gotten good enough that when I set something up, it can stay up for months without any maintenance. Most of what I do to keep things running is adding more storage when something turns out to be used more than planned.

  • Crogdor@lemmy.world · 2 years ago

    Mostly nothing, except for Home Assistant, which seems to shit the bed every few months. My other services are Docker containers or Proxmox LXCs that just work.

  • Encrypt-Keeper@lemmy.world · 2 years ago

    If you’re not publicly exposing things? I can go months without touching it. Then go through and update everything in an hour or so on the weekend.

      • Encrypt-Keeper@lemmy.world · 2 years ago

        Generally, no. Most of the time the updates work without a hitch, with the exception of Nextcloud, which always breaks during an upgrade.

  • hperrin@lemmy.world · 2 years ago

    If you set it up really well, you’ll probably only need to invest maybe an hour or so every week or two. But it also depends on what kind of maintenance you mean. I spend a lot of time downloading things and putting them in the right place so that my TV is properly entertaining. Is that maintenance? As for updating things, I’ve set up most of that to be automatic. The stuff that’s not automatic, like pulling new docker images, I do every couple weeks. Sometimes that involves running update scripts or changing configs. Usually it’s just a couple commands.
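
    For a typical Compose-managed stack, that “couple commands” is usually something along these lines (illustrative; the exact flow depends on your compose setup):

    ```bash
    # Fetch newer images for every service in the compose file,
    # then recreate only the containers whose images changed:
    docker compose pull
    docker compose up -d
    # Optionally reclaim disk space from the superseded images:
    docker image prune -f
    ```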

    • ALostInquirer@lemm.ee (OP) · 2 years ago

      Yeah, to clarify, I don’t mean organizing/arranging files as part of maintenance, but rather handling installs, configs, and updates. Since it’s mostly the folks running into problems who come around asking for help, it can appear as if it’s all much more involved to maintain than it really is (given the right setup and the knowledge to deal with any hiccups).

  • N-E-N@lemmy.ca · 2 years ago

    As a complete noob trying to build a TrueNAS server: none, and then suddenly lots, whenever something breaks that I don’t know how to fix.

  • Lem453@lemmy.ca · 2 years ago

    Maybe 1 hr every month or two to update things.

    Things like my OPNsense router are best updated when no one else is using the network.

    The Docker containers I like to update manually after checking the release notes. It doesn’t take long, and I often find out about cool new features while perusing them.

    Projects will sometimes have major updates that break things and I strongly prefer having everything super stable until I have time to sit down and update.

    11 stacks, 30+ containers. Borg backups run automatically to various repositories. ZFS auto-snapshot also runs automatically to create rapid backups.
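
    For context, “runs automatically” with Borg is typically just a scheduled borg create. A hedged sketch (the repo URL and source path are placeholders; real setups also add pruning and passphrase handling):

    ```bash
    # Hypothetical crontab entry: nightly Borg archive of application data.
    # ssh://backup-host/~/repo and /srv/appdata are illustrative placeholders.
    0 3 * * * borg create ssh://backup-host/~/repo::'{hostname}-{now}' /srv/appdata
    ```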

    I use Unraid as a NAS and Proxmox for Docker containers and VMs.

  • thirdBreakfast@lemmy.world · 2 years ago

    I run two local physical servers, one production and one dev (plus a third, prod2, kept in case of a prod1 failure), two remote production/backup servers, all running Proxmox, and two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or run directly under Docker on Ubuntu (VPSs). Each of the three locations also runs a Synology NAS in addition to the server.

    Backups run automatically, and I manually run apt updates on everything each weekend with a single ansible playbook. Every host runs a little golang program that exposes the memory and disk use percent as a JSON endpoint, and I use two instances of Uptime Kuma (one local, and one on fly.io) to monitor all of those with keywords.
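
    The playbook itself isn’t shown here, but an ad-hoc equivalent of that weekly apt run might look like this (the “all” group and inventory file name are made up):

    ```bash
    # Hedged example: update and dist-upgrade every host in one pass.
    ansible all -i inventory.ini --become \
      -m ansible.builtin.apt -a "update_cache=yes upgrade=dist"
    ```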

    So -

    • Weekly: 10 minutes to run the update playbook, and I usually ssh into the VPSs, have a look at the Fail2Ban stats, and reboot them if needed. I also look at each of the Proxmox GUIs to check that the backups have been working as expected.
    • Monthly: stop the local prod machine and switch to the prod2 machine (from backups) for a few days. Probably 30 minutes each way, most of it waiting for backups.
    • From time to time (if I hear of a security update), but generally every three months: look through my container versions and see if I want to update them. They’re on Docker Compose, so the steps are just: back up the LXC, then docker compose down, pull, and up. Probably 5 minutes per container.
    • Yearly: consider whether I need to upgrade operating systems, e.g. to Proxmox 8 or a new Debian or Ubuntu LTS.
    • Yearly: visit the remote sites for a proper check, cleanup, and updates.
    • Lem453@lemmy.ca · 2 years ago

      I moved from Nextcloud to Seafile. The file sync is so much better than Nextcloud and ownCloud.

      It has a normal Windows client and also a mount-type client (SeaDrive), which is amazing for large libraries.

      I have mine set up with OAuth via Authentik and it works super well.