I recently replaced an ancient laptop with a slightly less ancient one.
- host for backups for three other machines
- serve files I don’t necessarily need on the new machine
- relatively lightweight - “server” is ~15 years old
- relatively simple - I’d rather not manage a dozen docker containers.
- internal-facing
- does NOT need to handle Android and friends. I can use Syncthing for that if I need to.
Left to my own devices I’d probably rsync for 90% of that, but I’d like to try something a little more pointy-clicky or at least transparent in my dotage.
Edit: Not SAMBA (I freaking hate trying to make that work)
Edit2: for the young’uns: NFS (the venerable “Network File System”)
Edit 3: LAN only. I may set up a VPN connection one day but it’s not currently a priority. (edited post to reflect questions)
Last Edit: thanks, friends, for this discussion! I think based on this I’ll at least start with NFS + my existing backup system (Mint’s thing, which I think is just a GUI in front of rsync). May play w/ modern SAMBA if I have extra time.
I’ll continue to read the replies though - some interesting ideas.
NFS is the best option if you only need to access the shared drives over your LAN. If you want to mount them over the internet, there’s SSHFS.
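For anyone who hasn’t tried SSHFS: a minimal mount is a one-liner, and it reuses whatever SSH keys you already have. A sketch with made-up host and paths:

```shell
# Mount a remote directory over SSH (hypothetical host/paths).
# reconnect: re-establish the connection if it drops
# idmap=user: map the remote user's uid/gid to the local user
mkdir -p ~/mnt/server
sshfs user@server.lan:/srv/files ~/mnt/server -o reconnect,idmap=user

# Unmount when done:
fusermount -u ~/mnt/server
```

No server-side setup needed beyond a running sshd, which is a big part of the appeal.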
See, this is interesting. I’m out here looking for the new shiny easy button, but what I’m hearing is “the old config-file based thing works really well. ain’t broken, etc.”
I may give that a swing and see.
I’m at the same age - just to mention, samba is nowhere near the horror show it used to be. That said, I use NFS for my Debian boxes and mac mini build box to hit my NAS, samba for the windows laptop.
Yeah, Samba has come a long way. I run a Linux based server but all clients are Windows or Android so it just makes sense to run SMB shares instead of NFS.
you and perhaps @curbstickle@lemmy.dbzer0.com, may I ask if you use samba with portable devices, like laptops?
I do, and my experience is that programs that try to access it when I don’t have network access tend to freeze - including my desktop environment, but also any file manager if I click the wrong place by accident. and it occurs often enough without user action too.
oh and it breaks all machines at once if the server or network is down. which is rare but very annoying. did you experience this too? do you have some advice? is SMB just unsuitable for this?
honestly I would prefer if the cifs driver would keep track of last successful communication, and if it was long ago instantly fail all accesses. without unmounting so that open directories and file handles keep being valid.
and if all software in this world wouldn’t behave as if it were doing IO on the main thread. honestly this went smoother with Windows clients but I’m not going back.

Only Windows devices (laptop I use for work stuff). The other laptops are Linux (NFS) or Chromebooks (for the kids, no access).
Devices don’t really leave often (aside from mine), so not much of an issue. If the NAS is offline, not much else would be going on either.
Honestly no, that’s not really my use case. My PC is running over a 2.5G cable. Funny enough, my wife and kid’s laptops rarely leave the house. I have experienced some wait time if the server is down while the PC looks for it, but nothing so drastic as locking things up. That particular window will just be spinning for a bit trying to find the server over the network.
I’ve run Proxmox hosts with smb shares for literally a decade without issue. Performance is line speed now. Only issues I’ve ever had were operator error and that was a long time ago. SMB 3 works great.
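If hangs when the server disappears are the concern (as raised above), the cifs driver does have knobs close to what was described. A hedged /etc/fstab sketch - server name, share, and paths are all made up:

```
# /etc/fstab - hypothetical server/share/paths
# soft: fail I/O with an error instead of retrying forever when the server is gone
# echo_interval=10: probe server liveness every 10s so dead connections are noticed sooner
# noauto + x-systemd.automount: mount on first access instead of blocking boot
//server.lan/share  /mnt/share  cifs  credentials=/etc/cifs-creds,soft,echo_interval=10,_netdev,noauto,x-systemd.automount  0  0
```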
What about NFS over the internet?
You can use NFS over the internet, but it will be a lot more work to secure it. It was intended for use over a LAN and performance may not be great over the internet, especially with high latency or packet loss.
I would just create a point-to-point VPN connection and run it over that (for example an IPsec tunnel using strongSwan)
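WireGuard is a lighter-weight alternative to IPsec for a point-to-point tunnel like this. A minimal client config sketch - keys, addresses, and endpoint are all placeholders:

```
# /etc/wireguard/wg0.conf on the NFS client (all values hypothetical)
[Interface]
PrivateKey = <client-private-key>
Address = 10.10.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.10.0.0/24   # route only the tunnel subnet through the VPN
PersistentKeepalive = 25    # keep NAT mappings alive
```

Bring it up with `wg-quick up wg0`, then export the NFS share only to the tunnel subnet.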
I agree, NFS is eazy peazy, livin greazy.
I have an old ds211j synology for backup. I just can’t bring myself to replace it, it still works. However, it doesn’t support zfs. I wish I could get another Linux running on this thing.
However, NFS does work on it and is so simple and easy to lock down, it works in a ton of corner cases like mine.
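“Easy to lock down” in practice is a one-line export per allowed subnet. A sketch with hypothetical paths:

```
# /etc/exports on the server - restrict the share to the LAN subnet
/export/backups  192.168.1.0/24(rw,sync,no_subtree_check,root_squash)
```

Reload with `exportfs -ra` after editing.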
I use SSHFS exclusively, including on my LAN - is there some downside to it?
SSHFS is slower than NFS due to the encryption and FUSE overhead. It’s not a huge difference with a modern CPU and a 1 Gbps connection, but it can be significant with an older CPU or a faster network.
I think a reasonable quorum already said this, but NFS is still good. My only complaint is it isn’t quite as user-mountable as some other systems.
So…I know you said no SAMBA, but SAMBA 4 really isn’t bad any more. At least, not nearly as shit as it was.
If you want an easily mountable filesystem for users (e.g. network discovery/etc.) it’s pretty tolerable.
NFS is still the standard. We’re slowly seeing better adoption of VFS for things like hypervisors.
Otherwise something like SFTPgo or Copyparty if you want a solution that supports pretty much every protocol.
For smaller folders I like using syncthing, that way it’s like having multiple updated backups
Syncthing is neat, but you shouldn’t consider it to be a backup solution. If you accidentally delete or modify a file on one machine, it’ll happily propagate that change to all other machines.
I like this solution because I can have the need filled without a central server. I use old-fashioned offline backups for my low-churn bulk data, and Syncthing for everything else to be eventually consistent everywhere.
If my data was big enough so as to require dedicated storage though, I’d probably go with TrueNAS.
I’d use an S3 bucket with s3fs. Since you want to host it yourself, MinIO is the open-source tool to use instead of S3.
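A sketch of that setup, assuming a MinIO server is already running on the LAN - the bucket name, endpoint, and credentials are all made up:

```shell
# Credentials file for s3fs (format is ACCESS_KEY:SECRET_KEY); must be 0600
echo 'minioadmin:minioadmin' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the bucket; use_path_request_style is needed for MinIO-style endpoints
mkdir -p ~/mnt/bucket
s3fs mybucket ~/mnt/bucket \
  -o url=http://nas.lan:9000 \
  -o use_path_request_style \
  -o passwd_file=~/.passwd-s3fs
```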
I hear good things about SeaweedFS instead of MinIO these days
Oh, and if you want to use it as the backing store for a database consider obstore instead of s3fs: https://developmentseed.org/blog/2025-08-01-obstore/
NFS is pretty good
NFS is still useful. We use it in production systems now. If it ain’t broke, don’t fix it.
And if you have a dedicated system for this, I’d look into TrueNAS Scale.
TrueNAS Scale works well as long as you don’t want any Docker containers on it. Once you want to run Docker images, it’s easier to install a VM on TrueNAS and run Docker from there than it is to try to set up custom “Apps”
Wut? I’ve got a bunch of dockerhub images running on a scale box
It is doable, but it’s a pain if the container needs any special config like persistent storage. Getting nginx up and running for mTLS was especially annoying
I still use sshfs. I can’t be bothered to set up anything else I just want something that works out of the box.
I like the sound of that!
However, it looks like it has a lot of potential for an ‘xz’-style exploit injection, so I’ll probably skip it.
From the project’s README.md : The current maintainer continues to apply pull requests and makes regular releases, but unfortunately has no capacity to do any development beyond addressing high-impact issues. When reporting bugs, please understand that unless you are including a pull request or are reporting a critical issue, you will probably not get a response.
I am 100% open to exploring other equally zero-effort alternatives, if only I had the time. CURSE being an adult (ノಠ益ಠ)ノ. Is there anything better I should use, hopefully something that works with my existing SSH keys, please?
Isn’t that super clunky? I keep getting all kinds of sluggishness, hangs, and the occasional error every time I use it. It ends up working, but wow, does it suck.
I mostly use Samba/CIFS clients and it’s fast and reliable with properly set up DNS, addressing the server only by DNS name or IP address. NetBIOS and Active Directory are overkill.
NFS is really good inside a LAN, just use 4.x (preferably 4.2) which is quite a bit better than 2.x/3.x. It makes file sharing super easy, does good caching and efficient sync. I use it for almost all of my Docker and Kubernetes clusters to allow files to be hosted on a NAS and sync the files among the cluster. NFS is great at keeping servers on a LAN or tight WAN in sync in near real time.
What it isn’t is a backup system or a periodic sync application and it’s often when people try to use it that way that they get frustrated. It isn’t going to be as efficient in the cloud if the servers are widely spaced across the internet. Sync things to a central location like a NAS with NFS and then backups or syncs across wider WANs and the internet should be done with other tech that is better with periodic, larger, slower transactions for applications that can tolerate being out of sync for short periods.
The only real problem I often see in the real world is Windows and Samba (sometimes referred to as CIFS) shares trying to sync the same files as NFS shares, because Windows doesn’t support NFS out of the box and so file locking doesn’t work properly. Samba/CIFS has some advantages, like user authentication tied to Active Directory out of the box, as well as working out of the box on Windows (although older Windows doesn’t support versions of Samba that are secure), so if I need to give a user access to log into a share from within a LAN (or over VPN) from any device to manually pull files, I use that instead. But for my own machines I just set up NFS clients to sync.
One caveat is if you’re using this for workstations or other devices that frequently reboot and/or need to be used offline from the LAN. Either don’t mount the shares on boot, or take the time to set it up properly. By default I see a lot of people get frustrated that it takes a long time to boot because the mount is set as a prerequisite for completing the boot with the way some guides tell you to set it up. It’s not an NFS issue; it’s more of a grub and systemd (or most equivalents) being a pain to configure properly and boot systems making the default assumption that a mount that’s configured on boot is necessary for the boot to complete.
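Concretely, the “set it up properly” part usually comes down to mount options. A client-side /etc/fstab sketch - server name and paths are hypothetical:

```
# /etc/fstab on a laptop/workstation client
# noauto + x-systemd.automount: don't mount at boot, mount on first access instead
# x-systemd.idle-timeout=600: unmount after 10 min idle, so a dead server can't hang shutdown
# _netdev: tell systemd this mount needs the network
nas.lan:/export/media  /mnt/media  nfs4  noauto,x-systemd.automount,x-systemd.idle-timeout=600,_netdev  0  0
```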
Thanks for that caveat. I could definitely see myself falling into that
Yeah, it’s easy enough to configure it properly, I have it set up on all of my servers and my laptop to treat it as a network mount, not a local one, and to try to connect on boot, but not require it. But it took me a while to understand what it was doing to even look for a solution. So, hopefully that saves you time. 🙂
If it’s for backup, zfs and btrfs can send incremental diffs quite efficiently (but of course you’ll have to use those on both ends).
Otherwise, both NFS and SMB are certainly viable.
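A sketch of the ZFS side, with made-up pool, dataset, and host names:

```shell
# First, a full send of the initial snapshot to the backup host:
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backup-host zfs receive backup/data

# Later, send only the delta between the two snapshots:
zfs snapshot tank/data@today
zfs send -i tank/data@base tank/data@today | ssh backup-host zfs receive backup/data
```

btrfs has the equivalent `btrfs send -p <parent>` / `btrfs receive` pair.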
I tried both, but TBH I ended up just using SSHFS because I don’t care about becoming an NFS/SMB admin.
NFS and SMB are easy enough to setup, but then when you try to do user-level authentication… they aren’t as easy anymore.
Since I’m already managing SSH keys all over my machines, I feel like SSHFS makes much more sense for me.
The fact that you say using NFS makes you old makes me feel like fucking Yoda
I can’t decide if I’m happy or disappointed that no one suggested I make a Beowulf cluster.
haha that really brings me back.
Everyone forgets about WebDAV.
It’s a little jank, but it does work on Windows. If you copy a file in, it doesn’t show up in the file manager until you refresh. But it works.
It’s also multithreaded, which isn’t the case for SMB. This is especially good if you host it on SSDs.
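On the Linux side, davfs2 makes a WebDAV share mountable like any other filesystem. A sketch with a made-up URL:

```shell
# Install davfs2 first (package name varies by distro), then:
sudo mkdir -p /mnt/dav
sudo mount -t davfs https://server.lan/dav /mnt/dav
# Credentials can go in /etc/davfs2/secrets instead of being typed each time.
```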
For a Linux-only, LAN-only shared drive, NFS is probably the easiest you’ll get - it’s made for that use case.
If you want more of a Dropbox/OneDrive/Google Drive experience, Syncthing is really cool too, but that’s a whole other architecture where you have an actual copy on all machines.
I use NFS for linking VMs and Docker containers to my file server. Haven’t tried it for desktop usage, but I imagine it would work similarly.
TrueNAS is cool. I’ve only used Core so far, but I hear Scale is taking over
this looks promising. Seems a little heavy-weight at first glance… How was it to get up and running?
the GUI makes it pretty painless. it was my first real attempt at self-hosting anything, my first experience with any kind of NFS/SMB setup at all. i was running it on bare metal for around 2 years before installing it as a VM on Proxmox.