

You didn’t look very hard.
Cheap zigbee stuff exists everywhere. And zigbee is an open standard, so if it works, it will work until the equipment breaks.


The Linux kernel isn’t really much different between any distribution of Linux.
If it works on one, it works on the rest, in like 99% of cases.
The only real exception to that is custom distributions built specifically for a particular device or subset of devices.
In other words, for embedded devices, like phones, routers, TVs and such.
And those aren’t going to be running Ubuntu.


In my own experience, certain things should always be on their own dedicated machines.
My primary router/firewall is on bare metal for this very reason.
I do not want to worry about my home network being completely unusable by the rest of my family because I decided to tweak something on the server.
I could quite easily run OpnSense in a VM, and I do that, too. I run Proxmox and have OpnSense installed and configured to at least provide connectivity for most devices. (Long story short: I have several subnets in my home network, but my VM OpnSense setup does not, as I only had one extra interface on that equipment, so only devices on the primary network would work.)
And tbh, that setup only exists because I did have a router die and installed OpnSense into my Proxmox server temporarily while awaiting new-to-me equipment.
I didn’t see a point in removing it, so it’s there, just not automatically started.


My reasons for keeping OpnSense on bare metal mirror yours. But additionally, I don’t want my network to take a crap because my Proxmox box goes down.
I’m constantly tweaking that machine…
Kensington TB550 thumb trackball for any pointy/clicky stuff, keyboard otherwise.
I don’t like a real mouse, and haven’t for decades.
I got my first thumb trackball in the 1990s when I didn’t have a big enough desk to use a mouse. It was a Logitech Trackman Marble.
Then I got a Marble Wheel. Then the cordless one, then the M570 that replaced it.
But Logitech build quality has really gone into the shitter, and after a warranty replacement and then having to buy another replacement, I tried a couple of different thumb trackballs before settling on my Kensington one.
The ProtoArc EM01 I have is also nice, but I like the feel of the Kensington better.


Obfuscation.
Nothing is truly unhackable. The difficulty lies in being unable to undo or retry any failed attempts, because you don’t have an easy way to read or write to the hardware once you’ve done it wrong.
Which means that if your attempt fails, you’re probably just throwing the device away, since you can’t fix it without that access.
Total lack of stability, not in “crashing programs” but in the entire idea of “throw it all out and start over” that seems to 100% infest every single Linux developer every few years.
Not to mention the total loss of every single bit of UNIX philosophy over the years.
“Everything’s a file”? Not according to Linux, not any more.
All the various *ctls necessary to run and inspect your system have completely gotten out of control.


When people talk about CPU limitations on the rPi, they aren’t talking about just the actual processing portion of the machine. There are a lot of other corners cut on basically all SBCs, including bus width and throughput.
The problem is that when you use a software RAID, like ZFS or its predecessors, you are using far more than the CPU. You’re also using the data bus between the CPU and the IO controller.
“CPU usage” indicators don’t tell you how busy your data buses are, only how busy your CPU is processing data.
Basically, it’s the difference between IO wait states and CPU usage.
The Pi is absolutely a poor choice for input/output, period. Regardless of what your “metrics” tell you, its data bus simply does not have the bandwidth necessary to drive several hard drives at once with any sort of real-world usability.
You’ve wasted your money on an entire ecosystem by trying to make it do something it was never designed for and isn’t capable of doing.
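If you want to see that split for yourself, iostat (usually from the sysstat package) separates CPU time from time spent waiting on I/O; a minimal check looks something like this:

```
# refresh extended device stats every 2 seconds
iostat -x 2
# avg-cpu %iowait = CPU sitting idle waiting on the disks/bus
# per-device %util = how saturated each drive actually is
```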


If you can’t filet fish, then fill a chair and let me teach you.


Get an Oracle Cloud free-tier VPS. Create a VPN connection from your server to it.
Set up port forwarding from the VPS to your server. Connect to your server using your VPS’s IPv4 address.
Done.
Works better than a proxy, for sure.
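A rough sketch of the forwarding half, assuming a WireGuard tunnel where the VPS sees the home server as 10.0.0.2 and its public interface is ens3 (addresses, interface names, and the port are just examples):

```
# on the VPS: allow forwarding and push inbound TCP 443 down the tunnel
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2:443
iptables -t nat -A POSTROUTING -o wg0 -p tcp -d 10.0.0.2 --dport 443 -j MASQUERADE
iptables -A FORWARD -i ens3 -o wg0 -p tcp -d 10.0.0.2 --dport 443 -j ACCEPT
```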


zfs list shows datasets.
zpool list shows pools.
mirror is a pool layout (strictly speaking, a type of vdev inside a pool).
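For example, on a hypothetical pool named tank:

```
# pools and their layout (a mirror shows up as a vdev under the pool here)
zpool list
zpool status tank

# datasets (filesystems, volumes, snapshots) carved out of the pool
zfs list -r tank
```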


You sure that’s what is happening, and not just a different snapshot/dataset being mounted “on top”?
I’ve seen it happen, which is why I ask. Assume the root dataset is named pool0 and has set0, set1, and set1/set2 as child datasets.
Their mount points are as follows:
/pool0/set0
/pool0/set1
/pool0/set1/set2
Now, if somehow, say, set2 gets unmounted temporarily, and you save files to /pool0/set1/set2 while the dataset is not mounted, it’ll actually put those files in the set1 dataset, under the set2 directory.
But when you mount the pool0/set1/set2 dataset again, the files under the set1 dataset are hidden by the set2 child.
Am I explaining it well enough for you to follow along?
Make sure you don’t have some similar situation by temporarily unmounting any nested datasets and ls’ing their mount points.
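A quick way to rule it out, using the example names above:

```
# unmount the child; anything still listed here actually lives in the parent (set1) dataset
zfs unmount pool0/set1/set2
ls -la /pool0/set1/set2

# put it back when you're done looking
zfs mount pool0/set1/set2
```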


That I’m not sure of. I know MASS uses snapcast internally and can stream to LMS/squeezelite players.
I also wouldn’t call it 100% flawless, but it works well enough for me.


My favorite use case for MASS: my desktop PC in our bedroom has a decent 5.1 sound system and is running squeezelite.
I have an alarm automation that starts playing from the random 500 playlist.
When my phone connects to my car’s Bluetooth, it transfers the queue to my phone, which is running snapcast on a VPN.
When my phone disconnects from the car’s Bluetooth and I am at home, it transfers any queue from my phone back to my desktop.
If I’m not at home, it just stops the music instead; otherwise it would start playing through the phone speakers.
While at work, I’ll use Ultrasonic rather than Music Assistant, because my data connection inside my work area is sporadic, and not conducive to a good musical experience with how MASS streams.


Damn, and I thought my 30k-plus tracks was pretty large. I use Navidrome as a server, and slskd as well.


Hello fellow hass/mass user. Also, what sort of low voltage?
Fire/security alarms? Or access control? (Or both)?
I do both, as well as CCTV.


Real world usage tells me all I need to know.


SMB isn’t really all that slow these days.
I have NFS and SMB shares set up (same directory) and copying files to/from them maxes out my gigabit LAN.
SSH on the other hand is slower, because there’s more CPU overhead.
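If you want to sanity-check that on your own network, a crude write test against a mounted share (the path is just an example) shows whether you’re hitting the roughly 110 MB/s that gigabit tops out at in practice:

```
# write 4 GiB to the share, flushing at the end so the number isn't just cache
dd if=/dev/zero of=/mnt/share/ddtest bs=1M count=4096 conv=fdatasync status=progress
rm /mnt/share/ddtest
```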


Another option is virtualization.
Separate your sharing server from your remote-access server, then have them each connect to a different VPN.
This is how I do it.
I have several containers and a couple of VMs running on my one server. My router is also currently virtualized (OpnSense running in a Proxmox VM, works great) and it connects to tailscale and uses subnet routing to allow my LAN devices to be accessed from outside my home without putting them directly on the Internet.
Meanwhile, I have two more VPNs set up for two different sharing servers. All on one physical machine.
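The subnet-routing part is essentially one flag on whatever node sits on the LAN, plus approving the route in the Tailscale admin console; 192.168.1.0/24 here is just an example range:

```
# advertise the LAN behind this node to the rest of the tailnet
tailscale up --advertise-routes=192.168.1.0/24
```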
And that’s the fault of whoever uses those hubs. You can use practically any zigbee hub you wish. Zigbee is zigbee.