• 0 Posts
  • 228 Comments
Joined 3 years ago
Cake day: June 7th, 2023

  • I must have gotten one after the enshittification. I bought a HiSense TV during the pandemic and the unit I got was trouble from nearly day 1. A line of pixels went dead all the way across the screen. I tried to work with their warranty department and they asked for a picture of the problem.

    Ok, easy enough. Take the picture and send it. They reply, “can you take a picture with better lighting of the bezel?” Ok, no problem. Get better lighting, snap a picture, send it off. They reply, “can you get better lighting on the bezel?” Seriously? Fine, get the TV under really good lighting, take a picture, send it. “Can you get better lighting on the bezel?” WTF? Ok, I’ll admit I don’t have 50,000 candlepower spotlights on it, but this is just obvious stalling. Each round of pictures and requests for more is taking weeks.

    During this time, the TV OS saw several updates and the underpowered nature of the system started to show. The menus weren’t just sluggish, they were downright unusable. The home screen was now half ads. I finally decided, “fuck it,” took the TV to the dump and bought something else.

    Thankfully, the TV was only around $500. Not cheap, but the cost of the education in not buying crap didn’t hurt too much.

    tl;dr: Fuck HiSense



  • Let’s ignore the pedantic issues of “there is no surface”, “there is no sun to rise” or “you’d be dead so insanely fast you probably wouldn’t notice”. Assuming you were magically teleported and held protected just above the event horizon of a black hole, it would be so bright you’d go blind almost instantly. Not because of any star coming over the horizon, but because the accretion disk would just be that bright.

    If you look at NASA’s pictures of M87, you aren’t actually seeing the black hole. There’s nothing there to see. Instead, what you are seeing in the pictures is the accretion disk around the black hole. As matter gets closer to the event horizon, it accelerates and all that stuff starts bumping into each other. At the energies involved, this produces electromagnetic radiation of basically every energy. There is infrared right up through x-ray, including lots and lots of visible light. And this is happening on a scale which is so mind-bogglingly big that words really just fail to capture it. Here is an artistic representation with our solar system for scale. Pluto’s orbit would be well inside the event horizon.

    There is an insane amount of light and energy in that accretion disk. And thanks to the black hole warping light around itself, you would be getting bombarded by its energy from every angle, including from the disk on the opposite side of the black hole. In short, it would be really bright.


  • If we’re aiming more towards realism, there are many reasons no modern military fields anything which looks like a mech. Not the least of which is that tall, thin objects stick out on a battlefield and become targets. If you want an armored vehicle with a big gun, you build it low to the ground and end up with a tank. Survivability usually boils down to two factors:

    1. Lower observability
    2. More armor/defense

    You don’t die if you don’t get shot, and if you do get shot at you really, really want to prevent whatever hit you from penetrating in and killing the crew and/or disabling the vehicle.

    Mechs, with their spindly legs, end up high above the ground, and those legs become obvious targets given the complexity of making a leg work. You’d want to reduce the height, meaning shorter legs. Then you’d want to avoid something as horridly complex as an actuating knee or hip. So, let’s just use tracks or wheels instead. As for the top, why arms? Again, too much complexity; a single rotating turret would be simpler and easier to shield. That head thing can be reduced to a sensor mast, and we’ll just make the sensors omnidirectional to avoid the whole “make it spin” complexity. And um, we just built a tank. Sure, there is some advantage to walking vehicles, and they might make sense on a small scale or in support roles where they are much less likely to come under fire. But for a front-line armored vehicle, I’d buy tanks.

    At the same time, mechs look cool.


  • You could try using Autopsy to look for files on the drive. Autopsy is a forensic analysis toolkit, which is normally used to extract evidence from disk images or the like. But, you can add local drives as data sources and that should let you browse the slack space of the filesystem for lost files. This video (not mine, just a good enough reference) should help you get started. It’s certainly not as simple as the photorec method, but it tends to be more comprehensive.


  • As @MelRose@lemmy.blahaj.zone pointed out, this seems to be a cover for c’t magazine. Specifically it seems to be for November 2004. heise.de used to have a site which let you browse those covers and you could pull any/all of them. But, that website seems to have died sometime in 2009. Thankfully, the internet remembers and you can find it all on archive.org right here. You may need to monkey about with capture dates to get any particular cover, but it looks like a lot of them are there.

    Also, as a bit of “teach a person to fish”, ImgOps is a great place to start a reverse image search. It can often get you from an image to useful information about that image (e.g. a source) pretty quickly. I usually use the TinEye reverse image search for questions like this.



  • I run Pi-Hole in a docker container on my server. I never saw the point in having a dedicated bit of hardware for it.
    That said, I don’t understand how people use the internet without one. The times I have had to travel for work, trying to do anything on the internet reminded me of the bad old days of the '90s with pop-ups and flashing banners enticing me to punch the monkey. It’s just sad to see one of the greatest communications platforms we have ever created reduced to a fire-hose of ads.


  • While this patch might stop some existing attacks, it’s not really a fix. First off, the type of people who might install a third party Windows patch are probably the exact same people who would be cautious about clicking on an LNK file embedded in a ZIP file. Second, even if this patch somehow became widespread, attackers would just shift their attacks to fit within the 260 character limit. Sure, the payload would now be visible in the properties, but people aren’t looking at the properties of LNK files.

    The problem is this “vulnerability” is essentially “as designed”. LNK files exist to allow both pointers to other files and a quick way to run complex commands. It’s like calling powershell.exe a vulnerability, because it can be used to get up to all sorts of malicious stuff. Both are powerful tools on Windows, but those tools can be abused.


  • First off, why does a beer company have personal data on customers? It seems like the best protection for this data would be, don’t have it in the first place. You sell beer, you don’t need to hoover up personal data on people to make and sell beer.

    “That reflects a wider truth that companies are investing more than ever in digital defences, yet adversaries continue to outpace them, exploiting weak links in supply chains or breaking in through trusted partners,” he (Shankar Haridas, head of UK and Ireland at ManageEngine) added.

    Ya, they are spending money, but failing at basic cyber hygiene (read: documentation, patching and network segmentation). But hey, Mr. ManageEngine here will be happy to sell us another product which just papers over the failure to get the basics done. And it will almost certainly have “Agentic AI” to do…something.

    The compromise seems to have started with network equipment at one site, impacting the OT environment and potentially expanding into IT systems

    I’d bet a lot of money the Asahi security team had been screaming about the OT environment being a big, juicy target for a long time. But, applying security controls in the OT environment is hard and scary and might cause a blip in production. So nope, all those shit-boxes running Windows XP must never be touched. Also, NDR is expensive and hard, so stop asking about it. But yes, those same shit-boxes really do need to be fully internet connected and logged on 24x7 as a local admin, with the same password everywhere, because identity management is hard.

    We seriously need to start dragging CTOs, CIOs and CEOs out into the street, tarring and feathering them when this shit happens. Also, the companies making the OT systems need to have their entire management put through a chipper shredder the first time one of them suggests that their systems just shouldn’t be patched. If your shit is so fragile that an OS patch might break something, chipper shredder goes BRRRR…

    Sorry, OT systems are a bit of a pain point.




  • What you are trying to do is called P2V, for Physical to Virtual. VMware used to have tools specifically for this. I haven’t used them in a decade or more, but they likely still work. That should let you spin up the virtual system in VMware Player (I’d test this before wiping the drive) and you can likely convert the resulting VM to other formats (e.g. VirtualBox). Again, test it out before wiping the drive; nothing sucks like discovering you lost data because you just had to rush things.



  • If the goal is stability, I would have likely started with an immutable OS. That provides certain assurances that the base OS is in a known good state.
    With that base, I’d tend towards:
    Flatpak > Container > AppImage

    My reasoning for this being:

    1. Installing software should not affect the base OS (nor can it with an immutable OS). Changes to the base OS and system libraries are a major source of instability and dependency hell. So, everything should be self contained.
    2. Installing one software package should not affect another software package. This is basically pushing software towards being immutable as well. The install of Software Package 1 should have no way to bork Software Package 2. Hence the need for isolating those packages as flatpaks, AppImages or containers.
    3. Software should be updated (even on Linux, install your fucking updates). This is why I have Flatpak at the top of the list: it has a built in mechanism for updating. Container images can be made to update reasonably automatically, but that has risks. By using something like docker-compose and tying services to the “:latest” tag, images would auto-update. However, it’s possible to have stacks where a breaking change is made in one service before another service is able to deal with it. So, I tend to pin things to specific versions and update those manually. Finally, while I really like AppImages, updating them is 100% manual.
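    The version-pinning approach from point 3 looks something like this in a docker-compose.yaml (the service and image names here are made-up placeholders):

    ```yaml
    services:
      app:
        # Pinned to an exact version; bump this deliberately instead of riding :latest
        image: ghcr.io/example/app:1.4.2
        restart: unless-stopped
    ```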

    This leaves the question of apt packages or doing installs via make. And the answer is: don’t do that. If there is not a flatpak, AppImage, or pre-made container, make your own container. Dockerfiles are really simple. Sure, they can get super complex and do some amazing stuff, but you don’t need that for a single software package. Make simple, reasonable choices and keep all the craziness of that software package walled off from everything else.
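    As a sketch of how little a Dockerfile needs (the binary and paths here are placeholders, not a real package):

    ```dockerfile
    # Minimal, hypothetical example: one self-contained app, nothing touching the host OS
    FROM debian:stable-slim

    # Copy in the single binary this container exists to run
    COPY some-app /usr/local/bin/some-app

    # Keep persistent data on a volume so the container itself stays disposable
    VOLUME /data

    ENTRYPOINT ["/usr/local/bin/some-app", "--data-dir", "/data"]
    ```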


  • It’s going to depend on what types of data you are looking to protect, how you have your wifi configured, what type of sites you are accessing and whom you are willing to trust.

    To start with, if you are accessing unencrypted websites (HTTP), at least part of the communications will be in the clear and open to inspection. You can mitigate this somewhat with a VPN. However, this means that you need to implicitly trust the VPN provider with a lot of data. Your communications to the VPN provider would be encrypted, though anyone observing your connection (e.g. your ISP) would be able to see that you are communicating with that VPN provider. And any communications from the VPN provider to/from the unencrypted website would also be in the clear and could be read by someone sniffing the VPN exit node’s traffic (e.g. the ISP used by the VPN exit node). Lastly, the VPN provider would have a very clear view of the traffic and be able to associate it with you.

    For encrypted websites (HTTPS), the data portion of the communications will usually be well encrypted and safe from spying (more on this in a sec). However, it may be possible for someone (e.g. your ISP) to snoop on what domains you are visiting. There are two common ways to do this.

    The first is via DNS requests. Any time you visit a website, your browser needs to translate the domain name to an IP address. This is what DNS does, and it is not encrypted by default. Also, unless you have taken steps to avoid it, it’s likely your ISP is providing DNS for you. This means that they can just log all your requests, giving them a good view of the domains you are visiting. You can use something like DNS over HTTPS (DoH), which does encrypt DNS requests and sends them to specific servers; but this usually requires extra setup, and it will work regardless of whether you are on your local WiFi or a 5G/4G network.

    The second way to track HTTPS connections is via Server Name Indication (SNI). In short, when you first connect to a web server, your browser needs to tell that server which domain it wants, so that the server can send back the correct TLS certificate. This is all unencrypted, and anyone in between (e.g. your ISP) can simply read the SNI field to know what domains you are connecting to. There are mitigations for this, specifically Encrypted SNI (ESNI, now evolving into Encrypted Client Hello), but that requires the web server to implement it, and it’s not widely used. This is also where a VPN can be useful, as the SNI field is encrypted between your system and the VPN exit node. Though again, it puts a lot of trust in the VPN provider, and the VPN provider’s ISP could still see the SNI field as it leaves the VPN network. Though associating it with you specifically might be hard.
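    To make the DNS point concrete, here’s a minimal sketch that hand-builds a standard DNS query packet (following the RFC 1035 message layout); the hostname sits right there in the raw bytes for anyone on the path to read:

    ```python
    import struct

    def build_dns_query(hostname):
        """Build a bare-bones DNS A-record query for hostname."""
        # Header: transaction ID, flags (standard query, recursion desired),
        # 1 question, 0 answer/authority/additional records
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        # QNAME: each label is length-prefixed, terminated by a zero byte
        qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
        # QTYPE=1 (A record), QCLASS=1 (IN)
        return header + qname + struct.pack(">HH", 1, 1)

    packet = build_dns_query("lemmy.ml")
    # The domain is plainly visible in the packet bytes -- no encryption involved.
    print(b"lemmy" in packet)  # prints True
    ```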

    As for the encrypted data of an HTTPS connection, it is generally safe. So, someone might know you are visiting lemmy.ml, but they wouldn’t be able to see what communities you are reading or what you are posting. That is, unless either your device or the server is compromised. This is why mobile device malware is a common attack vector for state-level threat actors. If they have malware on your device, then all the encryption in the world ain’t helping you. There are also some attacks around forcing your browser to use weaker encryption, or even the attacker compromising the server’s certificate. Though these are likely in the realm of targeted attacks and unlikely to be used on a mass scale.

    So ya, not exactly an ELI5 answer, as there isn’t a simple answer. To try and simplify, if you are visiting encrypted websites (HTTPS) and you don’t mind your mobile carrier knowing what domains you are visiting, and your device isn’t compromised, then mobile data is fine. If you would prefer your home ISP being the one tracking you, then use your home wifi. If you don’t like either of them tracking you, then you’ll need to pick a VPN provider you feel comfortable with knowing what sites you are visiting and use their software on your device. And if your device is compromised, well you’re fucked anyway and it doesn’t matter what network you are using.


  • No, a game should be what the devs decide to make. That said, it can cut off a part of the market. I’m another one of those folks who tends to avoid PvPvE games without a dedicated PvE-only side. This weekend’s Arc Raiders playtest was a good example. I read through the description on Steam and just decided, “na, I have better things to do with my time.” Unfortunately, those sorts of games tend to have a problem with griefers running about directly trying to ruin other peoples’ enjoyment. I’ll freely admit that I will never be as good as someone who is willing to put the hours into gear grinding, practice and map memorization in such a game. I just don’t enjoy that, and that means I will always be at a severe disadvantage. So, why spend my time and money on such a game?

    This can lead to problems for such games, unless they have a very large player base. The Dark Souls series is a good example, with its built-in forced PvP system, though you can kinda avoid it for solo play. And it still has a large player base. But, I’d also point out some of the controversy around the Seamless Co-op mod for Elden Ring. When it released, the PvP players were howling from the walls about how long it made invasion queues. Since Seamless Co-op meant that the players using it were removed from the official servers, the number of easy targets to invade went way, way down. It seems like a lot of folks like to have co-op without the risk of invasion.

    As a longer answer to this, let me recommend two videos from Extra Credits:

    These videos provide a way to think about players and how they interact with games and each other.


  • sylver_dragon@lemmy.world to Linux@lemmy.ml · Antiviruses? (5 months ago)

    Ultimately, it’s going to come down to your risk profile. What do you have on your machine which you wouldn’t want to lose or have released publicly? For many folks, we have things like pictures and personal documents which we would be rather upset about if they ended up ransomed. And sadly, ransomware exists for Linux. Lockbit, for example, is known to have a Linux variant. And this is something which does not require root access to do damage. Most of the stuff you care about as a user exists in user space and is therefore susceptible to malware running in a user context.

    The upshot is that due care can prevent a lot of malware. Don’t download pirated software, don’t run random scripts/binaries you find on the internet, watch for scam sites trying to convince you to paste random bash commands into the console (Clickfix is after Linux now). But, people make mistakes and it’s entirely possible you’ll make one and get nailed. If you feel the need to pull stuff down from the internet regularly, you might want to have something running as a last line of defense.

    That said, ClamAV is probably sufficient. It has a real-time scanning daemon and you can run regular, scheduled scans. For most home users, that’s enough. It won’t catch anything truly novel, but most people don’t get hit by the truly novel stuff. It’s more likely you’ll be browsing for porn/pirated movies and either get served a Clickfix/Fake AV page or you’ll get tricked into running a binary you thought was a movie. Most of these will be known attacks and should be caught by A/V. Of course, nothing is perfect. So, have good backups as well.


  • I started self hosting in the days well before containers (early 2000’s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things and with bare metal installs this has a way of adding cruft to servers and slowly causing the system to get into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat out incompatible software. While these issues have gotten much better over the years, isolating applications avoids this problem completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted game servers in a container. The Dockerfile usually just requires a few tweaks for AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better, I currently just rebuild the image, but it gets the job done.
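    As a rough sketch of the idea (this is a hypothetical template, not a tested file; the AppId shown is Valheim’s dedicated server, and the ports, flags and paths are placeholders to swap per game):

    ```dockerfile
    # Hypothetical game-server template -- AppId, ports, and launch command are placeholders
    FROM steamcmd/steamcmd:ubuntu

    # Per-game tweak #1: the Steam AppId of the dedicated server
    ARG APP_ID=896660
    RUN steamcmd +force_install_dir /server +login anonymous \
            +app_update ${APP_ID} validate +quit

    WORKDIR /server

    # Per-game tweak #2: the ports the server listens on
    EXPOSE 2456-2457/udp

    # Per-game tweak #3: where save data lives, mounted from the host via docker-compose
    VOLUME /server/saves

    CMD ["./valheim_server.x86_64", "-name", "MyServer", "-world", "MyWorld", "-savedir", "/server/saves"]
    ```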