• 0 Posts
  • 20 Comments
Joined 2 months ago
Cake day: January 28th, 2026

  • There’s probably a better way, but the way that works for me is apt show <package> and then copying everything from the Recommends field into an apt install command.

    Edit: people in forums are suggesting the simpler apt install --reinstall --install-recommends <pkg>.

    I find this preferable because it means the recommended packages get marked as auto, which means an uninstall will automatically remove them.

    On the other hand, it forces a redownload and install of <package> which might be unwanted. If you want the best of both worlds, you’re going to have to manually install the recommended packages, then also manually apt-mark auto <list of packages>—although that might make them immediately susceptible to an autoremove, so this might require some tweaking; I’ll work it out when I have time.

    If you want to always install recommended packages, add APT::Install-Recommends "1"; to your apt.conf (which just turns on the --install-recommends option by default, behind the scenes).
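    A sketch of both approaches in shell (the package name foo and the awk parsing of apt-cache output are my own illustration, not from the thread):

    ```shell
    # Option 1: one command; the recommends come in marked "auto",
    # at the cost of redownloading and reinstalling foo itself
    sudo apt install --reinstall --install-recommends foo

    # Option 2: no redownload of foo; install the recommends manually,
    # then mark them auto so a later uninstall can sweep them up.
    # apt-cache depends prints lines like "  Recommends: bar"
    recs=$(apt-cache depends foo | awk '/Recommends:/ { print $2 }')
    sudo apt install $recs
    sudo apt-mark auto $recs
    ```

    On the autoremove worry above: I believe apt's default config (APT::AutoRemove::RecommendsImportant) protects recommends of installed packages from autoremove, but that's worth verifying on your own system before relying on it.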


  • Excellent point, ineffectively articulated. I agree with the sentiment, but I do have to argue with some of his points (because it’s fun, and it’s okay to do things just for the hell of it).

    > I wrote my MSc on The Metaverse. Learning to build VR stuff was fun, but a complete waste of time. There was precisely zero utility in having gotten in early… But I’m struggling to think of anyone who has earned anything more than bragging rights by being first.

    You’re your own counterexample. You got to experience the metaverse while it was still alive, which you wouldn’t have if you had waited just a few years. And you got a Master’s degree out of it, not just bragging rights.

    > But I’m struggling to think of anyone who has earned anything more than bragging rights by being first. Some early investors made money

    So you’re not struggling that much if you can start off the next sentence with an example of people who earned more than just bragging rights.

    > But I’m struggling to think of anyone who has earned anything more than bragging rights by being first. Some early investors made money - but an equal and opposite number lost money.

    This grossly overestimates the ratio of successes to failures. You’re muuuuuuch more likely to lose money on the gamble of The Next Big Thing than win; for every HTTP there’s a Gopher and Usenet and a dozen others that all look the same from the outside looking in.

    > For every HTML 2.0 you might have tried, you were just as likely to have got stuck in the dead-end of Flash.

    Flash is a terrible example. It ran its lifecycle already, sure, but it was HUGE back in the day, and people benefited from using it: some of my favorite animators and gamedevs cut their teeth on Flash, people’s work got recognized by a global audience, people landed jobs, Flash made it onto cable TV channels, and people still light up at the mention of Homestar Runner to this day. People also made money, sure, but there are more benefits to playing with tech than “it makes money happen.”

    Which brings me to my final gripe: this is all framed as if the only benefit of a technology is if it’s productive or profitable. When you discuss your favorite show with friends, are you considering whether the conversation can be converted into capital? When you watch a beautiful sunset, do you fret over whether the clouds will help you achieve your quarterly goals? Out on dates with your SOs, do you have to take a break in the bathroom to worry whether the evening is meeting KPIs?

    Sometimes the benefit of things is just having the experience, instead of treating it as a means to an end. Yeah, don’t let the FOMO ruin your day, but maybe take some time to play around with a doomed technology before it becomes abandoned and the community ceases to be. Maybe you’ll become a recognized expert, maybe you’ll learn some valuable lessons you can transfer to tech with more longevity, or maybe you’ll just have fun.

    And honestly, what’s the fucking point of living, working and grinding and suffering, if not for the fun in between it all?


  • I had very few issues with a GTX 970 and i7-4790k. The only issues I hear about with either anymore are the Linux kernel not supporting some of the features of newer GPUs (e.g., I know ray-tracing was a pain point at one point).

    I don’t like recommending distros based on such a general use case, mainly because every distro can be tweaked and configured to exactly what you want. Instead, you should research the different mainline distros that have been around for decades—Arch, Debian, Fedora, Gentoo, Guix, NixOS, OpenSuse, Slackware—and see what they’re about, what sets them apart from others, what the maintainers’ philosophies are, and what kind of package management system they work with. Once one sounds better than the others, look into it and try it out.

    Dos and Don’ts:

    Don’t try a niche distro. They are harder to troubleshoot and less likely to be actively maintained.

    Don’t use Ubuntu. It’s just a suckier version of Debian. It used to be the user-friendly Debian; now Debian is the more user-friendly of the two.

    Don’t dual-boot with Windows. This just solidifies your reliance on Windows, especially if you’re the type to give up on problem-solving issues that you didn’t have in Windows. It can also cause issues that leave Linux unbootable.

    Do try a live usb with persistence before you commit entirely. It’s not exactly the same as a complete install, but it’s close enough to let you know how the OS feels and what hardware will or won’t work with it. Some people say try a VM first, but that won’t have direct hardware access.

    Do problem solve the little things. Anything that irks you or bothers you or just slows down your workflow. It doesn’t have to be an actual bug or glitch, just anything that could be better. This not only solidifies the feeling of ownership over your OS—you no longer have to settle for anyone else’s lousy design choices—it teaches you the resources for troubleshooting larger issues.

    Do plan around things not being plug and play at first. Want to test if a game runs on Linux? Great, set aside a couple of hours beforehand: first to install steam and set it up, then to figure out Proton, then to troubleshoot the game not even booting up, then to fix any glitches or whatnot, then to get your controller working. This won’t always be the case, but it will irk you a lot less when it is if you expect it. The more you make time for solving these issues now, the less time they’ll take up in the future (either they’ll be gone, or you’ll immediately know how to fix them, or your troubleshooting will be more streamlined).

    Do set aside time to learn about Linux “under the hood.” You don’t have to become a computer scientist, but it will save you a lot of headaches, show you cool things you can do, and make your computer a smoother experience. It especially helps if you take the time to learn as they come up: e.g. installer asks you what “bootloader” you want, but you’re not sure what that is, what it does, or why it’s necessary? Now’s the best time to take a little learning detour.

    Do ask questions on forums.

    Don’t listen to the people who shame you for asking.

    Do listen to the people who try to show you a better way of doing things, even if it’s not your way.




  • This article was more constructive (suggesting alternatives) than destructive (leveling critiques), but it did link to several critiques/vulnerabilities with OpenPGP.

    Unfortunately, half are about implementation issues (granted, implementing something correctly is made more difficult when it’s as convoluted and all-encompassing as PGP)—which are hopefully not applicable to Delta thanks to their 3rd-party applied-cryptography audit—and the rest are made obsolete by the 2024 updates to the standard—RFC 9580, the so-called “crypto-refresh.”

    Do you have any critiques that address the current state of the PGP protocol’s security?






  • Sorry for the delay, I’m a check-the-feed-once-a-week type lemming. I love computers, and I admire anyone who also wants to learn; I’m by no means an expert, but I am happy to share what I know.

    A distro is a whole OS. An Operating System, as the name implies, is the whole system of software that makes a computer functional and interactive (able to take input from a user and respond appropriately). This software definitely includes a Kernel (more on that later), but may also include things like a Display Server (software that acts as an intermediary between GUI software and the display/screen), a Desktop Environment (a subsystem of related software that handles things like window styles, layouts, and icons), or even utilities (programs that you use to modify the behavior of other programs/processes).

    In OSes, there is a fuzzy boundary between programs the user runs, and the low-level processes that run on hardware. This boundary separates “User Space” – programs and processes that run on behalf of the user – from “Kernel Space” – programs and processes that handle the hardware the machine is run on. Where most programs that you interact with are User Space – such as web browsers, video games, multimedia programs, or even most command-line programs – Kernel Space programs are ones that perform tasks like determining how memory is managed, or what processes are running during any given CPU cycle. The Kernel is the set of software that is responsible for all this “behind the scenes” computer management. This means that programs don’t have to be written to handle the specifics of the hardware they’re running on, it means that each program you run is much less likely to crash your PC, and it means that it’s a lot harder for malicious software to do serious damage to your PC or OS or other programs.

    So that’s the Cliff’s Notes; now the ELI5 analogy version: an Operating System is like a grocery store. The Desktop Environment is all the visual elements that go into the experience, stuff like branding, signs, employee uniforms, displays, even the way the shelves are laid out. The customers are the userspace programs, and that means the employees (and the automated systems that help run the store) are the Kernel. Because the relationship between the customers and employees mostly revolves around the merchandise being sold, the merchandise will be analogous to the computer’s physical resources.

    A customer can come in, select what goods they want, and check out, but they can’t stock the shelves themselves, nor order something that isn’t stocked, nor adjust prices, nor open the store if it’s closed. To do any of that, they need to ask an employee to perform those actions, and find a way to deal with it if the employee won’t or can’t. This also means the employees are responsible for opening the store, getting everything ready for the customers, cleaning up after the customers, and locking up the store after everyone’s left. This makes it easy for the customers, because they don’t have to bother with all the work that goes into shipping, pricing, stocking, theft, etc., nor do they have to worry about dealing with every possible type of shampoo they might come across depending on which grocery store they go to.

    > Also - I have Linux mint - does that tell you what 3 I have and if not how can I find out?

    I can figure out 2 of the 3. Linux Mint is the distro / OS, and it runs on the Linux kernel. This is why distros like Arch and Debian and Linux Mint and Nix all get lumped together under the “Linux” label: they all run on the same kernel (and follow the same standards of OS design known as POSIX).

    The Desktop Environment (DE) you have depends on which ISO you (or your friend) downloaded from the website; the editions are named by the DE (e.g., if you installed Cinnamon Edition, then Cinnamon is your DE). The other easy way to tell is to run the terminal command inxi -S (and remember to check the man page, man inxi or online, before running random commands from the internet if you don’t know what they do) and then check what it says under the section labeled "Desktop: "
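    A quick sketch of that check, plus an alternative (the XDG variable is my own addition; most graphical sessions set it, but it isn’t guaranteed everywhere):

    ```shell
    # Print a system summary; the "Desktop:" field names your DE
    # (guarded in case inxi isn't installed yet)
    command -v inxi >/dev/null && inxi -S

    # Most graphical sessions also export this variable with the DE's name
    echo "$XDG_CURRENT_DESKTOP"
    ```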


  • I’m going to comment again, not to be an asshole, but because this is an entirely separate stream of thoughts from my previous comment:

    > ‘GUI/UX for everything, absolutely no CLI’ approach

    That’s not a distro thing, it’s a Desktop Environment thing. I personally use GNOME on my daily driver, but I’ve also used Xfce and MATE and gotten away with those. I’d say that GNOME is probably the most “idiot proof,” which is why I use it, but YMMV.

    Linux “requiring the CLI” hasn’t been true for quite a few years now; it’s just stuck around for a couple of reasons (imo):

    1. Tutorials/guides/advice about Linux tends to focus on the CLI because it’s easier to figure out someone’s OS and have them copy-paste a command, than to find out the specifics of their graphical setup and walk them through every window and button press.

    2. New users need to know and understand the difference between Kernel, OS, and Desktop Environment to find the answers they’re looking for.

    If you tell Grandma that you installed Linux for her, the first time she tries to figure it out herself, she’s gonna search “how to change volume in Linux” on Google, and she’s going to be bombarded with a thousand answers all saying something different, most telling her to install programs, and most telling her to use the command line. Because Linux is not an operating system, it’s a family of dozens of operating systems that can each be configured thousands of different ways.

    If you tell her “I installed Fedora,” she’s going to run into the same issue, but on a lesser scale. At least there’s only a few hundred different ways on a per-distro basis.

    If you tell her “I installed GNOME,” she will look up “how to change volume in GNOME,” and find her answer. But now you need to explain to her the difference between the three, and when to include that information in her searches, and she will ask “why did I get to just say ‘how to X in Windows’ before, without having to memorize 3 different names for the same thing that all give me different answers???”

    And yes, your grandma will just call you to ask anyway, but what about when it’s your friend trying to figure it out at 3 am and he can’t get ahold of you?

    Meanwhile, the terminal is (more or less) distro-/DE-agnostic. So their options are to learn more about how is Opperating System formed than they’ll realistically ever need to know, or use the reviled terminal. Such is the plight of DIY OSes.



    > full UI/UX behaviour that behaves almost identical to Windows/Mac

    You want Windows or Mac.

    If you want a computer that you can do stuff like web-browsing, document/spreadsheet/pdf/slideshow editing/creation, gaming, or multimedia processing on, there are distros and utilities on Linux that make those more-or-less easy and beginner-friendly,

    BUT it requires divesting oneself of the habits, behaviors, and paradigms of other operating systems and being willing to learn anew. Community-based Libre software is developed in an entirely different way for an entirely different purpose; because of that, it is nearly impossible for it to recreate the same experience as for-profit proprietary software. One is made by a community hacking together a functional system that suits their needs; the other is made to generate revenue, and thus has to keep users dependent on it by trapping them in dark patterns and ignorance of its workings.

    If you just want “Mac or Windows, but free as in beer,” suck it up, pay the devil his due, and buy one of those OSes. Libre Software is an entirely different paradigm, and thus requires a whole paradigm shift before anyone will be happy with it; on-boarding people who aren’t ready to divest themselves of the old paradigm just leads to disgruntled users who blame you for anything wrong with their PC, and creates a market void in the FOSS community ready to be filled by corpo proprietary slopware.



  • Reliable, clear release/support schedule: Debian Stable

    Unlike Fedora Spins, most upstream distros don’t come with a DE pre-packaged; you choose it during the install process (or install a custom one from other sources post-install).

    DEs currently offered by the Debian Installer include: Xfce, LXDE, LXQt, MATE, Lomiri, and of course Plasma and GNOME.

    Not in the installer, but in the repository: Cinnamon, Budgie, Enlightenment, FVWM-Crystal, GNUstep/Window Maker, Sugar, “and possibly others” (according to the wiki).

    You can also do what I do on my less-powerful laptops and just install a window-manager and associated utilities—just make sure to uncheck all DE options during install (you will be forced to use the console until you have a display server and window manager, tho). Right now I’m rocking i3 on my laptops; I would use Sway, but for some reason it’s more resource intensive.

    Other offerings in the repository include: Openbox, Fluxbox, Compiz, Awesome, dwm, Notion, and Wmii

    My personal recs are i3 (and recommended packages), Xfce, or MATE. I’ve used and liked all 3. I still use GNOME for my desktop, but those 3 are what I go with otherwise.


  • If you ain’t scared of the terminal and a little coding, you can use the format Firefox exports to. It’s a single HTML file with everything stored in one big <DL>, with folders stored as headers and a sublist. All the URLs and metadata are stored in the tags.

    It should be relatively easy to make a shell script that takes some arguments—e.g. action (add or remove), type (folder or link), url, and title—and modifies the file appropriately, programmatically taking metadata like creation/modification time (Firefox stores them in Unix time) from the system and the favicon from the website. That way it will work with any Firefox anywhere.
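    A minimal sketch of just the “add a link” case (the function name, and the trick of splicing in before the file’s last closing </DL>, are my own illustration; a real add/remove tool would need proper HTML handling and escaping):

    ```shell
    #!/bin/sh
    # add_bookmark FILE URL TITLE — append a link to the top level of a
    # Netscape-format bookmarks file (the format Firefox exports).
    # Sketch only: no escaping of URL/TITLE, "add link" case only.
    add_bookmark() {
      file=$1; url=$2; title=$3
      now=$(date +%s)  # Firefox stores ADD_DATE as Unix time
      entry="    <DT><A HREF=\"$url\" ADD_DATE=\"$now\">$title</A>"
      # Reverse the file, insert after the first </DL> seen (i.e. before
      # the last </DL> in the original order), then reverse back
      tac "$file" \
        | awk -v e="$entry" '!done && /<\/DL>/ { print; print e; done=1; next } { print }' \
        | tac > "$file.tmp" && mv "$file.tmp" "$file"
    }
    ```

    The reverse-insert-reverse dance is just to dodge the nested-folder problem: the last </DL> in the file is always the top-level one.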

    If you want something more full-featured, buku looks promising, although I haven’t tried it. For a GUI, keditbookmarks—which is standalone, despite being part of KDE—looks like a simple interface that can import/export in Firefox, Opera, Netscape, or Internet Explorer formats, as well as export to a “printable html file” (which I assume meets your definition of “readable”), and it should open links in the system’s default browser.


  • I choose to use the terminal because I can update my software without requiring a restart (I use Debian btw); for some reason, GNOME’s Software app cannot do this without restarting. I also prefer terminal-based text editing for coding and scripting.

    Depending on use-case, you can absolutely just use the distro without ever touching the terminal. It requires extra work to sift through all the online advice and docs that center around CLI commands though. The Average Windows User won’t be digging that deep in their system to customize the shit out of it like an Arch user, so they won’t need to touch the stuff that can only be accessed via command line. The Above Average Windows User will already be comfortable with the command prompt anyway.

    > Which distribution is the most user-friendly while still updated packages?

    All of them? Why would a distro choose to be hostile to its users? (/s)

    I assume you mean “beginner-friendly”? In that case, I would stick to Debian: more stability than Windows, harder to break than Arch, and lighter-weight than Fedora.

    Those are the only 3 I’ve daily-driven in the past couple of years, and those are my takeaways. I can’t give informed input on any of the popular derivatives, except Ubuntu, which I did use for a while (back in 2014–2016): it was more prone to breaking shit than Debian, and less beginner-friendly too (fuck Snaps, and fuck your Pro subscription data-harvesting up-selling bullshit).


  • Since you’re installing Debian, presumably you’ve done the required reading according to their wiki, and seen the DontBreakDebian page.

    If not, here’s the portion I’m thinking of (emphasis mine):

    > Don’t make a FrankenDebian
    >
    > Debian Stable should not be combined with other releases carelessly. If you’re trying to install software that isn’t available in the current Debian Stable release, it’s not a good idea to add repositories for other Debian releases.
    >
    > First of all, apt-get upgrade’s default behavior is to upgrade any installed package to the highest available version. If, for example, you configure the forky archive on a trixie system, APT will try to upgrade almost all packages to forky.
    >
    > This can be mitigated by configuring apt pinning to give priority to packages from trixie.
    >
    > However, even installing a few packages from a “future” release can be risky. The problems might not happen right away, but the next time you install updates.
    >
    > The reason things can break is because the software packaged for one Debian release is built to be compatible with the rest of the software for that release. For example, installing packages from forky on a trixie system could also install newer versions of core libraries including libc6. This results in a system that is not testing or stable but a broken mix of the two.
    >
    > Repositories that can create a FrankenDebian if used with Debian Stable:
    >
    > • Debian testing release (currently forky)
    > • Debian unstable release (also known as sid)
    > • Ubuntu, Mint or other derivative repositories are not compatible with Debian!
    > • **Ubuntu PPAs and other repositories created to distribute single applications**
    >
    > Some third-party repositories might appear safe to use as they contain only packages that have no equivalent in Debian. However, there are no guarantees that any repository will not add more packages in the future, leading to breakage.
    >
    > Finally, packages in official Debian releases have gone through extensive testing, often for months, and only fit packages are allowed in a release. On the other hand, packages from external sources might alter files belonging to other packages, configure the system in unexpected ways, introduce vulnerabilities, or cause licensing issues.
    >
    > Once packages from unofficial sources are introduced in a system, it can become difficult to pinpoint the cause of breakage, especially if it happens after months.

    I would personally add that this isn’t a case of “if”, but rather “when”. Even if it works at the beginning, all it takes is Mint deciding they want to use a newer library when they update the package you’re using, and suddenly your system won’t boot and there’s no clear, easy solution other than “restore from backup.”

    Even if you know what you’re doing, I would limit tinkering to binaries managed in $HOME/.local/bin (and any applications that work as package managers for that, like cargo, pip, or homebrew) or packages that you completely control yourself (such as through git pulls and compiling yourself).

    “Stick to the official repo” is generally the advice I would give for any distro, with the exception of DIY OSes that are intended to be patchwork, like Gentoo or Arch.

    THAT BEING SAID: I’m not saying “don’t install without a DE and piece your desired DE together from their parts.” Debian has a lot of DEs, window managers, and their individual parts all in the official repos; a lot of the difference you see between the versions Debian offers and the versions Mint or Ubuntu offer are basically just theming that you can do yourself without altering the system packages.

    If you absolutely must install a 3rd party repo, just understand you are sacrificing Debian’s selling point of stability, and waiving your rights to hold the Debian Maintainers responsible; and when your system breaks (which might not be for many years), it will be entirely your own fault.
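    For reference, the apt pinning mentioned in the quoted wiki text lives in a file under /etc/apt/preferences.d/. A minimal sketch (the filename and priority values are my own illustration, not from the wiki):

    ```text
    # /etc/apt/preferences.d/99-prefer-stable (hypothetical filename)
    # Keep trixie as the default source for everything...
    Package: *
    Pin: release n=trixie
    Pin-Priority: 900

    # ...and only pull from forky when explicitly requested
    # (e.g. apt install -t forky <package>)
    Package: *
    Pin: release n=forky
    Pin-Priority: 100
    ```

    Even with pinning in place, the FrankenDebian warning still applies: any forky package that drags in newer core libraries can still break things.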


  • Since you can run the apps (glitchy as they may be), nothing is wrong with the read perms on the drive. Maybe write perms or exec perms? But ntfs perms don’t map neatly onto POSIX perms, so it’s hard to say. Maybe try setting the gid to the vboxuser group id and see if that helps?

    You might also check out the mount manpage and look at the section about “Generic Mount Options”; this is the more in-depth explanation of the “options” column in fstab, and the defaults option (which depends on the distro) can hide stuff like nouser, which prevents users from mounting the drive.

    Finally, look into ACLs and how to manage those for interoperability across Windows and POSIX systems.

    Best case scenario, fixing it so your user has full access to the drive with the user,exec options fixes your issues. Otherwise, you’ve just gotta do the learning about ACLs and POSIX perms.
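    To illustrate the fstab side of it, a hypothetical entry for an NTFS data drive (the UUID, mount point, and uid/gid values are placeholders; check yours with blkid and id):

    ```text
    # /etc/fstab (hypothetical line)
    # uid/gid map every file on the drive to your user and group, since
    # NTFS permissions don't carry POSIX owners; umask=022 grants the
    # owner full access and everyone else read/execute
    UUID=XXXXXXXXXXXXXXXX  /mnt/data  ntfs-3g  uid=1000,gid=1000,umask=022  0  0
    ```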