Docker docs:

Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively ignoring your firewall configuration.
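
You can see this on a Docker host directly: published ports show up as DNAT rules in the nat table, and container traffic traverses the FORWARD chain (via Docker’s own chains) rather than the INPUT chain where ufw filters host traffic. A quick way to inspect, assuming a stock Docker setup (output will vary):

    # DNAT rules for published ports live in the nat table's DOCKER chain
    sudo iptables -t nat -L DOCKER -n -v

    # forwarded container traffic goes through FORWARD and DOCKER-USER, not INPUT
    sudo iptables -L FORWARD -n -v
    sudo iptables -L DOCKER-USER -n -v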

  • dohpaz42@lemmy.world

    It’s my understanding that docker uses a lot of fuckery and hackery to do what they do. And IME they don’t seem to care if it breaks things.

    • marcos@lemmy.world

      To be fair, the largest problem here is that it presents itself as the kind of isolation that would respect firewall rules, not that they don’t respect them.

      People wouldn’t make the same mistake in NixOS, despite it doing exactly the same.

    • null_dot@lemmy.dbzer0.com

      I don’t really understand the problem with that?

      Everyone is a script kiddy outside of their specific domain.

      I may know loads about python but nothing about database management or proxies or Linux. If docker can abstract a lot of the complexities away and present a unified way to configure and manage them, where’s the bad?

    • LordKitsuna@lemmy.world

      That is definitely one of the crowds, but there are also people like me who are just sick and tired of dealing with python, node, and ruby dependencies. The install process for services has only become more convoluted over the years. And then you show me an option where I can literally just slap down a compose.yml and hit “docker compose up -d” and be done? Fuck yeah I’m using that
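
      For the curious, a compose.yml really can be that short. A minimal sketch (image and ports are placeholders):

          services:
            web:
              image: nginx:alpine   # stand-in for whatever you actually run
              ports:
                - "8080:80"         # host port 8080 -> container port 80
              restart: unless-stopped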

    • MangoPenguin@lemmy.blahaj.zone

      No, it’s popular because it allows people/companies to run things without needing to deal with updates and dependencies manually.

    • Appoxo@lemmy.dbzer0.com

      Another take: why should I care about dependency hell if I can just spin up the same service on the same machine without needing an additional VM and with minimal configuration changes?

  • ohshit604@sh.itjust.works

    This post inspired me to try podman. After it pulled all the images it needed, my Proxmox VM died; it won’t boot because the disk is now full. It’s currently 10pm, tonight’s going to suck.

      • ohshit604@sh.itjust.works

        Okay, so I’ve done some digging and got my VM to boot up! This is not Podman’s fault. I got lazy setting up Proxmox and never really learned LVM volume storage: internally the VM shows 90 GB used of 325 GB, while Proxmox claims 377 GB is used on the LVM-thin partition.

        I’m backing up my files as we speak, thinking of purging it all and starting over.

        Edit: before I do the sacrificial purge, this seems promising.
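
        For anyone else who hits this: LVM-thin only reclaims space the guest has actually discarded, so files deleted inside the VM keep counting against the thin pool. Assuming discard is enabled on the virtual disk in Proxmox, trimming inside the VM should bring the numbers back in line:

            # inside the VM: tell the storage layer which blocks are actually free
            sudo fstrim -av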

  • MangoPenguin@lemmy.blahaj.zone

    This only happens if you essentially tell docker “I want this app to listen on 0.0.0.0:80”

    If you don’t do that, then it doesn’t punch a hole through UFW either.
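
    In compose terms, the difference is just the host address on the port mapping. A sketch (the service itself is hypothetical):

        services:
          app:
            image: nginx:alpine       # hypothetical service
            ports:
              # "80:80" is shorthand for 0.0.0.0:80:80, which Docker DNATs in
              # ahead of ufw, so it's reachable from everywhere
              - "127.0.0.1:8080:80"   # loopback only: nothing punched through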

  • steventhedev@lemmy.world

    You’re forgetting the part where they had an option to disable this fuckery, and then proceeded to move it twice - exposing containers to everyone by default.

    I had to clean up compromised services twice because of it.

  • Harbinger01173430@lemmy.world

    NAT is not security.

    Keep that in mind.

    It’s just a crutch IPv4 has to use because it’s not as powerful as the almighty IPv6.

  • jwt@programming.dev

    Somehow I think that’s on ufw not docker. A firewall shouldn’t depend on applications playing by their rules.

    • qaz@lemmy.world (OP)

      ufw just manages iptables rules; if docker overrides those, it’s on them IMO

      • jwt@programming.dev

        Feels weird that an application is allowed to override iptables though. I get that when it’s installed with root everything’s off the table, but still…

        • MangoPenguin@lemmy.blahaj.zone

          Linux lets you do whatever you want and that’s a side effect of it, there’s nothing preventing an app from messing with things it shouldn’t.

          • WhyJiffie@sh.itjust.works

            there’s nothing preventing an app from messing with things it shouldn’t.

            that’s not exactly a linux specialty

      • null_dot@lemmy.dbzer0.com

        Not really.

        Both docker and ufw edit iptables rules.

        If you instruct docker to expose a port, it will do so.

        If you instruct ufw to block a port, it will only do so if you haven’t explicitly exposed that port in docker.

        It’s a common gotcha, but it’s not really a shortcoming of docker.
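
        Docker does leave the DOCKER-USER chain for exactly this; rules you put there are evaluated before Docker’s own forwarding rules. A sketch, with eth0 as the assumed external interface:

            # -I inserts at the top, so add the DROP first and the ACCEPT second;
            # the ACCEPT then sits above the DROP and is matched first.
            sudo iptables -I DOCKER-USER -i eth0 -j DROP
            sudo iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT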

      • pressanykeynow@lemmy.world

        iptables has been deprecated in favor of nftables for like a decade now; the fact that both still use it might be the source of the problem here.
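
        (Most distros now ship iptables as a compatibility shim over nftables anyway; you can check which backend you’re actually on:)

            iptables --version   # prints e.g. "iptables v1.8.10 (nf_tables)" or "(legacy)"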

  • peoplebeproblems@midwest.social

    Ok

    So, confession time.

    I don’t understand docker at all. Everyone at work says “but it makes things so easy.” But it doesn’t make things easy. It puts everything in a box, executes things in a box, and you have to pull other images to use in your images, and it’s all spaghetti in the end anyway.

    If I can build an Angular app the same on my Linux machine and my Windows PC, and everything works identically on either, and the only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed, how is that not simpler than everything you need to do to set up a goddamn container?

    • qaz@lemmy.world (OP)

      This is less of an issue with JS, but say you’re developing this C++ application. It relies on several dynamically linked libraries, so to run it you need to install all of those libraries and make sure the versions are compatible and don’t cause weird issues that didn’t happen with the versions on the dev’s machine. These libraries aren’t available in your distro’s package manager (only as RPM), so you have to clone them from git and install all of them manually. This quickly turns into a hassle, and it’s much easier to just prepare one image and ship it, knowing the entire environment is the same as when it was tested.

      However, the primary reason I use it is because I want to isolate software from the host system. It prevents clutter and allows me to just put all the data in designated structured folders. It also isolates the services when they get infected with malware.
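
      As a sketch of what shipping that environment looks like in a Dockerfile (the library names are invented for illustration):

          # build stage: compile against pinned library versions
          FROM debian:bookworm AS build
          RUN apt-get update && apt-get install -y g++ cmake libfoo-dev   # libfoo-dev is hypothetical
          COPY . /src
          RUN cmake -S /src -B /build && cmake --build /build

          # runtime stage: ships the exact shared libraries the binary was tested against
          FROM debian:bookworm
          RUN apt-get update && apt-get install -y libfoo1   # hypothetical runtime package
          COPY --from=build /build/app /usr/local/bin/app
          CMD ["app"]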

      • peoplebeproblems@midwest.social

        Ok, see, the sandboxing makes sense, and for a language like C++ it makes sense. But every other language I’ve used it with is already portable to every OS I have access to, so it feels like that defeats the benefit of using a language that’s portable.

    • sidelove@lemmy.world

      have to make sure of is that the deployment environment has node and the angular CLI installed

      I have spent so many fucking hours trying to coordinate the correct Node version to a given OS version, fucked around with all sorts of Node management tools, ran into so many glibc compat problems, and regularly found myself blowing away the packages cache before Yarn fixed their shit and even then there’s still a serious problem a few times a year.

      No. Fuck no, you can pry Docker out of my cold dead hands, I’m not wasting literal man-weeks of time every year on that shit again.

      (Sorry, that was an aggressive response and none of it was actually aimed at you, I just fucking hate managing Node.js manually at scale.)
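
      That whole mess is what a pinned base image makes go away: Node and the glibc it was built against ship together. A sketch (tag and entrypoint are just examples):

          FROM node:20-bookworm   # pins the Node version and its glibc together
          WORKDIR /app
          COPY package*.json ./
          RUN npm ci              # reproducible install from the lockfile
          COPY . .
          CMD ["node", "server.js"]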

    • null_dot@lemmy.dbzer0.com

      Sure, but that’s an Angular app, and you already know how to manage its environment.

      People self-host all sorts of things, with dozens of services on their home server.

      They don’t need to know how to manage the environment for those services because docker “makes everything so easy”.

    • llii@discuss.tchncs.de

      You’re right. As an old-timey Linux user I find it more confusing than running the services directly, too. It’s another abstraction layer that you need to manage and which has its own pitfalls.

    • MangoPenguin@lemmy.blahaj.zone

      The only thing I really have to make sure of is that the deployment environment has node and the angular CLI installed

      That’s why Docker is popular. Making sure every single system running your app has the correct versions of node and angular installed is a royal pain in the butt.

    • miss phant@lemmy.blahaj.zone

      I put off docker for a long time for similar reasons but what won me over is docker volumes and how easy they make it to migrate services to another machine without having to deal with all the different config/data paths.
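
      A minimal sketch of that (names are illustrative): the container path stays identical on every machine, and the data lives in one Docker-managed volume you can copy across.

          services:
            db:
              image: postgres:16
              volumes:
                - dbdata:/var/lib/postgresql/data   # same container path on any host

          volumes:
            dbdata:   # stored under /var/lib/docker/volumes/ on the host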

    • FuckBigTech347@lemmygrad.ml

      I pretty much share the same experience. I avoid using docker or any other containerizing thing due to the amount of bloat and complexity that this shit brings. I always go out of my way to get software running w/o docker, even if there is no documented way. If that fails, then the software just sucks.

    • TrickDacy@lemmy.world

      Even when it seems like an app runs identically on every platform, you can easily run into issues down the road. If you have a well-configured docker image, that issue is just solved ahead of time. Hell, I find it worth messing with just for moving a node.js app between Linux boxes, which should run into the fewest issues I can think of.

  • skuzz@discuss.tchncs.de

    For all the raving about podman, it’s dumb too. I’ve seen multiple container networks stupidly route traffic across each other when they shouldn’t. Yay, services kept running, but it defeats the purpose. Networking should be strict enough that it doesn’t work unless it is configured correctly.

    • qaz@lemmy.world (OP)

      I also ended up using firewalld and it mostly worked, although I first had to change some zone configs.
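
      (Newer Docker releases register their own zone with firewalld when it’s active, so listing the active zones shows where the container interfaces landed; both commands are stock firewall-cmd:)

          sudo firewall-cmd --get-active-zones          # which interfaces sit in which zone
          sudo firewall-cmd --zone=docker --list-all    # what Docker's zone permits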

  • purplemonkeymad@programming.dev

    Well yea, ofc it works like that: the services are not on the same network, so the packets need to be sent onto another adapter. That means either NAT or forwarding tables.

    Whether that was a good design choice on docker’s part is another question.