Evening y’all
I’ll try to keep it brief: I need to move my reverse proxy (Traefik) to another machine, and I’m opting to use Docker Swarm for the first time so that I’m not exposing a bunch of ports on my main server over my network. Ideally I’d like to have almost everything listening on localhost while Traefik does its thing in the background.
Now I gotta ask: is Docker Swarm the best way to go about this? I know very little about Kubernetes, and from what I’ve read/watched it seems like Swarm was designed for this very purpose. However, I could be entirely wrong here.
What are the key changes that distinguish a typical Compose file from a Swarm stack file?
Snippet of my current compose file:
services:
  homepage:
    image: ghcr.io/gethomepage/homepage
    hostname: homepage
    container_name: homepage
    networks:
      main:
        ipv4_address: 172.18.0.2
    environment:
      PUID: 0 # optional, your user id
      PGID: 0 # optional, your group id
      HOMEPAGE_ALLOWED_HOSTS: MY.DOMAIN,*
    ports:
      - '127.0.0.1:80:3000'
    volumes:
      - ./config/homepage:/app/config # Make sure your local config directory exists
      - /var/run/docker.sock:/var/run/docker.sock #:ro # optional, for docker integrations
      - /home/user/Pictures:/app/public/icons
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homepage.rule=Host(`MY.DOMAIN`)"
      - "traefik.http.routers.homepage.entrypoints=https"
      - "traefik.http.routers.homepage.tls=true"
      - "traefik.http.services.homepage.loadbalancer.server.port=3000"
      - "traefik.http.routers.homepage.middlewares=fail2ban@file"
  traefik:
    image: traefik:v3.2
    container_name: traefik
    hostname: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      main:
        ipv4_address: 172.18.0.26
    ports:
      # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
      - target: 80
        published: 55262
        mode: host
      # Listen on port 443, default for HTTPS
      - target: 443
        published: 57442
        mode: host
    environment:
      CF_DNS_API_TOKEN_FILE: /run/secrets/cf_api_token # note using _FILE for docker secrets
      # CF_DNS_API_TOKEN: ${CF_DNS_API_TOKEN} # if using .env
      TRAEFIK_DASHBOARD_CREDENTIALS: ${TRAEFIK_DASHBOARD_CREDENTIALS}
    secrets:
      - cf_api_token
    env_file: .env # use .env
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./config/traefik/traefik.yml:/traefik.yml:ro
      - ./config/traefik/acme.json:/acme.json
      # - ./opt:/opt
      #- ./config/traefik/config.yml:/config.yml:ro
      - ./config/traefik/custom-yml:/custom
      # - ./config/traefik/homebridge.yml:/homebridge.yml:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik.MY.DOMAIN`)"
      #- "traefik.http.middlewares.traefik-ipallowlist.ipallowlist.sourcerange=127.0.0.1/32, 192.168.1.0/24, 208.118.140.130, 172.18.0.0/16"
      #- "traefik.http.middlewares.traefik-auth.basicauth.users=${TRAEFIK_DASHBOARD_CREDENTIALS}"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik.MY.DOMAIN`)"
      #- "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=cloudflare"
      - "traefik.http.routers.traefik-secure.tls.domains[0].main=MY.DOMAIN"
      - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.MY.DOMAIN"
      - "traefik.http.routers.traefik-secure.service=api@internal"
      - "traefik.http.routers.traefik.middlewares=fail2ban@file"
networks:
  main:
    external: true
    ipam:
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1
I censored out my actual domain with MY.DOMAIN, so if that confuses people, I apologize.
Update:
So, I’ve come across an application called Traefik-Kop which essentially allows for Swarm-like communication between Traefik and two Docker engines.
This isn’t foolproof, as I do have to expose ports on the main server; however, it was the simplest way of achieving what I was going for.
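For anyone curious about the moving parts: Traefik-Kop watches the Docker socket on the second machine and publishes the routing config it finds (from the same traefik.* labels) into a Redis instance, which Traefik on the main machine then reads through its Redis provider. Very roughly, and not copied from my working config (the image path is an assumption, and I’m leaving out Traefik-Kop’s own settings; check its README):

# Secondary host, sketch only
services:
  redis:
    image: redis:7
    ports:
      - '6379:6379'   # must be reachable from the Traefik host; restrict access appropriately

  traefik-kop:
    image: ghcr.io/jittering/traefik-kop   # assumed image path, verify against the README
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # reads the traefik.* labels on local containers
    # Traefik-Kop's Redis address, bind IP, etc. are set via environment
    # variables documented in its README (omitted here).

And on the main host, Traefik’s static config gains a Redis provider alongside the Docker one:

# traefik.yml on the main host, sketch only
providers:
  redis:
    endpoints:
      - 'SECONDARY.HOST:6379'   # placeholder address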
I want to say thank you to everyone who has commented. I haven’t had much time to respond to comments here, but I have read them all; y’all’s insight is much appreciated!
Update 2:
People here suggest Pangolin; however, I just spent the last 3 hours trying to integrate Pangolin with the Traefik instance I already have set up, and it was not fun. I couldn’t figure out how Pangolin is able to communicate with Traefik if it doesn’t expose any ports or define Docker labels. Once I figured out that Pangolin’s web UI runs on 3002:3002, I was able to reverse proxy it; however, when attempting to log in I kept running into 404 errors.
I’ll give it another go when I’m no longer frustrated with it as it does seem like the best route for me to take.
Well, first off, Swarm doesn’t do environment variable interpolation on its own, so if you pass any in you’re going to need to pipe the output of docker compose config into docker stack deploy.
Your port settings are going to give it a problem too: Swarm can’t bind a published port to a specific host IP like 127.0.0.1:80:3000, so you lose the listen-only-on-localhost setup; published ports end up reachable on every interface (and, with the ingress routing mesh, on every node).
Regarding networking: since the whole paradigm is that you’re not defining a single container but a service that can live/move across multiple nodes, any traffic to a published port on any node in your swarm will be routed (round-robin style) across the replicas of that service. (This makes logging setup a PITA, ask me how I know!)
Bind mounts aren’t recommended; named volumes are preferred. Otherwise everything needs to be mirrored across all nodes, though it depends on the use case. A rough sketch of what that looks like is below.
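To make that concrete, here is roughly what your homepage service could look like as a stack file. This is just a sketch: the named volume, replica count, restart policy, and published port are my assumptions, not a tested config.

# Sketch only. Interpolate any variables first, e.g.:
#   docker compose config | docker stack deploy -c - mystack
services:
  homepage:
    image: ghcr.io/gethomepage/homepage
    networks:
      - main
    ports:
      - target: 3000
        published: 3000     # no 127.0.0.1:... binding available under Swarm
        mode: host          # host mode bypasses the ingress routing mesh
    volumes:
      - homepage-config:/app/config   # named volume instead of a bind mount
    deploy:                           # replaces restart:; container_name: is ignored
      replicas: 1
      restart_policy:
        condition: any
      labels:               # Traefik in swarm mode reads service labels, not container labels
        - "traefik.enable=true"
        - "traefik.http.routers.homepage.rule=Host(`MY.DOMAIN`)"
        - "traefik.http.services.homepage.loadbalancer.server.port=3000"

volumes:
  homepage-config:

networks:
  main:
    external: true    # would need to be an overlay network under Swarm

Traefik itself would also need its swarm provider enabled instead of the plain Docker provider for those service labels to be picked up.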
That being said, I’m not convinced that Swarm is the right answer here; I concur with @talentedkiwi@sh.itjust.works. You should just install Pangolin on your second machine.
I ran Swarm in a homelab and ended up switching back. I don’t remember all the details of the issues I had, but be aware of quorum: the managers need a majority online, so with only two nodes, if one goes down you can’t do anything with the other. Here is the link to the high availability docs. I also had issues getting everything back online when one went down (with only two). I had three nodes, but one failed and I didn’t replace it; if one of the remaining two went offline I had to manually set the swarm up again each time. I found it to be a hassle because I didn’t have enough need for multiple nodes and high availability.
I now use Pangolin (which runs Traefik underneath) on a VPS that VPNs back into my home, where I host the sites. I have the VPN in its own Proxmox container in the same VLAN as my servers.
I’ve worked with Swarm in a startup setting. It was an absolute nightmare. We eventually gave up and moved to Kubernetes.
That said, your use case does sound simpler. As I recall, we had to set up service discovery (with HashiCorp Consul) and secret management (with HashiCorp Vault) ourselves. I believe we also used Traefik for load balancing. There were other components as well, but I don’t remember it all. This was over 5 years ago, though.
The difficulty wasn’t configuring each piece but getting them to work together. There was also the time burned learning all the different tools. Kubernetes is great because everything is meant to work together.
But if it’s just two machines with separate configuration, do you even need orchestration? Is there a lot of overhead to just manage them individually?
Unfortunately, it was too long ago to remember the details of the differences between Compose and Swarm. I do remember it was a very trivial conversion.
I have used Docker Swarm in my homelab for years without big issues; you just have to be aware of its limitations. For example, I use SWAG for my reverse proxy, and it works better as a Compose deployment on an individual Docker node because then it can see the real incoming client IPs (the Swarm ingress routing mesh masks the source IP). All of the backend communication runs on internal networks, which helps isolate those services.
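The network side of that looks roughly like this; service names are placeholders, and the overlay/internal/attachable settings are the parts that matter:

# Sketch with placeholder names. Backend services sit on an internal overlay
# network; the proxy, running as plain Compose on one node, attaches to it.
networks:
  backend:
    driver: overlay
    internal: true      # no route out of this network; containers only reach each other
    attachable: true    # lets a standalone (non-Swarm) container such as the proxy join

services:
  some-app:             # placeholder backend service
    image: example/some-app
    networks:
      - backend

The proxy’s own Compose file on that node then just declares the same network with external: true and joins it.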
I like using Swarm at home because it is simple and easy while providing good scalability and security (yes, I know podman would be more secure, but I haven’t taken that plunge yet).
That being said, Docker Swarm isn’t used in the industry much. So if you are looking to expand on your IT skills, K8s is the way.