• 8 Posts
  • 35 Comments
Joined 7 months ago
Cake day: June 18th, 2025





  • relic4322@lemmy.ml to Privacy@lemmy.ml · *Permanently Deleted*
    7 months ago

    There is so, so much, and they do get caught, and when they do we get a peek into how invasive they are. As someone who has had to worry about being targeted by intelligence agencies and nation-states, I was completely blindsided by corporate/capitalist surveillance.

    For example, look at this action by Meta, where they broke out of security sandboxes and exploited protocols in order to tie your browsing history (even private browsing) back to the identity saved in their databases back in Meta land:

    https://www.theregister.com/2025/06/03/meta_pauses_android_tracking_tech/

    The amount of data being harvested, sold, and resold is absurd, and the greater threat is not just that they are exploiting you; it's that they don't care who the data gets sold to. Bad actors (criminals, etc.) can and will purchase information they can use against you.

    So, consider the unintentional ramifications of all that info being harvested and available, in addition to the intentional ramifications of hyper-greed. Couple that with the amount of compute now available and you will see that you do not need to be a person of interest: everyone is a data point that can be, and will be, exploited.

    I would encourage everyone to take their privacy seriously.




  • There is a lot, and there are a lot of levels. I am working on this now as well, escalating from where I was; it's a learning process. Too much to type in a single comment/response.

    If you would like more info on removing your info from the internet, reducing the amount of spyware on your Android phone, de-Googling yourself, or limiting how much info you spill while you browse, we can connect and I can share what I have been doing. I've got plenty I still need to do beyond this, but I am happy to share my lessons learned, as it were.











  • Ton of comments, and I haven't read them all, but I wanted to ask if you really meant popular, or if you wanted something for a specific reason: easy for people new to Linux, good for desktops, etc.

    I don't really use GUIs on Linux, except for when I want to have a fancy-pants riced network-monitor type situation. I am a big fan of NixOS, except for Python dev stuff. Big fan of being able to clone a machine or recover a machine with a single config file.
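
    To illustrate the "single config file" point: a minimal, hypothetical /etc/nixos/configuration.nix sketch (hostname, username, and package list here are made-up examples, not from any real machine). Copy this file, plus the generated hardware-configuration.nix, to a fresh install and the same system comes back:

    { config, pkgs, ... }:
    {
      imports = [ ./hardware-configuration.nix ];

      networking.hostName = "mybox";      # example hostname
      time.timeZone = "America/New_York";

      users.users.alice = {               # example user
        isNormalUser = true;
        extraGroups = [ "wheel" ];        # sudo access
      };

      # everything installed system-wide lives in this one list
      environment.systemPackages = with pkgs; [ git htop ];

      system.stateVersion = "25.05";      # match your install media
    }

    Rebuilding or recovering is then a single `sudo nixos-rebuild switch`; the config file is the machine.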




  • You are right. It's the choice I've made. I've decided that I would rather have the lockdown, because I no longer think that being anonymous means anything. It's my opinion that, due to the rise and ease of applying AI/ML and the access to compute, we are all data points. So it's no longer a matter of blending in.

    TL;DR: I weighed the two and chose this.




  • Sure thing, here you are:

    services:
      pihole:
        container_name: pihole
        image: pihole/pihole:latest
        ports:
          # DNS Ports
          - "53:53/tcp"
          - "53:53/udp"
          # Default HTTP Port
          - "8082:80/tcp"
          # Default HTTPs Port. FTL will generate a self-signed certificate
          - "8443:443/tcp"
          # Uncomment the below if using Pi-hole as your DHCP Server
          #- "67:67/udp"
          # Uncomment the line below if you are using Pi-hole as your NTP server
          #- "123:123/udp"
        environment:
          # Set the appropriate timezone for your location from
          # https://en.wikipedia.org/wiki/List_of_tz_database_time_zones, e.g:
          TZ: 'America/New_York'
          # Set a password to access the web interface. Not setting one will result in a random password being assigned
          FTLCONF_webserver_api_password: 'false cat call cup'
          # If using Docker's default `bridge` network, the DNS listening mode should be set to 'all'
          FTLCONF_dns_listeningMode: 'all'
          FTLCONF_dns_upstreams: '127.0.0.1#5335' # Unbound
        # Volumes store your data between container upgrades
        volumes:
          # For persisting Pi-hole's databases and common configuration file
          - './etc-pihole:/etc/pihole'
          # Uncomment the below if you have custom dnsmasq config files that you want to persist.
          # Not needed for most starting fresh with Pi-hole v6. If you're upgrading from v5 and
          # have used this directory before, you should keep it enabled for the first v6 container
          # start to allow for a complete migration; it can be removed afterwards.
          # Needs environment variable FTLCONF_misc_etc_dnsmasq_d: 'true'
          #- './etc-dnsmasq.d:/etc/dnsmasq.d'
        cap_add:
          # See https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
          # Required if you are using Pi-hole as your DHCP server, else not needed
          - NET_ADMIN
          # Required if you are using Pi-hole as your NTP client to be able to set the host's system time
          - SYS_TIME
          # Optional, if Pi-hole should get some more processing time
          - SYS_NICE
        restart: unless-stopped
      unbound:
        container_name: unbound
        image: mvance/unbound:latest # Change to 'mvance/unbound-rpi:latest' on Raspberry Pi
        # use pihole network stack
        network_mode: service:pihole
        volumes:
          # main config
          - ./unbound-config/unbound.conf:/opt/unbound/etc/unbound/unbound.conf:ro
          # custom config (unbound.conf.d/your-config.conf). unbound.conf includes these via wildcard include
          - ./unbound-config/unbound.conf.d:/opt/unbound/etc/unbound/unbound.conf.d:ro
          # log file
          - /srv/docker/pihole-unbound/unbound/etc-unbound/unbound.log:/opt/unbound/etc/unbound/unbound.log
        restart: unless-stopped
    
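    Since the compose file points Pi-hole's upstream at 127.0.0.1#5335, the mounted unbound.conf needs Unbound listening on 5335 rather than the default 53 (which Pi-hole already holds in the shared network namespace). A minimal sketch, not my exact file; the hardening and cache options are common defaults you can adjust:

    server:
      # non-standard port so it doesn't collide with Pi-hole on 53
      interface: 127.0.0.1
      port: 5335
      do-ip4: yes
      do-udp: yes
      do-tcp: yes
      # basic spoofing/DNSSEC hardening
      harden-glue: yes
      harden-dnssec-stripped: yes
      # modest performance/privacy defaults
      prefetch: yes
      qname-minimisation: yes

    # wildcard include for drop-in configs (matches the compose volume mount)
    include: "/opt/unbound/etc/unbound/unbound.conf.d/*.conf"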

    I am relatively new to Docker as well, tbh. I did a lot with virtualization and a lot with Linux and never bothered, but I totally get the use case now, ha. Just an FYI: if you use Docker on Windows it runs slower, as it has to leverage the Windows Subsystem for Linux (WSL) and a slightly different Docker engine (forget which one). So Linux is your best bet. If you do want to use a full VM, I found QEMU to be the best option for the least resource usage.
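
    Once the stack is up, a quick sanity check from the host looks something like this (a sketch, assuming `dig` and `curl` are installed; port numbers match the compose file above, and Unbound itself is only reachable from inside the shared network namespace, not from the host):

    # start the stack in the background
    docker compose up -d
    # query Pi-hole's DNS on the host's port 53
    dig @127.0.0.1 example.com +short
    # check the Pi-hole web interface on the remapped HTTP port
    curl -I http://127.0.0.1:8082/admin/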