Sleep and hibernate don’t work for me.
Hibernate just acts like a power loss: after shutting down, the saved state is lost and the laptop comes up with a fresh boot.
With Modern Sleep, kernels 6.11+ go to sleep fine but don’t manage to wake back up: the keyboard lights up, the fan spins up, the screen stays dark, and after about half a minute the laptop goes back to sleep. Kernel 6.10 is inconsistent; sometimes it wakes fine, sometimes it behaves like 6.11+. I’d say it works about 80% of the time.
I disabled Modern Sleep in BIOS and tried to enable S3, S2 and S2+S3 in BIOS instead. I set the corresponding sleep states in Linux as well, and no matter which one of the non-modern-sleep options I try, and no matter if I’m using kernel 6.10 or 6.15, it never manages to wake up (same symptoms as above).
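For reference, the sleep state is selected in Linux through the kernel’s standard sysfs interface; a quick way to check which states the kernel advertises and to switch between them (this just illustrates the mechanism, it won’t fix the wake-up issue by itself):

```shell
# Show the sleep states the kernel offers; the active one is in brackets.
# "s2idle" is Modern Sleep/Standby, "deep" is classic S3 suspend-to-RAM.
if [ -f /sys/power/mem_sleep ]; then
    cat /sys/power/mem_sleep
else
    echo "mem_sleep interface not available on this system"
fi

# Switch to S3 for the current boot (needs root):
#   echo deep | sudo tee /sys/power/mem_sleep
# Make it permanent by adding this to the kernel command line:
#   mem_sleep_default=deep
```

Note that `deep` only shows up in `mem_sleep` if the firmware actually exposes S3, so it’s worth re-checking this file after each BIOS change.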
It’s been a while since I was a student. When I was in university, I actually had one project that was then used in a real world context for at least a few years - by university students.
But most of my colleagues never did anything like that. Here are a few reasons why (and also why I didn’t do more):
Making an actually useful real-life application is hard. You quickly run into things like security, device-model-specific bugs, supporting users who use the thing wrong because you have no idea how to do proper UX, and so on.
Moving from making toy prototypes to real programs is not simple at all.
University teaches you stuff, but only the very broad-strokes basics. When you get your first real job you are usually in a team with some more senior people and they can help you move the university basics into a real-world context.
Most people can’t afford putting hundreds or thousands of hours of work into a project without anyone paying the bills.
Most people have to finance stuff like rent themselves. They are already balancing university, work and life. There’s not a ton of time left for yet another thing.
Every programmer I know has more ideas than they will ever finish in their entire life. And every programmer has a bunch of MBA people who tell them every time they meet about their amazing app idea (“You know, an app where you can buy things, but you buy by swiping right! It’s going to be the next Amazon! If you implement it, you can have 10% of the profits!”).
If anything, having too many ideas leads to switching your side projects once your last side project becomes too annoying.
The analogy is that on the one hand you have a corporation where you know who they are, where you know which laws they are governed by, where you know how to file a privacy complaint, where you know who to sue in case something goes wrong. And you don’t trust them.
Instead you choose to trust some rando from the internet, where anyone with a sane mind knows they will get screwed over.
I’d argue that changing who can see your data, whether from a large group to a smaller one or from a party you don’t trust to one you do, is precisely what protecting your privacy means.
It’s always astounding to me that people put more trust in an intangible rando from the internet than into organizations governed by law. Like those people who don’t accept mainstream medicine but eat random supplements they imported from India by the kilogram.
Also FWIW you can host your VPN, you do not have to rely on a commercial VPN provider.
Sure you can. And where does that traffic go?
If you e.g. host a VPN in your home network and you connect to it from your phone, and then you use this connection to access the internet, then your traffic will just be visible to your home network’s ISP instead of your phone’s ISP.
As I said, it doesn’t protect, it changes who can see the data.
Your ISP might not be able to see it, but your VPN provider will instead. VPN providers are hardly ever under any kind of regulation, except those run by secret services, of which there are many.
And there are more than enough VPNs that sell customer data while claiming to be amazing for your privacy.
It’s hard to test the whole system with all its special cases manually. At least if your project is more than a static website or something similarly trivial.
That’s why automated tests exist: they increase your testing coverage so that one change won’t break your system in unexpected ways, especially when you make system-wide changes like upgrading your framework or core libraries to a new version.
Tests can be messed up just like anything else can be messed up. Doesn’t mean that the concept itself is flawed.
If you only do things that people cannot mess up, then you’ll quickly end up not doing anything at all.
To me, the biggest benefit of testing shows up when refactoring. If I have decent test coverage and I change something major, tests help me see if I accidentally broke something along the way, which is especially helpful when I’m touching ancient code written by someone who left the company years ago.
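As a toy sketch of that idea (the `slugify` function here is made up for illustration, not from any real codebase): you pin down the current behavior in a test before refactoring, so any breakage surfaces immediately afterwards.

```python
def slugify(title: str) -> str:
    """Current implementation we want to refactor later."""
    return "-".join(title.lower().split())

def test_slugify_known_cases():
    # Captured from the existing behavior *before* the refactor.
    # If a rewrite changes any of these, the test fails right away.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Multiple   Spaces ") == "multiple-spaces"

test_slugify_known_cases()
```

The point isn’t the function itself: it’s that the assertions encode behavior the original author may never have written down, which is exactly what you need when the original author is long gone.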
I got myself an old EEE PC for exactly that purpose. (Except, substitute Python with Lua.)
8h battery life, cost me €20 and does what it’s supposed to. Just make sure you get one with an Atom N280 or better. The popular N270 is 32-bit only, and more and more programs are dropping 32-bit support. Some of them you can DIY compile for 32-bit, some you really don’t want to.
(For example, compiling Node on an Atom N270 takes around 3 days.)
I had one with an N270 first and replaced it with one with an N450 to get 64-bit.
Maxed it out with 2GB RAM, a cheapo €10 SSD that maxes out SATA and overclocked it to 2GHz.
It’s not fast by any stretch of the imagination, but it’s totally ok for editing text files with Kate and compiling with platformio.