• 0 Posts
  • 27 Comments
Joined 7 months ago
Cake day: June 12th, 2025



  • I enjoyed my brief time with gRPC and protobuf, when I was trying them out, and I’d happily use them again, but I think it’s obvious why JSON and REST are more dominant.

    For one, gRPC works great for simple server/client apps, but it makes them VERY tightly coupled. If your product is an API itself, open for public use, tight coupling is a hindrance more than a benefit.

    And more to the point… have you ever tried to DEBUG gRPC traffic? The efficiency of gRPC is top notch; it optimizes for serialization, deserialization, and bandwidth, all at once. But JSON is human readable, and when you throw in HTTP compression, the efficiency is close enough to protobuf to serve the vast majority of scenarios. JSON is just far more practical.
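
    The "close enough with compression" point is easy to sanity-check. A rough sketch (the record shape here is hypothetical, just something repetitive like a typical API response):

    ```python
    import gzip
    import json

    # Hypothetical payload of repetitive records, like a typical list endpoint.
    records = [{"id": i, "name": f"user-{i}", "active": i % 2 == 0} for i in range(500)]

    payload = json.dumps(records).encode("utf-8")
    compressed = gzip.compress(payload)

    # The raw JSON stays human-readable for debugging; gzip recovers most of
    # the size overhead you'd otherwise pay versus a binary format.
    print(f"raw: {len(payload)} bytes, gzipped: {len(compressed)} bytes")
    ```

    Repetitive JSON compresses extremely well, which is why the bandwidth gap versus protobuf rarely matters in practice.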


  • Unfortunately, the alternatives are really lacking. JetBrains Rider REALLY feels underbaked. No deal-breaking issues, but lots of little low-impact ones, and lots of design decisions that go against common conventions, for no apparent reason. The “Visual Studio Mode” doesn’t really help.

    On top of that, I’ve had several issues with RUNNING Rider, on account of being on Bazzite, an immutable distro. It was fine on Mint, but Mint had its own troubles with my NVidia card.


  • Not QUITE a program, but I’d have to say my own little GBA ROM hacks for the original Fire Emblem. On account of the following story…

    IIRC, it was 2007, and I was a senior in high school, reorganizing some of the stuff for the robotics team, in the cabinets in the big science classroom where we met. There were some freshmen interested in the team (season wouldn’t start for a while yet) who’d taken to hanging out there, after school.

    They all had laptops and I recognized the menu theme when one of them pulled up Fire Emblem in an emulator, from across the room, and immediately called out “Who’s playing Fire Emblem?”. When I went over and saw he was using VisualBoyAdvance, it occurred to me what I had in my pocket. Or rather what happened to be ON the flash drive in my pocket.

    At the time, I didn’t have my own laptop, so my flash drive had years’ worth of random crap on it. And I’d spent a LOT of time over the years tinkering with ROMs and VBA. In addition to a few copies of different hacked ROMs and save files, I had a portable hex editor, and a LOT of text files with hex tables and memory maps and other research I’d collected.

    So, yeah, I pulled out the flash drive, said “Wanna see something cool?” and proceeded to apply as many crazy hacks as I could think of, in the most obtuse manner possible, just editing hex values directly in memory as the game was running. Free XP, free items, end-game equipment, sprite swaps, etc. At one point, one of them said something like “What kind of wizard ARE you?!”

    It’s what comes to mind for me when you say “cool” because I like to think I inspired those kids to get into software and programming themselves, or at least consider it as an option. They certainly stuck around with the team for the rest of the year. Also, it made ME realize how much I’d grown just by tinkering and being curious, and how much you can accomplish through incremental effort.











  • I’d say “Separation of Responsibilities” is probably my #1. Others here have mentioned that you shouldn’t code for future contingencies, and that’s true, but a solid baseline of Separation of Responsibilities means you’re setting yourself up for future refactors without having to anticipate and plan for them all now. I.E. if your application already has clear barriers between different small components, it’s a lot easier to modify just one or two of them in the future. For me, those barriers mean horizontal layers (I.E. data-storage, data-access, business logic, user-interfacing) and vertical slicing (I.E. features and/or business domains).
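
    A minimal sketch of what that horizontal layering buys you (all names here are hypothetical, just for illustration): the storage details live in one class, the business rules in another, and swapping or refactoring either one doesn't touch the other.

    ```python
    class OrderRepository:
        """Data-access layer: knows how orders are stored, nothing else."""

        def __init__(self):
            self._orders = {}  # stand-in for a real database

        def save(self, order_id, order):
            self._orders[order_id] = order

        def get(self, order_id):
            return self._orders.get(order_id)


    class OrderService:
        """Business-logic layer: rules live here; storage is injected."""

        def __init__(self, repository):
            self._repo = repository

        def place_order(self, order_id, total):
            if total <= 0:
                raise ValueError("order total must be positive")
            self._repo.save(order_id, {"id": order_id, "total": total})
            return self._repo.get(order_id)
    ```

    Later, replacing the in-memory repository with a real database client means changing one class, not hunting SQL calls scattered through the business logic.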

    Next, I’ll say “Self-Documenting Code”. That is, you should be able to intuit what most code does by looking at how it’s named and organized (ties into separation of responsibilities from above). That’s not to say that you should follow Clean Code. That takes the idea WAY too far: a method or class that has only one call site is a method or class that you should roll into that call site, unless it’s a separation of responsibility thing. That’s also not to say that you should never document or comment, just that those things should provide context that the code doesn’t, for things like design intent or non-obvious pitfalls, or context about how different pieces are supposed to fit together. They should not describe structure or basic function, those are things that the code itself should do.
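
    A toy before/after of what I mean (both versions are hypothetical and compute the same thing): the second one needs no comment describing its basic function, because the names already carry it.

    ```python
    # Before: what are d and r? What is 0.25? You can't intuit anything.
    def calc(d, r):
        return d * r * 0.25


    # After: the names document structure and intent; no comment needed
    # to explain WHAT it does, only (if anything) WHY.
    QUARTERLY_FRACTION = 0.25

    def quarterly_interest(principal, annual_rate):
        return principal * annual_rate * QUARTERLY_FRACTION
    ```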

    I’ll also drop in “Human Readability”. It’s a classic piece of wisdom that code is easier to write than it is to read. Even if you’re only coding for yourself, if you want ANY amount of maintainability in your code, you have to write it with the intent that a human is gonna need to read and understand it, someday. Of course, that’s arguably what I already said with both of the above points, but for this one, what I really mean is formatting. There’s a REASON most languages ignore most or all whitespace: it’s not that it’s not important, it’s BECAUSE it’s important to humans that languages allow for it, even when machines don’t need it. Don’t optimize it away, and don’t give control over when and where to use it to a machine. Machines don’t read, humans do. I.E. don’t use linters. It’s a fool’s errand to try and describe what’s best for human readability, in all scenarios, within a set of machine-enforceable rules.

    “Implement now, Optimize later” is a good one, as well. And in particular, optimize when you have data that proves you need it. I’m not saying you should intentionally choose inefficient implementations just because they’re simpler, but if they’re DRASTICALLY simpler… like, is it really worth writing extra code to dump an array into a hashtable in order to do repeated lookups from it, if you’re never gonna have more than 20 items in that array at a time? Even if you think you can predict where your hot paths are gonna be, you’re still better off just implementing them with the KISS principle, until after you have a minimum viable product, cause by then you’ll probably have tests to support you doing optimizations without breaking anything.
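
    The hashtable example concretely (the format list is hypothetical): at ~20 items, the linear scan is already trivially fast, and the "optimized" version is extra code and an extra name for no measurable win.

    ```python
    # Small, fixed collection — a linear scan is plenty fast here.
    SUPPORTED_FORMATS = ["json", "xml", "csv", "yaml", "toml"]

    def is_supported_simple(fmt):
        return fmt in SUPPORTED_FORMATS  # O(n), but n is ~5

    # The "optimization": O(1) lookups via a set. Extra code, extra
    # state to keep in sync, zero observable benefit at this scale.
    _FORMAT_SET = set(SUPPORTED_FORMATS)

    def is_supported_optimized(fmt):
        return fmt in _FORMAT_SET
    ```

    If profiling later shows this lookup is actually hot, the swap is a two-line change, and by then your tests prove it didn't break anything.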

    I’ll also go with “Don’t be afraid to write code”, or alternatively “Nobody likes magic”. If I’m working on a chunk of code, I should be able to trace exactly how it gets called, all the way up to the program’s entry point. Conversely, if I have an interface into a program that I know is getting called (like, say, an API endpoint) I should be able to track down the code it corresponds to by starting at the entry point and working my way down. None of this “Well, this framework we’re using automatically looks up every function in the application that matches a certain naming pattern and figures out the path to map it to during startup.” If you’re able to write 30 lines of code to implement this endpoint, you can write one more line of code that explicitly registers it to the framework and defines its path. Being able to definitively search for every reference to a piece of code is CRITICAL to refactoring. Magic that introduces runtime-only references is a disaster waiting to happen.
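
    A toy sketch of what explicit registration looks like (this isn't any particular framework's API, just an illustration): every endpoint appears in one greppable line, so “find all references” actually finds it.

    ```python
    # Explicit route table: no naming-convention scanning at startup.
    routes = {}

    def register(path, handler):
        routes[path] = handler

    def get_order(request):
        return {"status": "ok", "order_id": request["id"]}

    # The "one more line" — a plain, searchable reference from the
    # entry point down to the handler.
    register("/orders/get", get_order)

    def dispatch(path, request):
        return routes[path](request)
    ```

    Now renaming `get_order` is a mechanical refactor: every reference is visible to the compiler, the IDE, and a plain text search.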

    As an honorable mention: it’s not really software design, but it’s something I’ve had to hammer into co-workers and tutees, many many times, when it comes to debugging: “Don’t work around a problem. Work the problem.” It boggles my mind how many times I’ve been able to fix other people’s issues by being the first one to read the error logs, or look at a stack trace, or (my favorite) read the error message from the compiler.

    “Hey, I’m getting an error ‘Object reference not set to an instance of an object’. I’ve tried making sure the user is logged in and has a valid session.”

    “Well, that’s probably because you have an object reference that’s not set to an instance of an object. Is the object reference that’s not set related to the user session?”

    “No, it’s a ServiceOrder object that I’m trying to call .Save() on.”

    “Why are you looking at the user session then? Is the service order loaded from there?”

    “No, it’s coming from a database query.”

    “Is the database query returning the correct data?”

    “I don’t know, I haven’t run it.”

    I’ve seen people dance around an issue for hours, by just guessing about things that may or may not be related, instead of just taking a few minutes to TRACE the problem from its effect backwards to its cause. Or because they never actually IDENTIFIED the problem, so they spent hours tracing and troubleshooting, but for the wrong thing.



  • The standard answer is that the odds of the first roll don’t change the odds of the second roll, the second roll still has a 1/20 chance of a 1, no matter what the first roll is.

    The more thorough answer is that it’s a misunderstanding of what probabilities are. Yes, there’s a 1/400 chance of rolling 2 1s, but by the time you roll the first die and get a 1, you’re not talking about that problem anymore. You’ve introduced new information to the problem, and thus have to change your calculation. There’s a 1/20 chance of rolling 2 1s after you’ve already rolled one. Let’s calculate it…

    So, there’s 400 ways 2 dice can fall, yes, and there’s only 1 way that they can both fall on 1. However, there’s 20 ways that the first die can fall on 1, one for each possible fall of the second die. So, when we say that that has already happened, we have to eliminate 380 of those 400 die rolls; those are no longer possible. That leaves us with only 20 ways that the second die can fall, and only 1 of those is a 1. So the odds of rolling a 1 on the second die, after already rolling a 1 on the first die, is 1/20.

    We can also calculate it differently. What are the odds of the second die falling on 1? Cause that’s the one we care about, really. And there’s 20 ways that can happen, one for each possible fall of the first die. So the odds of the second die falling on 1, when rolling 2 dice is 20/400, or 1/20.
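
    Both calculations are small enough to verify by brute-force enumeration, if anyone doubts the arithmetic:

    ```python
    from fractions import Fraction
    from itertools import product

    # Enumerate all 400 ways two d20s can fall.
    outcomes = list(product(range(1, 21), repeat=2))

    # Unconditional: both dice show 1 in exactly 1 of 400 outcomes.
    p_both = Fraction(sum(1 for a, b in outcomes if a == 1 and b == 1),
                      len(outcomes))

    # Conditional: eliminate the 380 outcomes where the first die isn't 1,
    # then count how many of the remaining 20 have a second 1.
    remaining = [(a, b) for a, b in outcomes if a == 1]
    p_second = Fraction(sum(1 for a, b in remaining if b == 1),
                        len(remaining))

    print(p_both, p_second)  # 1/400 and 1/20
    ```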



  • These all sound like good improvements to WASM as a binary target, but… how do we STILL not have access to any kind of I/O? How is that not the #1 priority? No access to the DOM, no access to local storage, no access to networking… WASM will continue to be borderline useless until it can actually do the things an application needs to do, without having to implement some hackjob JS interop layer.