Many companies are tying monitored AI usage to performance evaluations. So yeah, we're going to be "willing" under that pretext.
Good luck finding one senior dev willing to debug a multi-thousand-line application produced by genAI…they're going to be reluctant, at best, because the code is slop.
MBAs and C-suites keep trying to manufacture consent for this tech so their stock portfolios outperform, and the madmen are willing to sacrifice your jobs to do it!
I'm a slow adopter of new technologies like AI LLMs. My reasoning is that if it turns out to actually be a good product, it will eventually prove itself, and the early adopters can be the "beta testers," so to speak. But if it turns out to be a bad product, then I won't have wasted my time on something that isn't worthwhile. Maybe a day comes when I start using these tools, but they clearly just aren't all that useful in their current form. In all honesty, I doubt they will ever be useful enough for me to consider them worth learning, and they definitely aren't today.
I'm interested to see, in 5 years or so, once all the hype has hopefully subsided, what actual uses remain and how they look.
Yeah, if anything, all the people screeching that you have to adopt now or you'll be replaced by those who do just destroy their own credibility.
Agreed. To make it a bit more general, whenever I see people claiming to predict the future with absolute certainty and confidence, that to me is just a sign they are idiots and shouldn't be listened to. Definitely had a lot of those at past companies I've worked for. A lot of the time, they're trying to gaslight people into believing in their version of the future so they can sell us garbage (products, stock price, etc.). They'll always get some fools to believe them, of course.
They talk like there’s training needed, that it’s some learned skill. It’s just a means to blame the worker instead of the AI for not boosting productivity.
The number-one frustration, cited by 45% of respondents, is dealing with “AI solutions that are almost right, but not quite,” which often makes debugging more time-consuming. In fact, 66% of developers say they are spending more time fixing “almost-right” AI-generated code.
Not surprising at all. When you write code, you’re actually thinking about it. And that’s valuable context when you’re debugging. When you just blindly follow snippets you got from some random other place, you’re not thinking about it and you don’t have that context.
So it's easy to see how this could lead to a net productivity loss. Spend more time writing it yourself and less time debugging, or let something else write it for you quickly but spend a lot of time debugging. And on top of it all, edge cases go unconsidered and valuable design-requirement context can get lost too.
I also always find that outsourcing is risky, whether it's to other devs or to some AI, because it requires that you understand the whole problem upfront. In 99% of cases, when I'm implementing something myself, I run into some edge case I hadn't considered before, where an important decision has to be made. And a junior or an LLM is unlikely to spot all these edge cases or to make larger decisions that might affect the whole codebase.
I can try to spend more time upfront coming up with all these corner cases before starting on the implementation, but that quickly stops being economical, because it takes me more time than when I can look at the code.
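As an illustration of the "almost right" problem described above, here is a small hypothetical Python sketch (invented for illustration, not taken from the article or the survey): a pagination helper that looks plausible and handles the obvious inputs, but quietly does the wrong thing on the edge cases nobody specified.

# Hypothetical sketch of "almost right" generated code: it passes a quick read
# and works on the happy path, but the edge cases are where the debugging time goes.

def paginate(items, page, page_size=10):
    """Return one page of items, with 1-indexed page numbers."""
    start = (page - 1) * page_size
    return items[start:start + page_size]

data = list(range(25))

print(paginate(data, 2))    # [10, ..., 19] -- the happy path looks correct
print(paginate(data, 4))    # [] -- past the end; should this raise instead? nobody decided
print(paginate(data, -1))   # [5, ..., 14] -- a negative page silently returns a middle slice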
I would imagine SO has seen a significant drop in site traffic in the past few years.
Developers remain willing but reluctant
Management: “Maybe we’re not pushing hard enough”
I have genuinely caught myself thinking that if management so desperately wants us to use these AI tools, they need to unearth more budget so we can onboard another person to compensate for the productivity hit.
Unfortunately, they believe the opposite to be true: that we just need to use AI tools and then our productivity will be through the roof…