This quote from the article very much sums up my own experience of Claude:
In my recent experience at least, these improvements mean you can generate good quality code, with the right guardrails in place. However without them (or when it ignores them, which is another matter) the output still trends towards the same issues: long functions, heavy nesting of conditional logic, unnecessary comments, repeated logic – code that is far more complex than it needs to be.
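To make that concrete, here is a hypothetical before/after of the nesting pattern being described (the order-processing function and its fields are invented purely for illustration):

```python
# Hypothetical illustration of the "heavy nesting" pattern described above.
def order_total_nested(order):
    if order is not None:
        if order.get("items"):
            if order.get("status") == "open":
                return sum(i["price"] * i["qty"] for i in order["items"])
            else:
                return None
        else:
            return None
    else:
        return None

# The same logic flattened with guard clauses, which reviewers usually prefer.
def order_total_flat(order):
    if order is None or not order.get("items"):
        return None
    if order.get("status") != "open":
        return None
    return sum(i["price"] * i["qty"] for i in order["items"])
```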
AI coding tools are definitely helpful with boilerplate code, but they still require a lot of supervision. I am interested to see if these tools can be used to tackle tech debt, as often the argument for not addressing tech debt is a lack of time, or if they would just contribute to it, even with thorough instructions and guardrails.
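One concrete form such a guardrail can take is an automated check in CI rather than prose instructions. A minimal sketch using only Python's standard library, flagging the long, deeply nested functions mentioned above (the thresholds are arbitrary, chosen for illustration):

```python
import ast
import sys

MAX_LINES = 40    # arbitrary thresholds, for illustration only
MAX_NESTING = 3

BRANCHING = (ast.If, ast.For, ast.While, ast.Try, ast.With)

def nesting_depth(node, depth=0):
    """Return the deepest level of nested control flow under `node`."""
    depths = [
        nesting_depth(child, depth + isinstance(child, BRANCHING))
        for child in ast.iter_child_nodes(node)
    ]
    return max(depths, default=depth)

def check_file(path):
    """Print offending functions; return True when the file is clean."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read())
    clean = True
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            depth = nesting_depth(node)
            if length > MAX_LINES or depth > MAX_NESTING:
                print(f"{path}:{node.lineno} {node.name}: "
                      f"{length} lines, nesting depth {depth}")
                clean = False
    return clean

if __name__ == "__main__":
    # Exit non-zero so CI fails when generated code drifts too complex.
    results = [check_file(p) for p in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)
```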
I am interested to see if these tools can be used to tackle tech debt, as often the argument for not addressing tech debt is a lack of time, or if they would just contribute to it, even with thorough instructions and guardrails.
From my experience working with people who use them heavily, they introduce new ways of accumulating tech debt. Those projects usually end up with essays of feature-spec docs, prompts, state files (all in prose, of course), and so on. Those files are anywhere from hundreds to thousands of lines long, and there are a lot of them. There’s no way anybody is spending hours reading through enough markdown to fill twenty encyclopedia-sized books just to make sure it’s all up to date. At least, I can promise that I won’t be doing it, and neither will anyone I know (including those using AI this way).
These might be of interest to software developers, but it’s all just style; nothing here actually affects the computation. The problem I encounter with LLMs is that they are incapable of doing anything but rehearsing the same algorithms you get off of blogs. I can’t even successfully force them to implement a novel algorithm; they will simply deny that it is valid and revert to citing their training data.
I don’t see LLMs actually furthering the field in any real way (even by accident, since they can’t actually perform deductive reasoning).
I am interested to see if these tools can be used to tackle tech debt
Having it rewrite existing functioning code seems like a terrible idea. QA would at least have to re-test all functionality.
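That re-testing burden is the usual argument for pinning behaviour down first. A minimal sketch of a characterization (golden-master) test with pytest, where the `pricing` module, `legacy_price` function, and `golden_cases.json` file are all hypothetical stand-ins for whatever is about to be rewritten:

```python
import json
import pytest

from pricing import legacy_price  # hypothetical function slated for rewrite

# Input/output pairs recorded from the OLD implementation before the rewrite.
with open("golden_cases.json", encoding="utf-8") as f:
    CASES = json.load(f)

@pytest.mark.parametrize("case", CASES)
def test_rewrite_matches_recorded_behaviour(case):
    # Expected values were captured from the original code, so any
    # behavioural change in the rewritten version fails loudly here.
    assert legacy_price(**case["input"]) == case["expected"]
```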