I’ve tried using an LLM for coding - specifically Copilot for VS Code. About 4 out of 10 times it will accurately generate code - which means I spend more time troubleshooting, correcting, and validating what it generates than actually writing code.
Like all tools, it is good for some things and not others.
“Make me an OS to replace Windows” is going to fail; “Tell me the terminal command to rename a file” will succeed.
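For example, the rename request is the kind of one-liner these tools reliably get right (PowerShell shown since it comes up later in the thread; the file names are just placeholders):

```powershell
# Rename old.txt to new.txt in the current directory
# (the bash equivalent is `mv old.txt new.txt`)
Rename-Item -Path .\old.txt -NewName new.txt
```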
It’s up to the user to apply the tool in a way that is useful. A person saying ‘My hammer is terrible at making screw holes’ doesn’t mean the hammer is a bad tool; it tells you the user is an idiot.
I feel like it’s not that bad if you use it for small things, like single lines instead of blocks of code - a glorified autocomplete.
Sometimes it’s nice not to use it, though, because it can feel distracting.
truly who could have predicted that a glorified autocomplete program is best at performing autocompletion
seriously, the world needs to stop calling it “AI” - it IS just autocomplete!
Apparently Claude 3.7 Sonnet is the best one for coding.
I like using GPT to generate PowerShell scripts; surprisingly, it’s pretty good at that. It’s a small task, so it’s unlikely to go off the deep end.
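For instance, a prompt like “copy today’s logs into a dated backup folder” gets you something like this (the paths here are made-up placeholders, not anything standard):

```powershell
# Copy every .log file from a source folder into a dated backup folder.
# $source and $dest are hypothetical paths - adjust to your environment.
$source = 'C:\Logs'
$dest   = "C:\Backups\$(Get-Date -Format 'yyyy-MM-dd')"

# Create the destination folder (-Force suppresses the error if it already exists)
New-Item -ItemType Directory -Path $dest -Force | Out-Null

# Pipe the matching files straight into Copy-Item
Get-ChildItem -Path $source -Filter '*.log' | Copy-Item -Destination $dest
```

Small, self-contained, and easy to eyeball for correctness - exactly the scope where it shines.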