If so, I’d like to know the answers to these questions:
- Do you use an AI code autocomplete, or do you type into a chat?
- Do you consider the environmental damage that using AI can cause?
- What type of AI do you use?
- What do you usually ask AIs to do?
No
I don’t.
I played around with it twice, but both times it gave me nonfunctioning code. It seemed stupid to use it when I’d still have to go back and rewrite it anyway.
No, I don’t. I often have to fix the work of my colleague and my boss, who do use it. I often have to gently point out to my boss that just because the chatbot outputs results for things, doesn’t mean those results are accurate or helpful.
I use AI as a rubber duck, to complement the rubber ducks on my desk when they don’t give enough feedback. So its use is mostly conceptual; I find the “thinking” output of reasoning models perhaps more useful than the actual answer, because it raises questions about edge cases I might not have considered.
As for code generation, I hate it. It outputs garbage, forgets things, hallucinates, and whatever it writes I’ll have to rewrite anyway just to make it compile.
As I’m fairly isolated at work, I think it makes a good pair-programming partner, so to speak, offering suggestions that I can take into consideration and research heavily if I think one is good.
My answer (OP): I use AI for short, small questions, like things I once knew but forgot, such as “how to sort an array”, or about Linux commands, which I can test right away or check against the man page to make sure they work as intended.
I care about my privacy and the environment, so I use a local AI (16B) for most of my questions, but for more complex things where I really need all the help I can get, I use DeepSeek Coder v3.1 (671B) in the cloud via Ollama.
I don’t use AI code autocomplete because it annoys me and doesn’t let me think about the code; I like to ask only when I think I need it.
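For reference, the kind of quick “I forgot this” question I mean is easy to script against a local model. Here’s a minimal sketch using the ollama Python client; the model tag is an assumption, so swap in whatever 16B model you actually run:

```python
# Minimal sketch: asking a locally hosted Ollama model a quick question.
# Assumes the Ollama daemon is running and the tag below has been pulled;
# "deepseek-coder-v2:16b" is just an illustrative 16B tag, not a prescription.
import ollama

response = ollama.chat(
    model="deepseek-coder-v2:16b",  # assumed local model tag
    messages=[
        {"role": "user", "content": "How do I sort an array of ints in Python, descending?"},
    ],
)
print(response["message"]["content"])
```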
- I don’t use AI code autocomplete. It gave me nonsense and interrupted my thought process while I wrote code; standard non-AI autocomplete is much better. I tried using chat to generate medium-sized logic (up to 100 lines). Mostly it doesn’t work, or refining the prompt takes more time than writing the code myself, so I stopped using it for medium-sized tasks. I do use it for small tasks, up to 20 lines, where I need an example of how to use a specific API. What it does well is generating test cases (not the tests themselves). I once tried to have it summarize a set of made-up requirements (I can elaborate if anyone is interested); it failed miserably, which instantly gave me an idea of how far we are from AGI.
- I don’t consider it for my own usage, since I use it maybe once or twice a week on average. But generally, I think it’s a huge waste of resources: not only natural, but financial and human.
- Claude 4 Sonnet at work; Mistral for personal curiosity episodes.
- I already covered the work part. For personal use, mostly “searching” for random info I couldn’t find via DDG, or offloading social rituals such as congratulations.
I use AI as a sort of junior developer: I know the problem domain but am a bit too lazy to write all the code. I develop on a remote Linux VM with tmux, nvim, and opencode. I keep the AI’s tmux session and my development session on different projects, make sure I have a clean git tree, and then detach from my session into the AI session to check the progress.
The AI makes mistakes, so a cautious review of all the code is needed.
I mostly use Claude, and I can NOT recommend any Kimi K2 model. If you need something OK-ish and cheap, use gpt-oss 120 via OpenRouter.
AI is a power tool: if you don’t know what you’re doing, you get burned.
I tried to use AI to help me code; it only gave me trash.
It’s sometimes useful for conceptual questions, but I don’t trust code generated by it.
When I use it, I use it to create single functions that have known inputs and outputs.
If absolutely needed, I use it to refactor old shitty scripts that need to look better and be used by someone else.
I always do a line-by-line analysis of what the AI is suggesting.
Any time I have leveraged AI to build out a full script with all desired functions at once, I’ve ended up deleting most of the generated code. Context and “reasoning” can actually ruin the result I’m trying to achieve. (Some models just love to add command-line switch handling for no reason. That can fundamentally change how an app is structured, and it’s not always desired.)
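One cheap way to hold the “known inputs and outputs” line from the reply above is to pin the generated function to a handful of I/O pairs before the line-by-line read. A sketch with made-up names, where `slugify` stands in for whatever single function the AI produced:

```python
# Sketch: pin a generated function to known input/output pairs before trusting it.
def slugify(title: str) -> str:  # pretend this body came from the model
    return "-".join(title.lower().split())

# Known inputs and outputs, written by me, not by the model.
cases = [
    ("Hello World", "hello-world"),
    ("  spaced   out  ", "spaced-out"),
    ("already-fine", "already-fine"),
]

for given, expected in cases:
    got = slugify(given)
    assert got == expected, f"slugify({given!r}) -> {got!r}, expected {expected!r}"
print("all cases pass; now read it line by line anyway")
```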
The only code-generation assistance I use comes in the form of compilers. For fun, I tried to use the free version of ChatGPT to replicate an algorithm I recently designed, and after about half an hour I could only get it to produce the same trivial algorithms you find on blog posts, even when feeding it much more sophisticated approaches.
Single function text prediction, class boilerplate, some refactoring.
It’s decent when you inherit outrageously bad legacy code and you want better comments and variable names than “A, x, i”, etc.
You do have to do it within an editor that highlights all changes so you can carefully review, though.
Not so much a productivity boost, but rather a bad intern you can delegate boring, easy tasks to. I’d rather review that kind of code than write it, but if you’re the other way around, it’s a punishment.
Maybe renaming single-letter variables is one thing I can see being easier to review than to do myself.
For any other kind of refactoring, though, IDE refactoring tools are instantaneous and deterministic.
When the code you have to deal with is classic ASP (not .NET) created by apes throwing shit at a wall, the kind of holistic bullshit an AI makes is an improvement.
As for actual coding, I sometimes use ChatGPT to write SDK glue boilerplate or to learn about API semantics. For this kind of thing it can be much more productive than scanning API docs, trying to piece together how to write something simple. For example, writing a function to check if an S3 bucket is publicly accessible would have taken me a lot longer without ChatGPT.
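For context, that S3 check comes down to a couple of boto3 calls. This is a hedged sketch of one way to do it, not the actual ChatGPT output, and it ignores account-level public access block settings; the bucket name is made up:

```python
# Sketch: rough check of whether an S3 bucket is publicly accessible via its
# bucket policy or its ACL. Account-level settings can still override this.
import boto3
from botocore.exceptions import ClientError

def bucket_looks_public(bucket: str) -> bool:
    s3 = boto3.client("s3")
    # 1. Policy status: S3 itself evaluates whether the bucket policy is public.
    try:
        status = s3.get_bucket_policy_status(Bucket=bucket)
        if status["PolicyStatus"]["IsPublic"]:
            return True
    except ClientError:
        pass  # no bucket policy attached
    # 2. ACL grants to the AllUsers / AuthenticatedUsers groups also count.
    acl = s3.get_bucket_acl(Bucket=bucket)
    public_groups = {
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    }
    return any(grant["Grantee"].get("URI") in public_groups for grant in acl["Grants"])

print(bucket_looks_public("my-example-bucket"))  # hypothetical bucket name
```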
In short: it has basically replaced Google and Stack Overflow in my workflow, at least as my first information source. I still have to fall back to a real search engine sometimes.
I do not give LLMs access to my source code tree.
Sometimes I’ll use it for ideas on how to write specific SQL queries, but I’ve found you have to be extremely careful with this use case because ChatGPT hallucinates some pretty bad SQL sometimes.
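A cheap guard for that SQL use case is to dry-run the suggested query against an empty in-memory copy of the schema, so hallucinated tables or columns fail loudly before they reach real data. A sketch using sqlite3, with a made-up schema and query, and only useful insofar as your production dialect is close to SQLite’s:

```python
# Sketch: dry-run an AI-suggested query against an empty in-memory copy of the
# schema. Hallucinated tables/columns raise OperationalError here, not in prod.
import sqlite3

schema = "CREATE TABLE orders (id INTEGER, user_id INTEGER, total REAL);"
suggested_query = (  # pretend this came from ChatGPT
    "SELECT user_id, SUM(total) AS spent FROM orders "
    "GROUP BY user_id ORDER BY spent DESC LIMIT 10;"
)

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
try:
    conn.execute(suggested_query)  # empty result set, but syntax/names are checked
    print("query at least parses against the schema")
except sqlite3.OperationalError as err:
    print(f"rejected: {err}")
```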
I mostly dislike using AI to code. The one exception I recently found was when I was fighting with a Python script and didn’t understand why it was behaving the way it did. I asked AI for possible causes and pretty quickly managed to fix it. Sometimes it’s just nice to have some possible causes for a bug listed so you can check them out.
I use Continue in VSCode hooked up to Ollama or Mistral. Sometimes I just ask a chat to “make a script/config that does <my MVP of the project, maybe even less>”.
How much I use it depends on how little I’m invested. My rule is that I try to correct a bad output ONCE; I cannot argue it into fucking getting it right.
I prefer net-new code and “add this feature” tasks. Ironically, good refactoring goes a long way: the less it has to adjust, the better, and the less I have to review, the better.