Let me preface by saying I despise corpo LLM use and slop creation. I hate it.
However, it does seem like it could be an interesting, helpful tool if run locally in the CLI. I’ve seen quite a few people doing this. Again, it personally makes me feel like a lazy asshole when I use it, but it’s not much different from web searching commands every minute (other than that the data used in training it is obtained by pure theft).
Have any of you tried this out?
Playing with it locally is the best way to do it.
Ollama is great, and believe it or not I think Google’s Gemma is the best for local stuff right now.
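If you want to try it, the Ollama side is pretty painless from the terminal. Rough sketch below; the exact Gemma tag depends on what’s current in the Ollama library, and notes.txt is just a stand-in file:

```bash
# grab a Gemma build from the Ollama library (tag may differ; check the library page)
ollama pull gemma3

# interactive chat in the terminal
ollama run gemma3

# or one-shot it from a script (notes.txt is just an example file)
ollama run gemma3 "summarize this: $(cat notes.txt)"
```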
I use Continue with really simple configs and scripts. Rule of thumb: you can’t “correct” an AI, it does not “learn” from dialogue. Sometimes more context may generate a better output, but it will keep doing whatever is annoying you.
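If anyone wants a starting point, a minimal config pointing Continue at a local Ollama model looks roughly like this (field names are from memory, so double-check the Continue docs, and the model tag is whatever you’ve pulled):

```json
{
  "models": [
    {
      "title": "Gemma (local)",
      "provider": "ollama",
      "model": "gemma3"
    }
  ]
}
```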
I love sigoden/aichat a lot. It’s really intuitive and easy to put in bash scripts.
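For anyone curious, a rough sketch of the kind of thing that works in scripts, assuming aichat is installed and already pointed at a local model in its config:

```bash
#!/usr/bin/env bash
# pipe command output into aichat and ask about it
# (assumes aichat is on PATH and configured for a local model)
journalctl -b -p err | aichat "what do these errors mean and what should I check first?"

# a tiny hypothetical helper for quick one-liners
explain() {
  aichat "briefly explain this shell command: $*"
}
explain "rsync -avz --delete src/ dest/"
```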
Check out LM Studio and/or Anything LLM for quick local experimenting.
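Worth noting LM Studio can expose an OpenAI-compatible server on localhost, so once a model is loaded you can poke it from the terminal too. The port below is the usual default and "local-model" is just a placeholder name; adjust to whatever your setup shows:

```bash
# query LM Studio's local OpenAI-compatible endpoint (default port is typically 1234)
curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local-model",
    "messages": [{"role": "user", "content": "Give me a one-line explanation of xargs."}]
  }'
```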