actually-a-cat
- 2 Posts
- 5 Comments
actually-a-cat@sh.itjust.works to LocalLLaMA@sh.itjust.works • Guide on setting up a local GGML model? (English)
1 · 3 years ago
What’s the problem you’re having with kobold? It doesn’t really require any setup: download the exe, click on it, select a model in the window, click launch. The webui should open in your default browser.
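If you’d rather skip the file-picker window, kobold.cpp can also be launched from the command line. A minimal sketch, assuming a typical koboldcpp install; the model filename and port here are placeholders, adjust for your system:

```shell
# Launch kobold.cpp with a model specified up front instead of via the GUI picker.
# --model points at a GGML model file; --port sets where the webui listens.
./koboldcpp.exe --model ./models/wizard-vicuna-13b.ggmlv3.q4_0.bin --port 5001
```

Once it starts, the webui should be reachable at http://localhost:5001 in your browser.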
actually-a-cat@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • kobold.cpp now supports NTK scaling and it works (English)
2 · 3 years ago
Small update: take what I said about the breakage at 6000 tokens with a pinch of salt. Testing is complicated by something somewhere breaking in a way that persists through generations and even kobold.cpp restarts… It must be some driver issue with CUDA, because it takes a PC reboot to resolve; then the exact same generation goes from gibberish to correct.
actually-a-cat@sh.itjust.works to LocalLLaMA@sh.itjust.works • Discovering Locally Run Language Models: Share Your Favorites/Not So Favorites! (English)
1 · 3 years ago
W-V is supposedly trained on “USER:/ASSISTANT:”, but I’ve found it flexible and able to work with any format that’s consistent. For creative writing I’ll often use “USER:/STORY:”. More than two such tags also work: e.g. I did an RPG-style thing with three characters plus an omniscient narrator, just by describing each of them with their tag in the prompt, and it worked nearly flawlessly. Very impressive, actually.
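The multi-tag idea above can be sketched as a tiny prompt builder. This is just an illustration of the format, not anything from W-V itself; the tag names, the system line, and the helper function are all made up for the example:

```python
def build_prompt(system: str, turns: list[tuple[str, str]], next_tag: str) -> str:
    """Build a Vicuna-style prompt where each turn is a (TAG, text) pair.

    Any consistent set of tags works in place of the usual USER:/ASSISTANT:
    pair, e.g. several character tags plus an omniscient NARRATOR.
    """
    lines = [system]
    for tag, text in turns:
        lines.append(f"{tag}: {text}")
    # Leave the final tag open so the model continues as that speaker.
    lines.append(f"{next_tag}:")
    return "\n".join(lines)

prompt = build_prompt(
    "A fantasy RPG. ALICE and BOB are adventurers; NARRATOR describes the world.",
    [
        ("ALICE", "I push open the tavern door."),
        ("BOB", "I follow, hand on my sword."),
    ],
    next_tag="NARRATOR",
)
print(prompt)
```

The key point is only consistency: each speaker always gets the same tag, and the prompt ends with an open tag for whoever should speak next.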
actually-a-cat@sh.itjust.works to LocalLLaMA@sh.itjust.works • Discovering Locally Run Language Models: Share Your Favorites/Not So Favorites! (English)
1 · 3 years ago
The wizard-vicuna family is my favorite; they successfully combine lucidity with creativity. Wizard-vicuna-30b is competitive with guanaco-65b in most cases while being subjectively more fun. I hope we get a 65b version, or a Falcon 40B one.
I’ve been generally unimpressed with models advertised as good for storytelling or roleplay; they tend to be incoherent. It’s much easier to get wizard-vicuna to write fluent prose than it is to get one of those to stop mixing up characters or rules. I think there might be some sort of poison pill in the Pygmalion dataset: it’s the common factor in all the models that didn’t work well for me.
Those are OpenCL platform and device identifiers; you can use clinfo to find out which numbers are what on your system.
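If clinfo is installed, the compact listing is enough to read off the indices (the output below will differ per machine):

```shell
# List OpenCL platforms and their devices with their indices.
clinfo -l
```

Each platform and each device under it is shown with its index, and those are the numbers kobold.cpp expects.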
Also note that if you’re building kobold.cpp yourself, you need to build with LLAMA_CLBLAST=1 for OpenCL support to exist in the first place, or with LLAMA_CUBLAS for CUDA.
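A minimal build sketch, assuming a standard koboldcpp checkout with its Makefile and the CLBlast/CUDA development packages already installed:

```shell
# Fetch the sources and build kobold.cpp with OpenCL (CLBlast) support.
git clone https://github.com/LostRuins/koboldcpp
cd koboldcpp
make LLAMA_CLBLAST=1

# Or, for CUDA (cuBLAS) support instead:
# make LLAMA_CUBLAS=1
```

Without one of those flags the binary is CPU-only and the GPU-related options won’t do anything.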