

Yes, because making locally hosted LLMs actually useful means you don't need to rely on cloud-based and often proprietary models like ChatGPT or Gemini, which hoover up all of your data.


This is very cool. I'll dig into it a bit more later, but do you have any data on how much it reduces hallucinations or mistakes? I'm sure that's not easy to come by, but I figured I would ask. And would this prevent you from still using the built-in web search in OWUI to augment the context if desired?


This article in the Guardian is definitely worth a read if you're not intimately familiar with just how it got this way… It's 8 years old so it won't cover recent history, but it does give you an idea of how it started.
And yes, Robert Maxwell (father of Ghislaine) is mostly to blame.


Thanks. I may give an updated system prompt like this a shot. I'm not sure where mine went wrong, other than maybe it wasn't being honored or seen by OpenRouter (I'm not running 120B locally; it's too large for my setup). I'm actually a bit confused about how to set parameters with OpenRouter.
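For my own notes, this is roughly how I understand parameters are supposed to be passed when hitting OpenRouter directly (it exposes an OpenAI-compatible chat completions API); the model slug, key, and values below are just illustrative and untested:

```python
import requests

# Rough sketch of a direct OpenRouter call (OpenAI-compatible chat completions).
# Model slug, API key, and parameter values are placeholders.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_OPENROUTER_API_KEY"},
    json={
        "model": "openai/gpt-oss-120b",
        "messages": [
            {"role": "system", "content": "Be concise and skip tables unless asked."},
            {"role": "user", "content": "Why can't my containers resolve each other by name?"},
        ],
        # Sampling parameters go in the same request body as the messages.
        "temperature": 0.7,
        "max_tokens": 400,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```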


Yes, I do locally host several models, mostly the Qwen3 family stuff like 30B-A3B etc. I have been trying GLM 4.5 a bit through OpenRouter and I've been liking the style pretty well. Interesting to know I could potentially just pop in some larger RAM DIMMs and run even larger models locally. The thing is, OR is so cheap for many of these models, and with zero-data-retention policies available, I feel a bit stupid for even buying a 24 GB VRAM GPU to begin with.


Is this Grok Code Fast 1? I've noticed it's been topping the OR programming charts recently. I was going to try it out, but unsurprisingly it won't respect my zero-data-retention preference.
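(For anyone curious how that preference gets enforced: as I understand OpenRouter's provider routing options, you can send a provider block like the sketch below and any provider that retains prompt data simply gets excluded from routing. The field names are from memory, so double-check them against the docs.)

```python
# Sketch of OpenRouter provider routing preferences as I understand them:
# ask the router to skip any provider that stores or trains on your prompts.
# Field names and values are from memory -- verify against OpenRouter's docs.
payload = {
    "model": "x-ai/grok-code-fast-1",  # illustrative slug
    "messages": [{"role": "user", "content": "Refactor this function for clarity."}],
    "provider": {
        "data_collection": "deny",  # exclude providers that retain prompt data
    },
}
```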


Honestly, it has been good enough until recently, when I've been struggling specifically with Docker networking stuff and it's been on the struggle bus with that. Yes, I'm using OpenRouter via OpenWebUI. I used to run a lot of stuff locally (mostly 4-bit quants of 32B and smaller models, since I only have a single 3090), but lately I've been trying larger models out on OpenRouter since many of the non-proprietary ones are super cheap. Like fractions of a penny for a response… Many are totally free up to a point as well.


I tried that! I literally told it to be concise and to limit its response to a certain number of words unless strictly necessary, and it seemed to completely ignore both.
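One thing I may try next, since prompt-level instructions can apparently just be ignored: capping the response length at the API level instead. A minimal sketch (slug and values are arbitrary):

```python
# Sketch: enforce brevity with a hard completion-token cap rather than relying
# only on a "be concise" system prompt, since the cap is applied server-side.
payload = {
    "model": "openai/gpt-oss-120b",  # illustrative slug
    "messages": [
        {"role": "system", "content": "Answer in plain prose, no tables."},
        {"role": "user", "content": "Summarize the fix in a few sentences."},
    ],
    "max_tokens": 250,  # hard ceiling on the response length
}
```

The obvious downside is that a hard cap truncates rather than making the model plan a shorter answer, so it's a blunt instrument.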


I definitely have been looking out for this for a while. I've been wanting to replicate GPT Deep Research but haven't seen a great way to do it. I did see that there was an OWUI tool for this, but it didn't seem particularly battle-tested, so I hadn't checked it out yet. I've been curious about how good the new Tongyi Deep Research might be…
That said, specifically for troubleshooting somewhat esoteric (or at least quite bespoke in terms of configuration) software problems, I was hoping the larger coder-focused models would have enough built-in knowledge to suss out the issues. Maybe I should be having them consistently augment their responses with web searches if this isn't the case? I typically have not been clicking that button.
I do generally try to paste in or link as much of the documentation as I can for whatever software I'm troubleshooting, though.


The coder model (480B). I initially mistakenly said the 235B one but edited that. I didn't know you could customize the quant on OpenRouter (and I thought the differences between most modern 4-bit quants and 8-bit were minimal as well…). I have tried GPT-OSS 120B a bunch of times, and though it seems 'intelligent' enough, it is just too talkative and verbose for me (plus I can't remember the last time it responded without somehow working an elaborate comparison table into the response), which makes it too hard to parse through things.
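(On the quant-selection point: my understanding is that it goes through the same provider routing block as the other preferences, something like the sketch below. The exact field names and allowed values are worth confirming against OpenRouter's docs.)

```python
# Sketch of restricting OpenRouter routing to providers that serve specific
# quantizations of a model. Slug and values are illustrative and from memory.
payload = {
    "model": "qwen/qwen3-coder",  # illustrative slug for the 480B coder model
    "messages": [{"role": "user", "content": "Review this Dockerfile for mistakes."}],
    "provider": {
        "quantizations": ["fp8", "bf16"],  # only route to these precisions
    },
}
```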


Even if this isn't going to solve the issue of the quality of the LLM's advice and help, it would massively simplify my current workflow, which is copy/pasting logs, command output, and everything else into the OWUI window. I'll check it out. Can you use OpenRouter with VS Code to get access to more models?


In my opinion, Qwen3-30B-A3B-2507 would be the best here. The Thinking version is likely best for most things, as long as you don't mind a slight speed penalty in exchange for more accuracy. I use the quantized IQ4_XS models from Bartowski or Unsloth on HuggingFace.
I've seen the new GPT-OSS 20B model from OpenAI ranked well in benchmarks, but I have not liked the output at all. It typically seems lazy and not very comprehensive, and it makes obvious errors.
If you want something even smaller and faster, the Qwen3 8B distill of DeepSeek R1 0528 is great for its size (especially if you're trying to free up some VRAM to use larger context lengths).
Looks like it now has Docling Content Extraction Support for RAG. Has anyone used Docling much?
Oh, and I typically get 16-20 tok/s running a 32B model on Ollama using Open WebUI. Also, I have experienced issues with 4-bit quantization of the K/V cache on some models myself, so just FYI.
It really depends on how you quantize the model and the K/V cache as well. This is a useful calculator: https://smcleod.net/vram-estimator/ I can comfortably fit most 32B models quantized to 4-bit (usually Q4_K_M or IQ4_XS) on my 3090's 24 GB of VRAM with a reasonable context size. If you're going to need a much larger context window to input large documents etc., then you'd need to go smaller on model size (14B, 27B, etc.) or get a multi-GPU setup or something with unified memory and a lot of RAM (like the Mac Minis others are mentioning).
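For a rough back-of-envelope version of what that calculator is doing (my own simplification, not its actual formula): the weights take roughly params × bits per weight ÷ 8 bytes, and the K/V cache grows linearly with context length. Something like:

```python
# Back-of-envelope VRAM estimate -- my own simplification, not the calculator's
# exact formula. All the example numbers below are illustrative.
def estimate_vram_gb(
    params_b: float,        # parameter count in billions
    weight_bits: float,     # effective bits per weight (~4.25-4.85 for IQ4_XS/Q4_K_M)
    n_layers: int,          # transformer layers
    kv_heads: int,          # K/V heads (fewer than attention heads with GQA)
    head_dim: int,          # dimension per head
    ctx_len: int,           # context window in tokens
    kv_bits: float = 16.0,  # 16 unless you also quantize the K/V cache
) -> float:
    weights = params_b * 1e9 * weight_bits / 8
    # K and V each store n_layers * kv_heads * head_dim values per token
    kv_cache = 2 * n_layers * kv_heads * head_dim * ctx_len * kv_bits / 8
    overhead = 1.5e9  # CUDA context, activations, buffers -- rough guess
    return (weights + kv_cache + overhead) / 1e9

# Illustrative 32B-dense-ish shape at ~4.5 bits with an 8k context window
print(round(estimate_vram_gb(32, 4.5, 64, 8, 128, 8192), 1), "GB")  # ~21.6 GB
```

Which roughly matches fitting a 4-bit 32B model on 24 GB as long as the context stays modest; quantizing the K/V cache or dropping to a smaller model is what buys you room for much longer contexts.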


This would be a great potential improvement in UX for streaming sports feeds for sure: not having to navigate web pages and start/manage streams manually, etc. Does anyone know if this is possible for sites serving these streams, like FirstRowSports or StreamEast?
Is this what the Leonard Cohen song is about?
First time I’ve seen y’all refer to each other as Beeple and I’m just dropping in here to say I love the term.


True; I think I used LineageOS or similar back when I was still on Android, but if you're not in the 0.01% who do have a custom Android OS installed, it seems like a privacy-focused map app is potentially still of limited use.
As someone who has used both Evernote and Standard Notes, I can say that the latter is obviously better for privacy, but if you're really looking for a FOSS option and want something more powerful than a simple notes store (albeit with a bit of a learning curve), you should definitely check out Logseq.