  • Ah, gotcha. Nothing had been using them yet because I’d only just gotten the API key configured the day prior. But I already had Traefik running several dozen self-hosted services that I use all the time, so the only “new” piece was adding API key support to Traefik.

    One of my planned projects is an all-in-one, self-hostable, FOSS, AI-augmented novel-planning, novel-writing, ebook, and audiobook studio. I’m envisioning being able to replace Scrivener, Sudowrite, and Vellum, and then also have an integrated audiobook studio, while making it so that at every step you could easily import or export artifacts to / from other tools.

    Since I also run a tabletop RPG, and there’s a lot of overlap between the functionality you’d want for novel planning and for TTRPG planning, I plan to build it to be capable in that regard, too.

    In both cases, the critical AI functionality that I want to implement (and that, afaik, hasn’t been done well elsewhere) is elegantly handling concepts from the world-building section; a rough sketch of the data model I have in mind follows the list. For example:

    • Automatic state tracking, where a scene following the outline is written or generated and the changes to state are calculated from the text.
      • Example: the MC started with $100 and spends $5 buying a magazine. Now MC has a magazine and $95
      • Example: a character leaves the scene, heading to another location
      • Example: a minor character overhears a secret conversation about the villain’s plan
      • Example: a character is killed
    • Manual state tracking
      • Example: MC left the Macguffin with their mentor, but off page the mentor was killed and the Macguffin was stolen by the villain
      • Example: MC thinks something happened, but they misinterpreted it, so the user edits the automatically calculated state with a clarification: this is what MC thinks; this is what actually happened
    • Syncing state changes with timelines.
      • Example: a scene in chapter 8 is a flashback to before the start of the book, so nothing that’s happened since then has happened yet
      • Example: after having written the first draft, you realize you should have introduced the Macguffin much earlier, so you edit a scene in chapter 3 to include a mention of it. The timeline is updated to incorporate that information.
      • Example: you move a scene from chapter 7 to chapter 4 for the sake of pacing. The state at the start of the scene is re-analyzed, the changes in the scene are propagated, and any conflicts are flagged, both in this scene and in any that follow. E.g., MC had $95 in chapter 4 and $60 in chapter 7, and loses their wallet in the moved scene, so now MC should have lost a wallet containing $95 and won’t be able to make the purchases they made between this scene’s new position and chapter 7
      • Example: You add a new scene in chapter 5 after having already written chapters 6-20. The changes in state due to this scene are propagated out and any resulting conflicts are noted
    • Information concealing
      • Example: MC doesn’t know that the Macguffin has been stolen, and neither does the reader. But if you tell the LLM that it’s been stolen at this point, the generated text will often immediately give this away
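
    To make the state-tracking ideas above concrete, here’s a minimal sketch of the kind of data model I have in mind. Every name in it (StateDelta, WorldState, state_at, etc.) is hypothetical; it’s just one way to represent per-scene state changes so they can be replayed in timeline order, reordered, and diffed:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class StateDelta:
        """One change to world state, extracted from (or manually attached to) a scene."""
        scene_id: str
        entity: str            # e.g., "MC", "mentor", "Macguffin"
        attribute: str         # e.g., "money", "location", "alive", "holder"
        value: object          # the new value once the scene has happened
        source: str = "auto"   # "auto" (calculated from text) or "manual" (user override)
        known_to: set = field(default_factory=set)  # who knows it happened (concealing)

    @dataclass
    class WorldState:
        """World state at a point on the timeline: entity -> attribute -> value."""
        entities: dict = field(default_factory=dict)

        def apply(self, delta: StateDelta) -> None:
            self.entities.setdefault(delta.entity, {})[delta.attribute] = delta.value

    def state_at(deltas: list[StateDelta], timeline_order: list[str],
                 scene_id: str) -> WorldState:
        """Replay deltas in timeline order (not chapter order) up to, but not
        including, the given scene. Moving a scene just changes timeline_order;
        every later scene's starting state can then be recomputed and diffed
        against what the prose assumes, which is where conflicts get flagged."""
        state = WorldState()
        for sid in timeline_order:
            if sid == scene_id:
                break
            for d in deltas:
                if d.scene_id == sid:
                    state.apply(d)
        return state
    ```

    Manual overrides would just be deltas with source="manual" that win over automatic ones, and the known_to set is where the information-concealing logic would hook in: anything the point-of-view character isn’t in known_to for shouldn’t be surfaced to the LLM when generating their scenes.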

    Another critical feature is versioning, both automated and manual, such that a user can roll back to a previous version and tag points in time as Rough Draft, Second Draft, and so on.

    I’d also like to build an alpha / beta reader function: share a link and allow readers to give feedback (comments on particular sections, highlights, emoji reactions, as well as reporting on reading behavior, like rereading a section or jumping back after reading it, which can be indicative of confusing writing), enable soliciting the same sort of feedback from AIs, and build tools to combine and analyze all of that feedback.

    I could go on about the things I’d love to build in that app, but then I’d be here all day.

    I don’t have that tool built yet, obviously, but it will need to integrate with everything I’ve worked on - LLMs, embeddings, image generation, audio generation - heck, even video generation could be useful, but that’s a whole different story on its own.

    That app will need to be able to connect to such services from the browser or the backend directly, depending on the user’s preferences and how the services are configured.

    In the meantime, having API key support means I can use my self-hosted services with other tools.

    • The FOSS NotebookLM clone supports that.
    • I still haven’t touched n8n, but I’d been (and still am) planning to.
    • I’d been toying with subbing to Novelcrafter, which allows you to connect to an ollama instance.
    • I learned about PlotBunni around the time of this comment and spun up my own instance, then forked the project, added support for API keys, and made some other bug fixes. I started adding support for storing data on the server and synchronizing it, but never fully got that working before having to set the project aside to focus on my day job.
    • I can now use the ComfyUI Remote app outside of my own network (I think I was already able to do this before by configuring a service user in my auth provider and enabling basic authentication, with a base64-encoded username/password as the Bearer token - sketched after this list), which is nice because Comfy is a pain to use on a phone.
    • Likewise with Kokoro - there is (or was; I’m unsure if it’s been fixed) a bug in the web client that means only Chrome browsers can use it, but because I added API key support to the server, I can expose the service and access it from outside my network with a different client running on my phone.
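
    For reference, that token trick is just base64-encoding user:password and sending it as a Bearer token, and the API key case is the same request with a static header. A rough Python sketch - the URL, credentials, and header name are all placeholders for however your middleware is configured:

    ```python
    import base64

    import requests

    # Hypothetical endpoint; substitute a service you've put behind Traefik.
    url = "https://comfy.example.com/queue"

    # Option 1: basic-auth credentials, base64 encoded and sent as a Bearer token.
    creds = base64.b64encode(b"service-user:hunter2").decode()
    r = requests.get(url, headers={"Authorization": f"Bearer {creds}"})
    print(r.status_code)

    # Option 2: a static API key; the header name depends on the middleware config.
    r = requests.get(url, headers={"X-Api-Key": "my-secret-key"})
    print(r.status_code)
    ```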

    I’ve been pretty busy and haven’t really touched any of this in over a month now, but it’s certainly not for lack of use cases.



  • I genuinely don’t understand why people here are taking it so hard that I wish the Immich devs were using semver.

    Because you didn’t say that; you said “Breaking changes in a point release? Not cool” and later “I’m basing this off the guidelines at semver.org.”

    I’m paraphrasing your comments from memory, to be clear, so apologies if I misquoted you.

    It certainly felt to me like you were assuming that this project was using semver and was not following it well, not that you wouldn’t want to use a project that receives this many breaking changes / that doesn’t follow semver. Those complaints both make a lot more sense to me - and I’ve seen many people say similar things about Immich in the past. In fact, it’s a big part of why I haven’t migrated from Photoprism to Immich myself - in this regard they’re complete opposites.


  • I don’t think there’s any room to argue that announcing a 1.x with a change the developers say is a breaking change, which is what Immich have done, fits within the semver.org guidelines.

    That wasn’t the argument.

    Following semver is optional. If a project doesn’t explicitly state it is following semver, it shouldn’t be assumed that it is. With regard to Immich in particular, a cursory review of their documentation makes it clear that they are not following semver. Literally, go to https://immich.app/ and read the text at the very top of the page:

    ⚠️ The project is under very active development. Expect bugs and changes.

    Go to the repo and you’ll see the README, which states at the very top:

    • ⚠️ The project is under very active development.
    • ⚠️ Expect bugs and breaking changes.

    If you can read that, see that they’re on major version 1 with a minor version over 100, and you still think they’re using semver, then that’s on you.

    The devs have stated they won’t be using semver until they consider Immich production ready, and that moving to a 1.x version from 0.x was a mistake made some time ago. If you want to think about it as though it is semver, consider the major version to still be 0. See https://github.com/immich-app/immich/discussions/5086#discussioncomment-7593227 for example.

    As this project is clearly not following semver, the semver guidelines aren’t applicable and haven’t been violated.

    I don’t think there’s any room to argue

    Even if semver were applicable in this case, I would still disagree. The text from semver.org states:

    8. Major version X (X.y.z | X > 0) MUST be incremented if any backwards incompatible changes are introduced to the public API.

    It doesn’t state that any backwards incompatible changes, period, require a major version increase - only changes to the public API. I would personally argue that the deployment configuration is part of the public API, but not all project owners agree with me. Even those who do agree might say that this was not a documented deployment configuration and thus not part of the public API, so it doesn’t necessitate a major version increase - but since they knew people were using that configuration anyway, they included a note about a potentially breaking change as a courtesy to those users.






  • Is your goal to create things that can be published or used in a project, or to create audiobooks for yourself to listen to?

    For voiceovers for text, I use Kokoro FastAPI, which has a web frontend. The frontend is only compatible with Chromium browsers on desktop or Android, which sucks, as my daily drivers are Firefox and an iPhone (there are workarounds in the thread), but it supports voice mixing, speed changes, etc. It also has an issue where it keeps the models (about 3 GB) in memory; I keep the CPU version loaded normally and swap to the GPU version if I need it to be faster. If you want something similar for Bark, check out Bark-GUI.
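
    If the frontend’s browser restriction bites you, you can also hit the server directly: Kokoro FastAPI exposes an OpenAI-style speech endpoint, so something like the following should work from any device. Treat this as a sketch - the host, port, model, and voice names all depend on your deployment:

    ```python
    import requests

    # Assumes a Kokoro FastAPI instance on its default port; adjust to taste.
    resp = requests.post(
        "http://localhost:8880/v1/audio/speech",
        json={
            "model": "kokoro",
            "input": "Chapter one. It was a dark and stormy night.",
            "voice": "af_heart",   # mixing is supported, e.g. "af_heart+af_sky"
            "speed": 1.0,
            "response_format": "mp3",
        },
        timeout=300,
    )
    resp.raise_for_status()
    with open("chapter_one.mp3", "wb") as f:
        f.write(resp.content)
    ```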

    I’ve also dabbled a bit in some TTS features that have Comfy nodes, though at this point mostly just in terms of getting them set up. For my purposes thus far Kokoro has been fine (and I prefer the FastAPI project over the Comfy nodes for most of my uses), but I’ve found nodes for Kokoro, Dia, F5 TTS, Orpheus, and Zonos.

    Autiobooks and audiblez both look promising. A few weeks ago, I used the Kokoro FastAPI web frontend to create an audiobook for an ebook I worked on that used entirely self-hosted AI generation for the outlining and prose. Audiblez, which I found out about like two days later, looks like it would have simplified that process substantially. Still, I’d personally like something more like an audiobook studio, where I can more easily swap voices back and forth, add emotions, play with speed on a more granular level, etc. I’m thinking about building something like that myself at some point, but it’ll be a minute - hopefully someone else will beat me there.

    I posted a comment here a few weeks back on a similar topic. I’ve since used OpenReader-WebUI and like it, though that’s not for producing audiobooks, but for a read-along experience. Reproducing the comment below in case it’s helpful for you:

    If you want to generate audiobooks using your own / a hosted TTS server, check out one of these options:

    • OpenReader-WebUI - this has built-in read along capability and can be deployed as a PWA that can allow you to download the audiobooks to your phone so you can use them offline
    • p0n1/epub_to_audiobook
    • ebook2audiobook

    If you don’t have a decent GPU, Kokoro is a great option, as it’s fast enough to run on CPU and still sounds very good.

    If you’re going to use Kokoro, Audiblez (posted by another commenter) looks like it makes that more of an all-in-one option.

    If you want something that you can use without an upfront building of the audiobook, of the above options, only OpenReader-WebUI supports that. RealtimeTTS is a library that handles that, but I don’t know if there are already any apps out there that integrate it.

    If you have the audiobook generation handled and just want to be able to follow along with text / switch between text and audio, check out https://storyteller-platform.gitlab.io/storyteller/

  • Right now I have Ollama / Open-WebUI, Kokoro FastAPI, ComfyUI, Wan2GP, and FramePack Studio set up. I recently (as in yesterday) configured an API key middleware with Traefik and placed it in front of Ollama and Comfy, but nothing is using them yet.
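
    For the curious: Traefik doesn’t ship an API key middleware out of the box, so here’s a hedged sketch of one way to get that effect (not necessarily how mine is set up) - the ForwardAuth middleware pointed at a tiny auth service. Traefik forwards each request’s headers to the service and only lets the request through on a 2xx response:

    ```python
    # Minimal ForwardAuth endpoint using only the standard library.
    # The header name and key are examples; read the real key from an env var.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    API_KEY = "my-secret-key"

    class AuthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Traefik's forwardAuth passes the original request headers along,
            # so the client's key can be checked here.
            supplied = self.headers.get("X-Api-Key", "")
            self.send_response(200 if supplied == API_KEY else 401)
            self.end_headers()

    HTTPServer(("0.0.0.0", 9000), AuthHandler).serve_forever()
    ```

    You’d then attach the middleware to the Ollama and Comfy routers in the Traefik config and include the header in every request.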

    I’ll probably try out Devstral with one of the agentic coding frameworks, like Void or Anon Kode. I may also try out one of the FOSS writing studios (like PlotBunni) and connect my own Ollama instance. I could use Novelcrafter, but paying a subscription fee to use my own server for the compute-intensive part feels silly to me.

    I tried to use Open Notebook (basically a replacement for NotebookLM) with Ollama and Kokoro, with Kokoro FastAPI as my OpenAI endpoint, but it turned out it only supported, and required, text embeddings from OpenAI, so I couldn’t run it fully locally. At some point, if they don’t fix that, I’m planning to either add support myself or set up some routes with Traefik where the ones Open Notebook uses point to the service I want to use.
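
    The Traefik rerouting idea should be workable because Ollama speaks the OpenAI protocol, including /v1/embeddings, so any OpenAI embeddings call can be pointed at it. A quick sketch with the openai Python client - the model name assumes you’ve already pulled an embedding model:

    ```python
    from openai import OpenAI

    # Point the OpenAI client at a local Ollama instance instead of api.openai.com.
    # The api_key just has to be non-empty; Ollama ignores it.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    resp = client.embeddings.create(
        model="nomic-embed-text",  # assumes: ollama pull nomic-embed-text
        input="The Macguffin was stolen while the MC was away.",
    )
    print(len(resp.data[0].embedding))  # embedding dimensionality
    ```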

    ETA: n8n is one of the services I plan to set up next, and I’ll likely end up integrating both Ollama and Comfy workflows into it.



  • You can run a NAS with any Linux distro - your limiting factor is having enough drive storage. You might want to consider something that’s great at using virtual machines (e.g., Proxmox) if you don’t like Docker, but I have almost everything I want running in Docker and haven’t needed to spin up a single virtual machine.


  • Assuming you’re using ollama (is there another reason to use ollama.com?), you can use compatible files from huggingface directly in ollama. The model page will give you the instructions for the command to run; I always change ollama run to ollama pull, though. Instructions: https://huggingface.co/docs/hub/ollama

    You should be able to fit Qwen3 32B at Q4_K_M with an acceptable context, and it did very well on math benchmarks (with thinking enabled). You can disable thinking by including /no_think at the end of your prompt to speed up responses, but I’m not sure how well it handles math under those circumstances. I wouldn’t even consider disabling thinking unless you were grading one question per prompt.
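
    If you’re scripting the grading rather than chatting through a frontend, the ollama Python client makes the /no_think toggle easy to experiment with (the model tag matches the library page below; the question is just an example):

    ```python
    import ollama

    question = "Is 7/9 + 5/12 greater than 1? Show your work."

    # Default: thinking enabled - slower, but stronger on math.
    slow = ollama.chat(model="qwen3:32b",
                       messages=[{"role": "user", "content": question}])

    # Thinking disabled: append /no_think to the prompt for faster responses.
    fast = ollama.chat(model="qwen3:32b",
                       messages=[{"role": "user", "content": question + " /no_think"}])

    print(slow["message"]["content"])
    print(fast["message"]["content"])
    ```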

    The ollama Qwen3 page is https://ollama.com/library/qwen3:32b and the default 32B quant is Q4_K_M. I personally am using the Q6_K quant by unsloth, and their quants have been great (when supported by ollama), often being the first to fix bugs impacting other quantizations.

    I’m not sure if Q4_K_M is the optimal quant style for Intel Arc, but the others that might be better are not supported by ollama, anyway, as far as I know.

    Qwen3’s real-world knowledge is bad, so if there are questions that rely on that, you may need to include the relevant facts as part of the prompt or use an ollama frontend that supports web searches.

    Other options: this does seem like something Gemma3 27B would be good at, so it’s too bad you can’t use it. Older Gemmas may be good, but I’m not sure. Llama3.3 70B is also out, unless you have a decent amount of system RAM and are okay with offloading less than half to GPU; I could see it outperforming my recommendation above, but I would be very surprised for the 8B version to do so. Older Qwen2.5 is decent at math, but unless you grab QwQ, it doesn’t include thinking.



  • Wow, there isn’t a single solution in here with the obvious answer?

    You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services) even though I bought my domains from Namecheap.

    Then, you can either set up Let’s Encrypt on device and have it generate certs in a location Jellyfin knows about (not sure what this entails exactly, as I don’t use this approach) or you can do what I do:

    1. Set up a reverse proxy - I use Traefik but there are a few other solid options - and configure it to use Let’s Encrypt and your domain name.
    2. Your reverse proxy should have ports 443 and 80 exposed, but should upgrade http requests to https.
    3. Add Jellyfin as a service and route in your reverse proxy’s config.

    On your router, forward port 443 to the outbound secure port from your Pi (which for simplicity’s sake should also be port 443). You likely also need to forward port 80 for Let’s Encrypt’s domain verification.
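
    Once the ports are forwarded, a quick way to sanity-check the whole chain from outside your network is a script like this (the domain is a placeholder; Jellyfin’s /health endpoint just returns “Healthy”):

    ```python
    import requests

    host = "jellyfin.example.duckdns.org"  # placeholder - use your own domain

    # The reverse proxy should upgrade plain HTTP to HTTPS...
    r = requests.get(f"http://{host}", allow_redirects=False, timeout=10)
    assert r.status_code in (301, 302, 307, 308), f"expected a redirect, got {r.status_code}"

    # ...and HTTPS should serve a valid Let's Encrypt cert
    # (requests verifies certificates by default).
    r = requests.get(f"https://{host}/health", timeout=10)
    print(r.status_code, r.text)  # expect: 200 Healthy
    ```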

    If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback, then you can use the server’s IP address and expose Jellyfin’s HTTP ports (e.g., 8080) - just make sure not to forward those ports from the router. You’ll have local unencrypted transfers if you do this, though.

    Make sure you have secure passwords in Jellyfin. Note that you are vulnerable to a Jellyfin or Traefik vulnerability if one is found, so make sure to keep your software updated.

    If you use Docker, I can share some config info with you on how to set this all up - Traefik, Jellyfin, and a dynamic DNS service - with docker-compose.


  • Look up “LLM quantization.” The idea is that each parameter is a number; by default they use 16 bits of precision, but if you scale them down to smaller sizes, you use less space and have less precision, while still keeping the same parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)

    If you’re using a 4-bit quantization, then you need about half the parameter count in GB of VRAM (4 bits is half a byte per parameter), plus some overhead. Q4_K_M is better than Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.

    For example, Llama3.3 70B, which has 70.6 billion parameters, has the following sizes for some of its quantizations:

    • q4_K_M (the default): 43 GB
    • fp16: 141 GB
    • q8: 75 GB
    • q6_K: 58 GB
    • q5_k_m: 50 GB
    • q4: 40 GB
    • q3_K_M: 34 GB
    • q2_K: 26 GB
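
    Those sizes line up with a simple back-of-the-envelope calculation: parameter count times bits per parameter, divided by 8 to get bytes. The bits-per-parameter figures below are effective values I’ve eyeballed to account for the mixed precisions and block overhead in the K-quants:

    ```python
    # Rough model file size: params (billions) * effective bits per param / 8
    # gives gigabytes directly (1e9 params * bits/8 bytes = GB).
    def approx_size_gb(params_billions: float, bits_per_param: float) -> float:
        return params_billions * bits_per_param / 8

    # Effective bits per parameter (approximate, includes quant overhead).
    for name, bits in [("fp16", 16.0), ("q8", 8.5), ("q6_K", 6.56),
                       ("q5_K_M", 5.67), ("q4_K_M", 4.87), ("q4", 4.55),
                       ("q3_K_M", 3.89), ("q2_K", 2.95)]:
        print(f"{name}: ~{approx_size_gb(70.6, bits):.0f} GB")
    ```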

    This is why I run a lot of Q4_K_M 70B models on two 3090s.

    Generally speaking, there’s not a perceptible quality drop going from 8-bit quantization to Q6_K (though I have heard this is less true with MoE models). Below Q6, there’s a bit of a drop going to Q5 and then Q4, but the models are still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.

    TheBloke on Huggingface has a lot of GGUF quantization repos, and most, if not all, of them have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.


  • I recommend a used 3090, as that has 24 GB of VRAM and generally can be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090 and while admittedly more expensive than the inexpensive 24GB Nvidia Tesla card (the P40?) it also has much better performance and CUDA support.

    I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.