On a totally unrelated note, https://msty.app/ is extremely easy to use and runs the LLM locally. Good model choices are Llama 3.2, Granite, DeepSeek R1 and Dolphin 3… it can run on Nvidia, Apple silicon and plain CPU. They claim AMD support too (and supposedly have since November), but it’s not working. Not as friendly, but https://lmstudio.ai/ runs on just about any hardware.
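If you want to script against one of these, LM Studio (like most of the local runners) exposes an OpenAI-compatible server on localhost. A minimal sketch, assuming LM Studio’s default port 1234 and a placeholder model name:

```python
# Minimal sketch: talk to a locally running model through the OpenAI-compatible
# endpoint LM Studio exposes (default http://localhost:1234/v1).
# The model name is a placeholder; use whatever you actually have loaded.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "llama-3.2-3b-instruct",  # placeholder, match your loaded model
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Since it’s all local, nothing ever leaves your machine.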
Those are quite the minimum specifications to run it! Damn! Have they ever heard of “optimization”? Because SD 3.5 Medium got an image out of my iPhone 13 Pro in 4 minutes, running locally… with a total of 6 GB of system RAM. So I’m going to say that 32 GB AND an RX 7900 are a little laughable (even if much faster, no doubt).
I have a few examples that I hope retain their metadata.
Seed mode is… basically, I stopped using Automatic1111 a long time ago and kinda lost track of what goes on there, but in the app I use (Draw Things) there’s a seed mode called Scale Alike. It could be exclusive to it, or it could be the standard everywhere for all I know. It does what it says: change the resolution and things will keep looking close enough.
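No idea how Draw Things actually implements it, but the usual trick behind this kind of mode (a sketch under that assumption, nothing official) is to derive the starting noise from the seed at a fixed reference size and then resize it to whatever resolution you asked for, so the coarse structure of the noise, and with it the composition, barely changes:

```python
# Hedged sketch of a "scale alike"-style seed mode, NOT Draw Things' actual code:
# generate the initial latent noise from a fixed seed at a reference resolution,
# then resize it to the target resolution so the large-scale structure (and
# therefore the composition) stays roughly the same at any size.
import torch
import torch.nn.functional as F

def scaled_noise(seed: int, target_hw, ref_hw=(64, 64), channels=4):
    gen = torch.Generator().manual_seed(seed)
    base = torch.randn(1, channels, *ref_hw, generator=gen)       # noise at reference size
    noise = F.interpolate(base, size=target_hw, mode="bilinear")  # stretch to target size
    # real implementations typically re-whiten this so it still looks like unit-variance noise
    return (noise - noise.mean()) / noise.std()

# Same seed, two sizes -> closely related starting noise, similar-looking results.
a = scaled_noise(1234, target_hw=(64, 64))
b = scaled_noise(1234, target_hw=(96, 64))
```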
Edit: obviously at some point they had to lose the bloody metadata….
This isn’t what you asked specifically, but it’s related enough… have a look at https://apps.apple.com/it/app/draw-things-ai-generation/id6444050820?l=en-GB as it’s free, ad-free, tracking-free and really well optimized. With it I can run Schnell on my iPhone 13 Pro!
Yep. No artistic skills here; fun shitposts like this one are, for me, only fueled by some sort of generative AI. In this instance I tried a few different models (initially I got a woman in a red shirt uniform with a bee logo, then a red shirt with yellow and black highlights; a different model gave me a buff, hairy bee in a red shirt uniform… on top of the Enterprise, holding a small Enterprise in one hand), but I had to give up and use Flux, even though it’s the most intensive to run. Hardly perfect, but it’s fun and easy to recognize :)
Monkey beer island of green and fight!