I'm playing around with LocalAI. It runs in Docker containers, so I'm doing it in Fedora. My goal is to find some way of making extra money, with retirement age on the horizon, and if I can figure out something that works really well I wouldn't be so reluctant to spend what it takes to improve my overall situation. I was going to use one of their quick start images designed for AMD GPUs, but I couldn't get it to run. On Reddit some users reported having trouble getting it to run on the same card I have in this machine; it looks like the majority of them are using Nvidia cards. They also have an image that just runs on a CPU, which is what I'm using right now. It's not ideal, but at least I can play around with it. The quick start image came with a few models to start with. GPT4 works well, but stablediffusion not so much. So I switched back to GPT4 and asked about the error:
LocalAI_session.jpg
No, the error message "Internal error: rpc error: code = Unknown desc = unimplemented" you encountered does not specifically relate to the choice of using a CPU or GPU for running Stable Diffusion or LocalAI. This error message usually indicates that the requested feature or functionality is not implemented or supported by the current version of the software you are running.
To address the Stable Diffusion image generation task:
If running Stable Diffusion locally on a CPU, it will generally be significantly slower than using a GPU. This is due to the computationally intensive nature of neural networks: training and inference consist of tensor operations, which are performed far more efficiently on a GPU.
Ensure the necessary dependencies and the correct version of Stable Diffusion and its associated libraries are properly installed.
Check your command line or script to make sure you are using the correct path for the Stable Diffusion executable and the correct parameters.
If you are using the CLI version of LocalAI, ensure you are using the right version of the CLI and that the appropriate parameters are set for running Stable Diffusion.
Double check the documentation to see if there are any prerequisites or instructions for setting up the environment.
If you are still facing issues, you can try contacting the support or community channels of the specific software you are using.
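For what it's worth, the quick start container exposes the usual OpenAI-compatible API, so it's easy to poke at it from Python and see what's actually responding. A minimal sketch, assuming the default port 8080 and the quick start's gpt-4 alias:

```python
# Sanity check against a local LocalAI container (sketch).
# Assumes the default port 8080 and the quick start "gpt-4" model alias;
# adjust both if your setup differs.
import requests

BASE = "http://localhost:8080/v1"

# List the models the container actually loaded.
models = requests.get(f"{BASE}/models").json()
print([m["id"] for m in models["data"]])

# Send a simple chat completion to the CPU backend.
resp = requests.post(
    f"{BASE}/chat/completions",
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Say hello."}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```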
I'll download some other models to try out. I do have a used Nvidia 3060 somewhere; I got it the last time I was thinking of trying this. I wouldn't want to put it in this machine, though; I'd probably put it in something else. It's kind of a weak card as far as Nvidia cards go.
I wouldn't have the first clue about what I'd want to do with that
Looks interesting to play with (but I'd need some sort of purpose to get into it).
Re: LocalAI
Posted: Sun Feb 15, 2026 8:57 am
by Zema Bus
I installed another model and got it to run. Thought I'd try a "quick" test; it's currently working on creating the image.
image_generation.jpg
Looks like it's going to be a while, maybe done by the time I wake up tomorrow lol! I'm not about to try creating a video with just my CPU.
Re: LocalAI
Posted: Sun Feb 15, 2026 4:15 pm
by TheeRadioDJ
All this for chocolate mayonnaise?
Re: LocalAI
Posted: Sun Feb 15, 2026 6:30 pm
by Zema Bus
lol!
Re: LocalAI
Posted: Sun Feb 15, 2026 9:01 pm
by Grogan
You could also rent a slot somewhere to process shit like that. I know my webhost has been pushing GPU servers for doing AI shit (but they wouldn't be one of the cheaper options).
However, if you're going to spend money it might as well be on hardware.
The first thing that came up in a search for that service was Digital Ocean... I've got all of their networks blocked (bots from their cloud) in iptables rules here
Pricing information:
With a 12-month commitment, H100 × 8 is priced at $1.99/GPU/hour, MI325X × 8 at $1.69/GPU/hour, and MI300X × 8 at $1.49/GPU/hour. On-demand pricing starts at $0.76/hour for RTX 4000 Ada, $1.57/hour for RTX 6000 Ada and L40S, $1.99/hour for AMD MI300X (single GPU), $3.39/hour for a single H100, and $3.44/hour for H200.
Just for one example. It would add up pretty quickly.
Re: LocalAI
Posted: Sun Feb 15, 2026 10:30 pm
by Zema Bus
Thanks Grogan, yeah, I've been weighing the pros and cons of the two options. Privacy is another factor when it comes to local vs. cloud; I'd prefer to keep my data local. Depending on the amount of usage, the cost of a cloud provider could reach the cost of a 5090 within a year.
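Rough back-of-the-envelope math (just a sketch; it assumes a 5090 near its $1,999 MSRP, which as noted below isn't what they actually sell for, and the $1.99/GPU/hour committed rate from the pricing above):

```python
# Hours of rented GPU time that equal the price of buying a card outright.
# Both numbers are assumptions, not quotes from any specific provider.
GPU_PRICE = 1999.00  # hypothetical RTX 5090 at MSRP, in dollars
RATE = 1.99          # dollars per GPU-hour (committed H100 rate above)

break_even_hours = GPU_PRICE / RATE
print(f"Break-even: {break_even_hours:.0f} rented hours")
print(f"That's about {break_even_hours / 365:.1f} hours/day over one year")
# ~1005 hours, i.e. roughly 2.8 hours of rented GPU time per day
# for a year before renting costs more than buying.
```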
Re: LocalAI
Posted: Sun Feb 15, 2026 11:59 pm
by Zema Bus
The 5090s are all over twice the MSRP, and most are sold out. And the 4090s are pretty much gone from the market; the few I saw cost as much as an overpriced 5090.
Re: LocalAI
Posted: Mon Feb 16, 2026 12:14 am
by Grogan
This AI craze is ruining everything. It's hard to even operate a web server anymore because of that shit.
If hardware prices had followed the trend from before the cryptocurrency, then AI, bullshit, I'd upgrade some shit or do another build, but no way, not now. What would have been a $300 video card is now $1200.
Re: LocalAI
Posted: Mon Feb 16, 2026 5:08 am
by Zema Bus
Hopefully the AI bubble eventually bursts, just like the cryptocurrency bubble and the dot-com bubble before that. I heard someone speculating that when that happens we might get access to some interesting HBM parts.
Well, I ended up buying a refurbished 3090 with 24 GB of memory. The 3060 I got a few years ago was also refurbished. Looking at the reviews, most of the buyers said they bought it to do AI stuff. It was the cheapest hardware option I could find to do what I want to do.
This pre-owned or refurbished product has been professionally inspected and tested to work and look like new. How a product becomes part of Amazon Renewed, your destination for pre-owned, refurbished products: A customer buys a new product and returns it or trades it in for a newer or different model. That product is inspected and tested to work and look like new by Amazon-qualified suppliers. Then, the product is sold as an Amazon Renewed product on Amazon. If not satisfied with the purchase, renewed products are eligible for replacement or refund under the Amazon Renewed Guarantee.
Re: LocalAI
Posted: Mon Feb 16, 2026 5:52 am
by Grogan
Hopefully it lasts long enough for you to experiment with, at least. It's probably more like, "customer buys video card, uses the shit out of it for a month for GPU processing, returns it"
Re: LocalAI
Posted: Thu Feb 19, 2026 8:46 am
by Zema Bus
The 3090 showed up today. I tested it after work and it seems to be working well. I don't think this is the machine I'm going to run it in (Intel i5); I have a more powerful AMD 5900X with 64 GB DDR4 that I was planning to put it in.
3090.jpg
Re: LocalAI
Posted: Thu Feb 19, 2026 11:05 am
by Grogan
Cool, I hope it works correctly for you for GPU processing in that LocalAI program.
Re: LocalAI
Posted: Sun Feb 22, 2026 7:52 am
by Zema Bus
I have the 3090 in the machine I'm going to use for this. I ended up using my Intel i7 12700KF (20 logical cores) with the same MSI MB you have. It was right there under the table, vs. the other one that's buried in the closet. It's the machine I was using for my work-related stuff and personal email etc. before I switched to the mini computer.

I thought it had 64 GB too, but it only had 32 GB, so I cannibalized another machine (the one I tested the card in initially) of its 32 GB. Now it has 64 GB of (mismatched) memory. I also cannibalized some drives, mostly small ones but enough to get started.

It would be interesting to also see how this machine does for gaming; I haven't gamed with an Nvidia card in many years. It's going to share the monitors and keyboard/mouse with my game machine. I have a USB switch box, and this machine will be connected via the HDMI ports while my game machine is connected via DP ports. It's the "poor man's KVM" lol! Only one machine at a time can be running this way, and I could hit the HDMI bug, but this machine will be powered off most of the time. I just don't have enough space in this house, at least not until I bite the bullet and have a metal building put up.
aimachine.jpg
Re: LocalAI
Posted: Sun Feb 22, 2026 11:15 am
by Grogan
I used to do something like that. One old computer connected to VGA, and the other to DVI. As long as I only started one of them at a time. "Poor man's KVM"
Re: LocalAI
Posted: Tue Mar 10, 2026 8:20 am
by Zema Bus
I realized the drives in this machine aren't going to be big enough for very long; the LLMs take up a lot of space. I couldn't find anything else lying around other than more small drives. Last May I got a 4 TB NVMe drive for my game machine for $250. That same drive is now going for over $600, and many others are over $800. Then I remembered I had a new, still-in-the-box 4 TB mechanical HD I got about 4 years ago and never ended up using because big SSDs had become cheap enough. It's a Toshiba drive. So I guess I'll go old school for this lol!
Re: LocalAI
Posted: Tue Mar 10, 2026 9:06 am
by Grogan
Fuck... I was considering a 4 TB NVMe for my gaming drive, to give myself more space. But I'm not spending $800.
So memory-based storage is expensive because assholes are hoarding the chips. Magnetic storage is almost unavailable because that's what the datacenters are using for storage.
You just can't have anything anymore.
Re: LocalAI
Posted: Sun Mar 15, 2026 2:02 am
by Zema Bus
I did some more scrounging around and found a few more 2 TB drives. I put those in along with the 4 TB Toshiba. The manufacture date on the Toshiba was July 2021; it's been waiting a while.
AI_machine_drives.jpg
That decal has been on there for a while. This used to be my work machine; the name came from The Expanse ("Spanish for workhorse"). I'm hoping it does some work for me.
blueaimachine1.jpg
blueaimachine2.jpg
I found a very cheap, by today's standards, 4 TB NVMe on Amazon, not all that much higher in price than what I paid last year for one: $288 vs. $250. It was back ordered, but it said it would be shipped and sold by Amazon, ready to ship in 1-2 days. I actually don't know whether this will really happen; I've had similar situations where the item never ships and then I get a refund from Amazon. I thought it was worth a shot at that price. It says it will arrive Thursday.
Re: LocalAI
Posted: Sun Mar 15, 2026 3:02 am
by Grogan
It's from a song by a Canadian metal band (think of them like an artsy fartsy metal band), Rush. A song about a black hole in the Cygnus X-1 system. The ship was called "The Rocinante". It was the name of Don Quixote's horse (yes, cleverly named for "work horse"). I only know that because I liked the song when I was younger and was curious about the name of the ship. That sticker implies that woman is the "work horse".
That's a lot of storage; I thought games needed a lot.
Re: LocalAI
Posted: Sun Mar 15, 2026 3:50 am
by Zema Bus
In The Expanse, when they rename the ship (changing the transponder) to that, Amos said he knew a lady named Rocinante. Then when they painted the name on the hull of the ship, they included that woman.
I'll look up that song; I don't know if I've heard it.
Re: LocalAI
Posted: Sun Mar 15, 2026 4:11 am
by Grogan
Oh, I didn't think you'd be interested or I'd have looked it up. I still have the record album it came from (A Farewell to Kings). It's a cool song, though I don't know if it has much musical value to me anymore. I'm going to give it a listen again. Listen to the intro and then jump ahead to around the 3 minute mark to bypass the monotonous noise (the song actually does start, and it tells a bit of a story and gets into a few nifty musical passages).
P.S. Lyrics:
In the constellation of Cygnus
There lurks a mysterious, invisible force
The Black Hole
Of Cygnus X-1
Six Stars of the Northern Cross
In mourning for their sister’s loss
In a final flash of glory
Nevermore to grace the night…
1
Invisible
To telescopic eye
Infinity
The star that would not die
All who dare
To cross her course
Are swallowed by
A fearsome force
Through the void
To be destroyed
Or is there something more?
Atomized — at the core
Or through the Astral Door —
To soar…
2
I set a course just east of Lyra
And northwest of Pegasus
Flew into the light of Deneb
Sailed across the Milky Way
On my ship, the ‘Rocinante’
Wheeling through the galaxies,
Headed for the heart of Cygnus
Headlong into mystery
The x-ray is her siren song
My ship cannot resist her long
Nearer to my deadly goal
Until the Black Hole —
Gains control…
3
Spinning, whirling,
Still descending
Like a spiral sea,
Unending
Sound and fury
Drowns my heart
Every nerve
Is torn apart….
To be continued…
Where it's continued is on their next album, "Hemispheres". Our traveler emerges during the time of Greek gods and such.
Re: LocalAI
Posted: Sun Mar 15, 2026 4:59 am
by Zema Bus
Thanks Grogan
Re: LocalAI
Posted: Thu Mar 19, 2026 6:19 pm
by Zema Bus
I anticipated this given the low price.
orderCancelled.jpg
Yeah, still available from other sellers for 2-3 times that price.
Re: LocalAI
Posted: Thu Mar 19, 2026 8:36 pm
by Grogan
Bummer... that's pretty irresponsible of them. They figure that if they do things like that, you'll substitute another (higher priced) item from their marketplace, instead of going elsewhere like you would if they'd just told you it was sold out in the first place. Walmart does shit like that too: send out flyers to get you to come to the store and then be "sold out" of the item you came for, with (for example) aisles of more expensive toilet paper etc.
It's time for a reckoning... politicians, tech bro megalomaniacs, financiers that do nothing but suck society's wealth up etc.
Re: LocalAI
Posted: Fri Mar 20, 2026 7:06 am
by Zema Bus
One time Jay of JayzTwoCents was complaining about that: local stores would tell him over the phone that a part was in stock, but it would be out of stock when he drove there. Then another time he called them on his cellphone while he was at the store, and they lied to him again.
Re: LocalAI
Posted: Mon Mar 23, 2026 9:32 am
by Zema Bus
This was generated by an LLM running in LM Studio. I gave it an example image, and it pulled my name from within the image. I created the image more than 30 years ago but added my name 20 years ago. You can see it "thinking" before it finally spits out a response. Total wattage for this was 200.
thinking_screencapture.jpg
User: Which local LLMs can create a cartoon image based on the style of an uploaded cartoon? [img-1]

Assistant:
First, a quick clarification: **LLMs (Large Language Models)** like Llama 3 or Mistral are designed for text (writing, coding, chatting). They cannot generate images directly.
To create cartoons locally based on an uploaded image style, you need to use **Local Image Generation Models**, specifically those based on the **Stable Diffusion** architecture.
Here is the breakdown of the tools and methods you need to achieve this specific "sketchy cartoon" look:
### 1. The Core Software (The Engine)
You cannot do this with a standard LLM chatbot. You need an interface that runs Stable Diffusion locally on your computer (requires a decent NVIDIA graphics card).
* **Automatic1111:** The most popular, user-friendly interface for running Stable Diffusion locally.
* **ComfyUI:** A node-based interface that is faster and more powerful but has a steeper learning curve.
### 2. The Specific Technique: ControlNet
To create an image "based on the style of an uploaded cartoon," you need to use a technology called **ControlNet**. This allows the AI to look at your uploaded drawing and copy its structure (lines, composition) while applying a new artistic style.
* **How it works:** You upload your reference cartoon into ControlNet. The AI uses the lines from that image as a guide but generates a *new* image using the "style" you prompt for.
* **For this specific style:** You would likely use the **"Canny"** or **"Scribble"** pre-processor in ControlNet to match the loose, hand-drawn lines of the Steve Laseman cartoon in your picture.
### 3. The Specific Style: LoRA Training
The image you uploaded is by a specific artist (Steve Laseman / Wild Macaw). To get that *exact* look (the specific way he draws noses, eyes, and coloring), generic Stable Diffusion won't be enough. You would need to use a **LoRA** (Low-Rank Adaptation).
* **What is it?** A small file attached to the main model that teaches the AI a specific style or character.
* **How to do it:** You would download a dataset of Steve Laseman's cartoons, train a LoRA on them using Kohya_ss, and then load that LoRA into Automatic1111 when generating your cartoon.
### Summary: The Best Local Setup for This Task
If you want to run this locally right now without training a model yourself, here is the best approach:
1. **Install:** Download **Stable Diffusion WebUI (Automatic1111)**.
2. **Download Models:** Get a "Cartoon" or "Sketchy" checkpoint model (like *AnythingV5* or specific anime/cartoon checkpoints).
3. **Use ControlNet:** Enable the ControlNet extension in the UI. Upload your reference cartoon and select the **"Canny"** edge detector. This will force the AI to draw over the lines of your uploaded image, mimicking its composition but generating it with a cartoon style.

User: Would an Nvidia 3090 video card in a system with 64 GB DDR4 be sufficient for this task?

Assistant:
**Yes, absolutely.** That is actually an excellent setup for this task—arguably one of the best consumer setups available right now for local AI work.
Here is why your specific hardware configuration (RTX 3090 + 64GB RAM) is perfect for what you want to do:
### 1. The RTX 3090 (24 GB VRAM)
In the world of Local AI, **VRAM (Video Memory)** is the most critical factor. It determines how large a model you can run and whether you can train new models without crashing.
* **The "King" of Consumer AI:** The 3090 has 24GB of VRAM. This is massive compared to standard cards like the RTX 3060 (12GB) or even the newer RTX 4070 (12GB).
* **Why it matters for you:** To create a cartoon in a specific style, you will likely want to use **Stable Diffusion XL (SDXL)** models. SDXL requires more memory than older models. With 24GB VRAM, the 3090 can run SDXL comfortably at full speed without crashing.
* **Training LoRAs:** If you decide later that you want to train a custom "LoRA" (a mini-model) specifically on Steve Laseman's art style so your AI draws exactly like him, the 3090 is powerful enough to do this locally in a reasonable amount of time.
### 2. The 64 GB DDR4 System RAM
While VRAM does the heavy lifting, system RAM acts as the staging area for data.
* **Buffering:** When you are training an AI or generating high-resolution images, the computer loads image datasets into your system RAM before sending them to the GPU. 64GB is a very healthy amount; it ensures you won't run out of memory while loading large folders of reference cartoons.
* **Overflow Protection:** If a model gets too big for the VRAM (which happens with massive models), the software can offload some data into your system RAM. Having 64GB means this "overflow" will be fast and won't cause your computer to freeze or crash.
### Summary
You have a **"High-End Workstation"** level setup for AI generation.
* **Can you generate cartoons?** Yes, instantly.
* **Can you train custom styles?** Yes, easily.
* **Will it run Stable Diffusion WebUI (Automatic1111)?** Yes, perfectly.
You are well-equipped to start this hobby immediately.

User: Would this also work for video creation?

Assistant:
<think>
The user is asking if their hardware setup (RTX 3090 + 64GB DDR4) would be sufficient for *video* creation using local LLMs/AI models.
1. **Analyze the Hardware:**
* **GPU:** NVIDIA RTX 3090 (24 GB VRAM). This is a beast of a card, especially for AI. The 24GB VRAM is the key selling point here.
* **RAM:** 64 GB DDR4. Very generous amount.
2. **Analyze the Task: Local Video Generation.**
* Current state of local video generation (as of late 2023/early 2024):
* Models like Stable Video Diffusion (SVD), AnimateDiff, and newer models like Sora (not public yet) or Gen-2.
* These tasks are significantly more computationally intensive than image generation.
* **VRAM Requirements:**
* Generating a short video clip (e.g., 4 seconds at 1080p) requires loading multiple frames into memory simultaneously to maintain temporal consistency. This eats VRAM fast.
* Training video models is even harder, but the user likely means *inference* (generating videos from prompts/images).
* **System RAM:** Video processing often involves buffering large amounts of data. 64GB is good, but the GPU VRAM is usually the bottleneck for video generation speed and resolution.
3. **Evaluate Suitability:**
* **Yes, it will work.** The 3090 is widely considered the "king" of consumer AI hardware specifically because of that 24GB VRAM. It allows for higher resolutions and longer durations than cards with less memory (like a 16GB or 12GB card).
* **Limitations:** Even with a 3090, local video generation is slow compared to cloud services. A 5-second clip might take minutes to generate. It's not real-time.
* **Software:** Tools like ComfyUI (with AnimateDiff nodes) or Automatic1111 (with SVD extensions) are the standard ways to do this locally.
4. **Formulate the Answer:**
* **Direct Answer:** Yes, it will work very well.
* **Why?** Reiterate the importance of VRAM (24GB is huge for video).
* **What can you do?** Mention specific tools like AnimateDiff and Stable Video Diffusion.
* **Caveats/Expectations:** Manage expectations about speed (it's slow) and resolution (usually lower than 1080p unless using upscaling tricks).
* **Comparison:** It's better than most consumer cards, but still slower than a cloud server with an A100/H100.
5. **Refine the Tone:** Keep it encouraging but realistic about the "local" aspect (it takes time to render).
6. **Self-Correction/Check during drafting:**
* *Wait, did I mention LLMs for video?* The user asked "Would this also work for video creation?" in the context of their previous questions about LLMs/AI models. While they said "LLM" earlier (which is technically incorrect for images), they likely mean "AI tools." I should clarify that *video generation* uses diffusion models, not text-only LLMs, but the hardware handles both.
* *Is 64GB RAM enough?* Yes, absolutely. Some people run into issues with <32GB when doing heavy video processing or training, so 64GB is a safe buffer.
7. **Final Output Structure:**
* Direct confirmation (Yes).
* Explanation of why (VRAM focus).
* Specific tools to use (AnimateDiff, SVD).
* Reality check on speed/quality.
</think>
**Yes, absolutely.** Your setup is actually one of the best consumer configurations available for local video generation.
While image generation is computationally heavy, **video generation is significantly more demanding** because the AI has to generate multiple frames in a sequence while keeping them consistent (so the character doesn't morph into someone else).
Here is why your specific hardware will handle it well:
### 1. The VRAM Bottleneck (The RTX 3090)
Video generation models (like **Stable Video Diffusion** or **AnimateDiff**) load multiple frames of video into memory simultaneously to calculate motion. This consumes VRAM much faster than static images.
* **24GB is the Sweet Spot:** Most consumer cards have 12GB or 16GB of VRAM, which often forces you to generate very short clips (2-3 seconds) at low resolution before crashing. The **RTX 3090's 24GB** allows you to run higher resolutions and slightly longer durations without running out of memory.
### 2. System RAM (64 GB DDR4)
Video processing involves buffering large amounts of data between the CPU and GPU.
* **Plenty of Headroom:** While 32GB is often considered "minimum" for serious AI work, having **64GB** ensures that you won't hit a bottleneck while loading datasets or running complex workflows in software like ComfyUI.
### What You Can Do With This Setup Locally:
With this hardware, you can run the following popular local video tools:
* **AnimateDiff:** This is currently the most popular way to create stylized animations locally. It allows you to take a static image (like your cartoon) and turn it into a 4-second animation loop.
* **Stable Video Diffusion (SVD):** A model specifically designed for turning images into short video clips.
* **Deforum:** Great for creating "trippy" morphing videos, though less useful for character consistency.
### The Catch: Speed and Time
While your hardware is powerful enough to *run* the software, you must manage expectations regarding speed:
* **It will be slow.** Generating a 4-second video clip locally on an RTX 3090 can take anywhere from **5 minutes to 20 minutes**, depending on the resolution and settings. It is not real-time.
* **Resolution limits.** While you *can* generate at higher resolutions, it requires massive VRAM. You will likely find yourself generating at lower resolutions (e.g., 512x512 or 768x768) and then using an "upscaler" to make them look crisp.
**Verdict:** Your system is a powerhouse for this specific hobby. It is fully capable of running the cutting-edge local video generation tools available today.
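For what it's worth, the Stable Video Diffusion route it describes maps to something like this with the Hugging Face diffusers library (an untested sketch; the model ID is the public SVD image-to-video checkpoint, and the file names are placeholders):

```python
# Image-to-video with Stable Video Diffusion via diffusers (untested sketch).
# Assumes the public "stabilityai/stable-video-diffusion-img2vid-xt" checkpoint;
# memory settings would likely need tuning for a 3090's 24 GB of VRAM.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

# SVD conditions on a single still image (1024x576 is its native size).
image = load_image("cartoon_frame.png").resize((1024, 576))

# decode_chunk_size keeps VRAM in check while the frames are decoded.
frames = pipe(image, decode_chunk_size=4).frames[0]
export_to_video(frames, "clip.mp4", fps=7)
```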
I didn't see those models being available in LM Studio; I might have to use a different application for that.
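If it comes to that, the ControlNet approach from the first answer maps to something like this, also with diffusers (again just a sketch I haven't run; the model IDs are the commonly used public checkpoints and the file names are placeholders):

```python
# Style-guided generation over the line work of a reference cartoon,
# using ControlNet's canny edge conditioning (untested sketch).
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Extract the reference cartoon's line work as a canny edge map.
ref = np.array(Image.open("reference_cartoon.png").convert("RGB"))
edges = cv2.Canny(ref, 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

# The edge map constrains the composition; the prompt supplies the style.
out = pipe(
    "a loose, sketchy hand-drawn cartoon, flat colors",
    image=control,
    num_inference_steps=30,
).images[0]
out.save("styled_cartoon.png")
```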
Re: LocalAI
Posted: Sun Apr 05, 2026 10:36 pm
by Zema Bus
I'm now experimenting with Stability Matrix; it's an AI manager for one-click installs of various UIs and models designed for AI art creation (images, videos, etc.). It generated these two images surprisingly fast with minimal prompts:
The only prompt for this one was for an advertisement for sardine flavored ice cream. This could be expanded into something more elaborate. It only took 2 seconds to create.
00000-3898632695.png
This one took maybe 10 seconds. For the prompt I specified a cyberpunk-style wallpaper with large nearby planets in the background. I set the image size to 1920x1080.
00001-626983204.png
Lots of settings to learn how to use, but this was a good start. I installed it through the AUR, but they have a tarball on their website and on their GitHub page.
Re: LocalAI
Posted: Sun Apr 05, 2026 11:08 pm
by Grogan
That's a pretty nice wallpaper. A little abstract on the planets and moons, but I'm sure that could be refined.
So now you're actually doing something useful with this.
Re: LocalAI
Posted: Sun Apr 05, 2026 11:30 pm
by Zema Bus
Yeah, I looked at some prompt examples that could be used to improve on that.
Maybe now I can finally do stuff like create artwork for T-shirts to sell, using the domain I've been sitting on for about 15 years.
Re: LocalAI
Posted: Mon Apr 06, 2026 8:38 am
by Zema Bus
The image creation models have trouble with text; it's a problem they are working on. There are some workarounds, but the simplest may be to just edit the image in Gimp. It understands the text you give it in prompts, but once it creates an image it's a different story. Kind of like someone who can read but can't write. Regular AI can generate text just fine; it just can't create images. Here are a few more. You can see how it borked the text in the peanut butter flavored mayonnaise ads.