Run Your Own Local AI with Ollama + Open WebUI on Proxmox

Ever wanted to have your own local AI assistant running right from your homelab? Instead of relying on cloud services, you can set up a lightweight yet powerful AI environment inside Proxmox. In this guide, I'll show you how I deployed Ollama (for the models) and Open WebUI (for the interface) in separate LXC containers. This way, you get a clean, modular setup that works even on modest hardware.

My Setup

- Hypervisor: Proxmox VE
- Container 1 (Ollama): Debian 13, 2 cores, 4GB RAM, 8GB swap
- Container 2 (Open WebUI): Debian 11, 1 core, 1GB RAM, 2GB swap (optional)

Step 1: Deploy Ollama

Inside the first LXC container (Debian 13, 2 cores, 4GB RAM, 8GB swap):

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Pull some lightweight models to test:

```bash
ollama pull phi3
ollama pull llama3.2
ollama pull gemma3:270m
ollama pull tinyllama
```

Arena comes preinstalled by default in Ollama.

Models I Installed

Here's what...
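
If you'd like to script the container layout from the My Setup list above rather than click through the Proxmox UI, here is a minimal sketch using Proxmox's pct CLI. The container IDs (201/202), storage pool (local-lvm), rootfs sizes, and exact template filenames are assumptions; check `pveam available` for the templates actually offered on your node.

```bash
# Refresh the template catalog and see what Debian templates are available.
pveam update
pveam available | grep debian

# Container 1: Ollama (Debian 13, 2 cores, 4GB RAM, 8GB swap).
# VMID 201, 16GB rootfs on local-lvm, and the template filename are placeholders.
pct create 201 local:vztmpl/debian-13-standard_13.0-1_amd64.tar.zst \
  --hostname ollama \
  --cores 2 --memory 4096 --swap 8192 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1

# Container 2: Open WebUI (Debian 11, 1 core, 1GB RAM, 2GB swap).
pct create 202 local:vztmpl/debian-11-standard_11.7-1_amd64.tar.zst \
  --hostname open-webui \
  --cores 1 --memory 1024 --swap 2048 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1

pct start 201 && pct start 202
```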
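
One thing to check after Step 1: because Ollama and Open WebUI live in separate containers, the Ollama API has to be reachable over the network. By default Ollama binds to 127.0.0.1:11434, so a sketch like the following is typically needed before the Open WebUI container can talk to it. The IP 192.168.1.50 is a placeholder for whatever address your Ollama container gets.

```bash
# On the Ollama container: make Ollama listen on all interfaces
# instead of only 127.0.0.1 (a systemd override is the usual approach).
sudo systemctl edit ollama
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload && sudo systemctl restart ollama

# From the Open WebUI container: verify the API is reachable.
curl http://192.168.1.50:11434/api/tags   # lists the models you pulled

# Quick end-to-end test against one of the pulled models.
curl http://192.168.1.50:11434/api/generate \
  -d '{"model": "phi3", "prompt": "Say hello in one sentence.", "stream": false}'
```

If `/api/tags` returns your model list, you can point Open WebUI's Ollama connection at that same `http://<ollama-ip>:11434` address.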