Local AI
Running AI on personal hardware
14 episodes
#346: GPU Scaling: The "Go Wide or Go Tall" Dilemma
Should you use a fleet of cheap GPUs or one powerhouse? Learn the math behind serverless GPU costs, cold starts, and batching efficiency.
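The "wide vs. tall" math the episode teases can be sketched as a back-of-the-envelope cost model. Everything here is an illustrative assumption (prices, batch times, cold-start lengths are made up, not figures from the episode):

```python
# Back-of-the-envelope serverless GPU cost model.
# All numbers below are illustrative assumptions, not real pricing.

def cost_per_request(price_per_hour, seconds_per_batch, batch_size,
                     cold_start_seconds, requests_per_warm_period):
    """Compute cost plus one cold start amortized over a warm period."""
    price_per_second = price_per_hour / 3600
    compute = price_per_second * seconds_per_batch / batch_size
    cold = price_per_second * cold_start_seconds / requests_per_warm_period
    return compute + cold

# "Tall": one big GPU, large batches, slow cold start (big model load).
tall = cost_per_request(4.00, 2.0, 16, 60, 1000)

# "Wide": many cheap GPUs, batch of 1, quicker cold start each.
wide = cost_per_request(0.60, 1.5, 1, 15, 1000)

print(f"tall: ${tall:.6f}/req  wide: ${wide:.6f}/req")
```

With these made-up numbers batching wins, but shift the traffic pattern (fewer requests per warm period) and the cold-start term starts to dominate, which is the crux of the dilemma.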
#168: Digital Vaults: The Mainstream Rise of Air-Gapped AI
Discover why air-gapping is going mainstream in 2026 and how organizations are securing local AI models using "digital vaults."
#110: Building the Ultimate Local AI Inference Server
Learn how to build a high-performance local AI server for agentic coding, from dual-GPU PC builds to the power of Mac's unified memory.
#82: Why GPUs Are the Kings of the AI Revolution
From video game dragons to digital brains: Herman and Corn explain why your graphics card is the secret engine behind the AI boom.
#75: The Future of Local AI: Stable Diffusion vs. The New Guard
Is Stable Diffusion becoming a relic? Corn and Herman debate the rise of Flux, the privacy of local AI, and the future of open-source generation.
#55: Running Video AI at Home: The Real Technical Challenge
Hype vs. reality: can your GPU actually handle it? We dive into the technical challenges of running video generation models at home.
#41: Local AI Unlocked: The Power of Quantization
Unlock powerful AI on your device! We demystify quantization, the ingenious trick making local AI a reality.
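The core trick the episode demystifies can be shown in a few lines: map 32-bit floats to 8-bit integers with a scale factor, trading a little precision for a 4x smaller model. This is a minimal sketch of symmetric int8 quantization; real runtimes use more sophisticated per-block schemes:

```python
# Minimal symmetric int8 quantization sketch (illustrative only).

def quantize(weights):
    """Map floats to int8 range [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [x * scale for x in q]

w = [0.81, -1.27, 0.05, 2.54]
q, s = quantize(w)
print(q)                 # small ints: 4x smaller than float32
print(dequantize(q, s))  # close to, but not exactly, the originals
```

The rounding error per weight is at most half the scale, which is why quantized models stay usable: most weights survive the round trip nearly intact.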
#40: Unlocking Local AI: Privacy, Creativity & Compliance
Local AI: privacy, creativity, and compliance. Discover why keeping AI close to home is more than a trend.
#38: AI Supercomputers: On Your Desk, Not Just The Cloud
AI supercomputers are landing on your desk! Discover why local AI is indispensable for enterprises facing API costs, latency, and privacy concerns.
#34: Red Team vs. Green: Local AI Hardware Wars
NVIDIA's CUDA rules AI, leaving AMD users battling a "green wall." We explore the hardware wars and the thorny paths forward.
#27: AMD AI: Taming Environments with Conda & Docker
Tired of AI environment headaches on AMD? We demystify Conda, Docker, and host environments to unlock your GPU's full potential.
#25: GPU Brains: CUDA, ROCm, & The AI Software Stack
Unraveling how GPUs power AI. We dive into CUDA, ROCm, and the software stack that makes it all think.
#17: Cloud Render Superpowers: Local Edit, Remote Muscle
Unleash cloud superpowers! Edit locally, render remotely on data-center GPUs like the NVIDIA A100.
#2: Local STT For AMD GPU Owners
AMD GPU? No problem! Dive into local AI adventures like on-device speech-to-text.