Hardware & GPUs
AMD ROCm, NVIDIA CUDA, AI accelerators, Coral TPU
9 episodes
#346: GPU Scaling: The "Go Wide or Go Tall" Dilemma
Should you use a fleet of cheap GPUs or one powerhouse? Learn the math behind serverless GPU costs, cold starts, and batching efficiency.
#110: Building the Ultimate Local AI Inference Server
Learn how to build a high-performance local AI server for agentic coding, from dual-GPU PC builds to the power of Mac's unified memory.
#82: Why GPUs Are the Kings of the AI Revolution
From video game dragons to digital brains: Herman and Corn explain why your graphics card is the secret engine behind the AI boom.
#75: The Future of Local AI: Stable Diffusion vs. The New Guard
Is Stable Diffusion becoming a relic? Corn and Herman debate the rise of Flux, the privacy of local AI, and the future of open-source generation.
#55: Running Video AI at Home: The Real Technical Challenge
Video AI: hype vs. reality. Can your GPU handle it? We dive into the technical challenges of running these models at home.
#38: AI Supercomputers: On Your Desk, Not Just The Cloud
AI supercomputers are landing on your desk! Discover why local AI is indispensable for enterprises facing API costs, latency, and privacy.
#34: Red Team vs. Green: Local AI Hardware Wars
NVIDIA's CUDA rules AI, leaving AMD users battling a "green wall." Explore the hardware wars and thorny paths forward.
#25: GPU Brains: CUDA, ROCm, & The AI Software Stack
Unraveling how GPUs power AI. We dive into CUDA, ROCm, and the software stack that makes it all think.
#2: Local STT For AMD GPU Owners
AMD GPU? No problem! Dive into local AI adventures like on-device speech-to-text.