Latest Episode
#1932: How Do You QA a Probabilistic System?
LLMs break traditional testing. Here’s the 3-pillar toolkit teams use to catch hallucinations and garbage outputs at scale.
#1931: AI Pipelines: In-Memory vs. Durable State
Why do AI pipelines crash? It’s not the models—it’s the plumbing. We break down how to manage data between stages.
#1930: The Agent Identity Crisis: Workflow vs. Conversation
One automates invoices silently; the other chats in Slack. Why the industry's favorite word means two totally different things.
#1929: Are AI Models Getting Dumber?
We stopped "vibe-checking" our AI scripts and built a science fair for models. Here's how we grade them.
#1928: Stop Wiring Webhooks Directly to Workflows
Untangle the chaos: Why Kong beats manual webhook sprawl for auth, routing, and latency.
#1927: Workers vs. Servers: The 2026 Compute Showdown
Is the persistent server dead? We compare Cloudflare Workers, GitHub Actions, and VPS options for modern app architecture.
#1926: How We Built a 2,000-Episode AI Podcast Engine
We pulled back the curtain on the tech stack behind our 1,858th episode. From Gemini to LangGraph, here’s how we automate quality.
#1925: The Plumbing That Keeps Science From Collapsing
Half of all links in academic papers are dead. Here’s the infrastructure that keeps knowledge from vanishing.
#1924: Build Your Own App Store for Linux and Android
Stop manually copying files. Learn how to host your own authenticated repositories for .deb and APK files using simple static web servers.
#1923: Why Prosumer Automation Shatters at Scale
Prosumer tools like n8n break at scale. Here's why durable execution frameworks like Temporal and Prefect are the enterprise upgrade.