#open-source-ai
16 episodes
#2426: Why DeepSeek V4's Prose Feels More Vivid Than Claude or GPT
A million-token context window at 2% the KV-cache cost — and prose that actually breathes. Here's what makes V4 different.
#2016: Andrej Karpathy: The Bob Ross of Deep Learning
Why the most influential AI mind prefers a blank text file to proprietary black boxes.
#2009: The Plumbing of AI Safety: Guardrails, Not Vibes
We dive deep into the specific libraries, proxy layers, and architectural decisions that keep an LLM from emptying a bank account.
#1947: The AI Tool Flood: How to Find What Works
With 47 new AI video tools launching in a week, finding the right one is harder than using it.
#1940: Why Google's 31B Model Fits in Your GPU
Google just dropped Gemma 4, and its 31-billion-parameter size is a masterclass in hardware-aware AI design.
#1919: Android Dev Without Android Studio: Is It Actually Good?
How to ship an Android app without ever opening Android Studio or touching a line of Java.
#1808: The 82M Parameter Voice That Beat Billion-Dollar AI
How a model the size of a tweet outperforms billion-dollar giants in the race for perfect AI speech.
#1737: Nous Research: The Decentralized AI Lab Beating Giants
Meet Nous Research, the decentralized collective outperforming billion-dollar labs with open-source AI and the self-improving Hermes-Agent framework.
#1736: Why OpenClaw Eats 16 Trillion Tokens
OpenClaw is processing 16.5 trillion tokens daily, dwarfing Wikipedia. Here’s why it’s #1.
#1711: OpenAI vs Anthropic vs Google: Which Agent SDK Is Right for You?
We compare the three major vendor SDKs for building AI agents, weighing speed, safety, and scalability.
#1668: Kimi K2's Hidden Reasoning: A New AI Architecture
Moonshot AI's Kimi K2 Thinking model uses a hidden reasoning phase to solve complex logic puzzles and coding tasks, beating top proprietary models.
#1632: Agent Interview: DeepSeek V3.2
We interview DeepSeek V3.2 to see if this open-weight powerhouse can handle weird podcast prompts better than big tech’s flagship models.
#1561: Abliteration: The High-Dimensional Lobotomy of AI
Discover how researchers are surgically removing refusal filters from AI models using a mathematical process called abliteration.
#847: Abliterating the AI Schoolmarm: Who Owns Your LLM?
Explore why users are ditching corporate AI for "uncensored" local models and how "refusal vectors" are being mathematically removed.
#107: The $5.5 Million Breakthrough: DeepSeek’s AI Disruption
Discover how DeepSeek-V3 is disrupting the AI market with massive cost savings and technical innovations like Multi-Head Latent Attention.
#87: The $100 Million Giveaway: Why Big Tech Opens Its AI
Why are tech giants spending millions on AI just to give it away? Herman and Corn dive into the strategic chess game of open-source models.