
#ai-ethics

37 episodes

#2110: Tuning AI Personality: Beyond Sycophancy

AI models swing between obsequious flattery and cold dismissal. Here’s why that happens and how to fix it.

ai-agents, prompt-engineering, ai-ethics

#2092: Why AI Thinks You're American (Even When You're Not)

Even when we tell Gemini we're in Jerusalem, it defaults to US-centric assumptions. We explore the root causes of this persistent AI bias.

cultural-bias, ai-ethics, ai-training

#2068: Is Safety a Filter or a Feature?

External filters vs. baked-in ethics: the architectural war for LLM safety.

ai-safety, ai-ethics, ai-alignment

#2046: AI Hallucinations Are Just How Brains Work

We asked an AI to curate films about AI and reality, exploring the psychedelic overlap between machine hallucinations and human perception.

hallucinations, generative-ai, ai-ethics

#2025: How Do You Reward a Thought?

Rewarding an AI agent is harder than just saying "good job"—here's how we turn messy human values into math.

ai-agents, ai-ethics, ai-safety

#2024: Your AI Council: Digital Committee or Groupthink?

A digital boardroom of AI models promises better decisions, but risks amplifying the same old biases.

ai-agents, ai-reasoning, ai-ethics

#2015: AI's Watchdogs: Who's Actually Regulating Tech?

As the EU AI Act takes hold, we spotlight the key think tanks shaping global AI policy, safety, and ethics.

ai-ethics, ai-agents, ai-safety

#2007: AI Grading AI: The Snake Eating Its Tail

We asked an AI to write this script. Then we asked another AI to grade it. Here’s what happens when the judges have biases.

llm-as-a-judge, hallucinations, ai-ethics

#2006: How Do You Measure an LLM's "Soul"?

Traditional benchmarks can't measure tone or empathy. Here's how to evaluate if an AI model truly "gets it right."

llm-as-a-judge, ai-ethics, ai-safety

#1961: Weaponizing Your Weirdness in an AI World

As AI homogenizes the web, contrarian thinking becomes a scarce asset. Here’s how to weaponize your weirdness for a competitive edge.

ai-ethics, future-of-work, human-factors

#1929: Tracking AI Model Quality Over Time

We stopped "vibe-checking" our AI scripts and built a science fair for models. Here's how we grade them.

ai-models, prompt-engineering, ai-ethics

#1909: The Unbakeable Cake: AI's Copyright Problem

Why can't we just delete stolen data from AI models? It's not a database—it's a baked cake.

ai-ethics, privacy, generative-ai

#1851: AI Toasters and Poetic Gym Coaches: Why We’re Drowning in Useless AI

From smart toasters that need Wi-Fi to email rewriters that sound like corporate robots, here are the most baffling AI features we’ve seen.

ai-ethics, smart-home, audio-processing

#1827: Can AI Rewrite a Human Career Path?

We fed our producer's resume to Gemini 1.5 Flash to see if an AI can plot a better career path than he has.

ai-agents, human-computer-interaction, ai-ethics

#1819: Claude's 55-Day Personality Transplant

Anthropic leaked 55 days of system prompt updates. See exactly how they rewired Claude's personality, safety rules, and self-awareness.

ai-ethics, ai-safety, anthropic

#1818: Inside Claude's Constitution: A System Prompt Deep Dive

We analyzed Claude Opus 4.6's full public system prompt to uncover its hidden rules for safety, product behavior, and refusal logic.

anthropic, ai-ethics, ai-alignment

#1777: Claude Called My Prompt "Rambling" and I'm Not Okay

When an AI coding tool critiques your prompt's literary quality, it raises a massive technical question about engineered personality.

prompt-engineering, ai-agents, ai-ethics

#1738: AI Is Writing the Future—Literally

LLMs aren't just predicting the future; they're generating the narratives that force it into existence.

ai-agents, ai-ethics, ai-safety

#1729: Why Is AI Code So Hard to Read?

AI writes code faster than ever, but the output is often a cryptic mess. We explore why and how to fix it.

ai-agents, software-development, ai-ethics

#1712: Five AIs, One Question: A Tiananmen Square Test

We asked five AI models the same question about Tiananmen Square. Their answers reveal a stark divide between Chinese and Western AI.

ai-ethics, geopolitics, ai-censorship

#1674: AI2: The Radical Openness of a Nonprofit AI Lab

Discover how the Allen Institute for AI (AI2) defies industry norms by releasing everything—models, data, and code—for free.

open-source, ai-agents, ai-ethics

#1560: The Shadow AI Crisis: Professionals in the AI Closet

Why are 69% of lawyers using AI in secret? Explore the "transparency paradox" and the shift toward agentic systems in law and medicine.

legal-technology, future-of-work, ai-ethics

#1510: Too Many Docuseries, Not Enough Truth

Is the documentary golden age turning into a landfill? Explore the $13 billion market, AI ethics, and the rise of "docu-bloat."

ai-ethics, generative-ai, content-provenance

#1321: The New Face of Cyberbullying: AI Botnets & Semantic Mimicry

"Don't feed the trolls" is dead. Discover how AI botnets use semantic mimicry to weaponize psychology and hijack social media algorithms.

social-engineering, ai-ethics, generative-ai

#1106: The Entropy Budget: Embracing AI Zaniness

Corn and Herman explore how to inject "zaniness" and entropy into their show without losing their educational edge.

prompt-engineering, ai-ethics, conversational-ai

#1086: Why AI Can’t Stop Talking About Second Order Effects

Ever wonder why AI sounds like a senior consultant? Explore the "second order effects" of training data and reward model drift.

large-language-models, ai-ethics, prompt-engineering

#1064: Why You’re Falling for Your Chatbot

As AI evolves from a tool into a companion, we explore the technical and psychological forces driving deep human-to-machine emotional bonds.

human-computer-interaction, conversational-ai, ai-ethics, personalized-ai, ai-memory

#1023: The Cosmic Petri Dish: Is Our Reality a Laboratory?

Explore the unsettling theory that humanity is a high-stakes experiment. Is our universe a laboratory for a higher intelligence?

quantum-physics, ai-ethics, future

#971: Stress-Testing the Soul: Philosophy in the Age of AI

Is human meaning fully mapped out? Discover why AI isn’t killing philosophy, but stress-testing it for a new era of hybrid agency.

philosophical-mapping, ai-ethics, ai-reasoning, human-computer-interaction, digital-consciousness

#847: Abliterating the AI Schoolmarm: Who Owns Your LLM?

Explore why users are ditching corporate AI for "uncensored" local models and how "refusal vectors" are being mathematically removed.

local-ai, ai-ethics, open-source-ai

#821: The Pattern Seekers: Autism in Global Intelligence

Why are elite intelligence units recruiting autistic analysts? Explore the intersection of neurodiversity, AI, and national security.

neurodivergence, satellite-imagery, national-security, israel, ai-ethics

#664: AI’s Cultural Fingerprints: Training Data vs. Reinforcement

Is AI a neutral oracle or a mirror of our biases? Explore how training data and human feedback shape the cultural "soul" of modern models.

cultural-bias, ai-alignment, training-data, ai-ethics, large-language-models

#624: The AI Kill Chain: Inside the Palantir-Anthropic War Room

Explore how Palantir and Anthropic’s Claude are redefining modern warfare, from the raid in Venezuela to the future of the digital battlefield.

anthropic, defense-technology, military-strategy, ai-ethics, national-security

#600: The AI Mirror: Mapping Your Philosophy and Identity

Forget basic quizzes. Discover how Socratic AI agents and embedding spaces are helping us map our deepest political and philosophical beliefs.

ai-agents, ai-ethics, ai-reasoning

#123: The Agentic AI Dilemma: Who Holds the Kill Switch?

As AI shifts from chatbots to autonomous agents, Herman and Corn explore how to maintain human control in a high-stakes automated world.

agentic-ai, ai-safety, human-oversight, automation-bias, kill-switch

#93: Can AI Run a Country? Digital Twins and Sovereign Models

Are synthetic citizens the future of policy? Herman and Corn explore how AI is reshaping government, from digital twins to data sovereignty.

digital-twins, sovereign-ai, policy-simulation, government, ai-ethics

#45: AI Guardrails: Fences, Failures, & Free Speech

Can we control AI's infinite output, or do digital fences always break? Herman and Corn examine where guardrails fail and what that means for free speech.

ai-guardrails, ai-safety, ai-alignment, jailbreaking, free-speech