#2242: AI as Your Ideation Blind Spot Spotter

How to use AI not to answer questions you already know to ask, but to surface possibilities your expertise has made invisible to you.

Episode Details
Episode ID
MWP-2400
Duration
28:04
Pipeline
V5
TTS Engine
chatterbox-regular
Script Writing Agent
claude-sonnet-4-6

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

Using AI to Escape Your Own Expertise

The deeper your expertise in a domain, the worse your imagination gets about alternatives to it. This counterintuitive finding comes from research on cognitive entrenchment, the psychological phenomenon in which mastery of a domain narrows the frameworks you can think within. Combine that with functional fixedness (the inability to see an object or idea used for anything other than its primary purpose) and the availability heuristic (defaulting to solutions you've already seen work), and you get a trap: brainstorming feels like thorough exploration, but it is actually just very fast retrieval from a tiny corner of possibility space.

The AI Advantage

Large language models have been trained on an extraordinarily broad corpus—career paths, industries, problem-solving frameworks, skill combinations that no individual human could simultaneously hold in working memory. More importantly, the model has no ego investment in your existing trajectory. A human advisor anchors to your first self-description. A model, if prompted correctly, doesn't have to.

There's also a fundamental difference between search and generation. Google finds what you already know to look for—you have to formulate the query, which means you already have to know approximately what you want. AI can generate what you didn't know to ask for. That's a different relationship with possibility entirely.

Prompting for Revelation, Not Validation

The most common mistake is the "polite prompt." You hand the AI your CV and ask, "What careers might suit me?" The model, being helpful, gives you a sensible extrapolation of your existing path. It's not wrong—it's just not what you actually wanted, even if you thought it was.

The fix is explicit constraint-breaking. Instead of asking what careers suit you, tell the model: Ignore the career path I seem to be on. What are ten trajectories someone with these skills and experiences might pursue that I have probably never considered? That single sentence changes the output dramatically. You're giving the model permission to stop being polite about your choices.
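
To make this concrete, here is a minimal sketch of the constraint-breaking prompt wired into an API call. It assumes the OpenAI Python SDK and the model name gpt-4o purely for illustration; any chat-capable model and client works the same way, and cv.txt stands in for your own CV.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

with open("cv.txt") as f:  # placeholder: your CV as plain text
    cv_text = f.read()

# The key move: explicitly license the model to discard your trajectory.
constraint_breaking_prompt = (
    f"Here is my CV:\n\n{cv_text}\n\n"
    "Ignore the career path I seem to be on. What are ten trajectories "
    "someone with these skills and experiences might pursue that I have "
    "probably never considered?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute whatever model you use
    messages=[{"role": "user", "content": constraint_breaking_prompt}],
)
print(response.choices[0].message.content)
```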

Another powerful structure is the inversion prompt, borrowed from Charlie Munger's inversion thinking framework. Ask: What would be the most counterintuitive career move for someone with my background? What paths would most people with my CV never consider, and why might those actually be a good fit? The word "counterintuitive" signals to the model that you want the unexpected answer.
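
The inversion prompt drops into the same scaffolding; only the user message changes. The phrasing below is one possible wording, not a canonical template (this reuses client and cv_text from the sketch above).

```python
inversion_prompt = (
    f"Here is my CV:\n\n{cv_text}\n\n"
    "What would be the most counterintuitive career move for someone with "
    "my background? What paths would most people with my CV never consider, "
    "and why might those actually be a good fit?"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": inversion_prompt}],
)
print(response.choices[0].message.content)
```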

The Quality of Input Matters Enormously

Garbage in, garbage out is too mild a way to put it. The richer the context you provide, the better the ideation. Don't just paste job titles and dates. Include the texture of your experience: what energized you in each role versus what drained you, which projects you went deep on when you had discretion, what you did outside of work that never made it onto your resume. The model pattern-matches against whatever you give it. A sanitized professional summary produces sanitized professional trajectories.

Before generating options, ask the model to read between the lines: Based on the pattern of roles I've taken, what does this suggest about what I actually value, even if I've never articulated it? This functions as a thinking mirror—reflecting back patterns in your own history that you may not have consciously noticed. When it works, it's unsettling in the best way: the model surfaces things that feel true in a way you hadn't quite put words to.
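
One way to wire the mirror step into the flow is to run it first and carry its answer forward as conversation context, as in this sketch (same assumed client and cv_text as the earlier examples):

```python
mirror_prompt = (
    f"Here is my CV:\n\n{cv_text}\n\n"
    "Based on the pattern of roles I've taken, what does this suggest about "
    "what I actually value, even if I've never articulated it?"
)

# Step 1: ask the model to read between the lines before generating options.
mirror = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": mirror_prompt}],
).choices[0].message.content

# Step 2: feed the inferred values back in so the ideation builds on them.
followup = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": mirror_prompt},
        {"role": "assistant", "content": mirror},
        {
            "role": "user",
            "content": "Given those inferred values, suggest ten trajectories "
                       "I have probably never considered.",
        },
    ],
).choices[0].message.content
print(followup)
```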

Structural Prompting Strategies

The Expert Panel Prompt: Instead of asking for one perspective, ask the model to simulate multiple distinct epistemic frameworks. Try: Respond as five different advisors—a venture capitalist, a career coach, a philosopher, a military strategist, and a creative director—and each should give me one career idea based on my CV that the others would not think of. Each advisor has different priors about what counts as a good move. The VC thinks about leverage and scalability. The philosopher thinks about meaning and coherence. The strategist thinks about positioning. They won't give you the same answer, and the model can hold these frameworks coherently if you're explicit about what each advisor cares about.
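
Here is a sketch of the expert panel as a loop, with each advisor's priors made explicit in a system message. The advisor descriptions are illustrative assumptions, not a fixed roster; sharpen them to taste.

```python
# Explicit priors keep the five outputs from collapsing into one generic voice.
advisors = {
    "venture capitalist": "You care about leverage, scalability, and asymmetric upside.",
    "career coach": "You care about fit, growth, and sustainable motivation.",
    "philosopher": "You care about meaning, coherence, and a life well lived.",
    "military strategist": "You care about positioning and resource allocation.",
    "creative director": "You care about originality, taste, and expressive range.",
}

for role, prior in advisors.items():
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": f"You are a {role}. {prior} Give exactly one career "
                           "idea, based on the CV you are shown, that the other "
                           "advisors on a panel would not think of.",
            },
            {"role": "user", "content": f"Here is my CV:\n\n{cv_text}"},
        ],
    ).choices[0].message.content
    print(f"--- {role} ---\n{reply}\n")
```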

The Skills Arbitrage Prompt: Ask the model to identify skills that are undervalued in your current field but highly valued elsewhere. What skills on my CV are undervalued in my current field but highly valued in adjacent or completely different fields? List those fields and explain why. A teacher's classroom management skills (holding attention, keeping competing interests moving toward a shared goal) map onto crisis communications, UX research facilitation, and startup operations. But a teacher staring at a blank page would never generate those connections on their own.

The Hidden Credentials Move: Ask the model to identify experiences that qualify you for roles you'd never apply for because you don't see the connection. A product manager's vendor negotiation and cross-functional alignment experience is often the exact skillset needed for operations roles, business development, or program management in contexts the PM never considered.
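
Both moves are single-turn templates that drop into the same scaffolding as the earlier sketches; the wording here is illustrative rather than canonical.

```python
skills_arbitrage_prompt = (
    f"Here is my CV:\n\n{cv_text}\n\n"
    "What skills on my CV are undervalued in my current field but highly "
    "valued in adjacent or completely different fields? List those fields "
    "and explain why."
)

hidden_credentials_prompt = (
    f"Here is my CV:\n\n{cv_text}\n\n"
    "Which experiences in my background qualify me for roles I would never "
    "apply for because I don't see the connection? Name the roles and spell "
    "out each connection."
)

# Send each through the same chat.completions call used in the earlier sketches.
```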

Cognitive Offloading and Permission

There's a concept called cognitive offloading: by externalizing a cognitive task to a tool, you free up mental bandwidth. When you outsource option generation to AI, you shift your role from generator to evaluator. Evaluation is often easier than generation, and people are frequently better at recognizing a good idea than producing one from scratch.

There's also a permission structure at play. If you thought of an idea yourself, you might dismiss it before it fully forms. But if an AI suggests it, you have to at least look at it. Seeing an external source name a "crazy" option can give you permission to take it seriously in a way self-generated ideas don't always get. Sometimes just seeing the idea written out is enough to trigger different associative thinking—even if you don't pursue the AI's suggestion directly, it might spark a related idea you do pursue.

The Core Insight

The model isn't being clever. It's doing pattern recognition across more data points than you can hold in your head simultaneously. It has seen enough different configurations of skills, roles, and industries to recognize structural similarities you've trained yourself not to see. The prompting strategies work because they give the model explicit permission to access the parts of its training that your expertise has made invisible to you.


#2242: AI as Your Ideation Blind Spot Spotter

Corn
So Daniel sent us this one, and it's a topic I've been wanting to dig into properly. The question is about using AI as an ideation partner — not just for answering questions you already know to ask, but for surfacing possibilities you'd never reach on your own. He uses the CV example: you hand the model your resume and ask it to map out career trajectories you haven't considered, essentially using the AI to find your own blind spots. And the deeper claim underneath that is interesting — that our thinking is more rigid than we'd like to believe, that even when we think we've thoroughly explored a problem, we're probably just iterating inside a narrow band of familiar territory. So the question is: how do you actually leverage this well? What are the prompting strategies that make this work?
Herman
And that rigidity isn't a character flaw; it's actually a structural feature of how expertise develops. There's a term from psychology, cognitive entrenchment (Erik Dane wrote about it in 2010), and the finding is counterintuitive: the deeper your expertise in a domain, the harder it becomes to think outside the established frameworks of that domain. So the person who knows the most about their field is often the worst positioned to imagine alternatives to it. The expert has the most to unlearn.
Corn
Which is a bit grim if you think about it. You spend years becoming good at something and the reward is a narrower imagination.
Herman
It compounds with a few other things too. Functional fixedness is one — the well-documented bias where you can't see an object or idea being used in a way other than its primary purpose. And then there's the availability heuristic, which Kahneman and Tversky documented extensively — we default to solutions that come to mind most easily, which are almost always solutions we've already seen work somewhere. So when you sit down to brainstorm career options, you're not actually exploring the space. You're doing a very fast, very confident search of a tiny corner of it and calling it done.
Corn
And you feel like you've been thorough. That's the trap. The process feels like exploration but it's actually retrieval.
Herman
This is where the AI angle gets interesting to me. Large language models have been trained on an extraordinarily broad corpus — career paths, industries, disciplines, problem-solving frameworks from domains that no individual human could hold in working memory simultaneously. And crucially, the model has no ego investment in your existing trajectory. A human advisor — even a good one — tends to anchor to your first framing of yourself. If you say "I'm a marketing manager," they're going to give you marketing-adjacent advice. The model, if you prompt it correctly, doesn't have to do that.
Corn
There's also something to the difference between search and generation here. Google finds what you already know to look for. You have to formulate the query, which means you already have to know approximately what you want. AI can generate what you didn't know to ask for. That's a fundamentally different relationship with possibility.
Herman
Steven Johnson wrote about something adjacent to this — the concept of the "adjacent possible," which he drew from Stuart Kauffman's work in biology. The idea is that innovation happens at the edges of what currently exists. And AI is exceptionally good at mapping those edges because it has seen so many more of them than any individual person has. It's not just that the model knows more facts — it's that it has been exposed to more configurations of facts, more combinations of skills and roles and industries, and it can pattern-match across those in ways that would take a human years of deliberate study.
Corn
So let's get practical, because I think that's where this gets useful for people. What does a good ideation prompt actually look like versus a bad one?
Herman
The single most common mistake is what I'd call the polite prompt. You hand the AI your CV and you say, "What careers might suit me?" And the model, being helpful, gives you a sensible extrapolation of your existing path. If you've been a software engineer, it suggests senior engineer, then engineering manager, maybe CTO if you're ambitious. It's not wrong. It's just not what you were asking for, even if you thought you were.
Corn
Because you phrased it in a way that invited the obvious answer.
Herman
The constraint-breaking move is to explicitly tell the model to ignore your trajectory. Something like: here is my CV, ignore the career path I seem to be on, what are ten trajectories someone with these skills and experiences might pursue that I have probably never considered? That one sentence — "ignore the career path I seem to be on" — changes the output dramatically. You're giving the model permission to stop being polite about your choices.
Corn
That's almost more psychologically interesting than technically interesting. You're not changing the model's capabilities, you're changing what it thinks you want.
Herman
And models are calibrated to be helpful in a particular way, which defaults toward validation. So you have to actively counteract that. Another version of this is what I'd call the inversion prompt, borrowed from Charlie Munger's inversion thinking framework. You ask: what would be the most counterintuitive career move for someone with my background? What paths would most people with my CV never consider, and why might those actually be a good fit? The word "counterintuitive" is doing a lot of work there — it signals to the model that you want the unexpected answer, not the expected one.
Corn
I want to push on something here. How much does the quality of the input actually matter? Because there's a version of this where someone just pastes their LinkedIn summary and expects revelation.
Herman
It matters enormously. Garbage in, garbage out is almost too mild a way to put it. The richer the context you provide, the better the ideation. And I don't just mean job titles and dates — I mean the texture of your experience. What energized you in each role versus what drained you. Which projects you chose to go deep on when you had discretion. What you did outside of work that you never put on a resume. The model is doing pattern recognition on whatever you give it, and if you give it a sanitized professional summary, it's going to pattern-match against a sanitized professional trajectory.
Corn
So there's actually a prep step before the prompting step.
Herman
There is. And one of the most powerful things you can ask the model to do before it generates options is to read between the lines. Something like: based on the pattern of roles I've taken, what does this suggest about what I actually value, even if I've never articulated it? That's essentially asking the model to function as a kind of thinking mirror — reflecting back patterns in your own history that you may not have consciously noticed. I find that one almost unsettling when it works well.
Corn
Unsettling how?
Herman
Because it surfaces things that feel true in a way you hadn't quite put words to. You'll describe your work history and the model will say something like, "it looks like you consistently gravitated toward roles where you were translating between technical and non-technical audiences, even when that wasn't your official job title" — and you'll think, yes, that's right, and I've never framed it that way. That's not the model being clever. That's the model doing pattern recognition across more data points than you were holding in your head simultaneously.
Corn
Which is also what a good therapist does, to be fair.
Herman
The analogy holds better than you'd expect. There's a concept called cognitive offloading — the idea that by externalizing a cognitive task to a tool or system, you free up mental bandwidth to do something different with it. When you outsource the generation of options to the AI, you shift your own role from generator to evaluator. And evaluation is a cognitively different task — it's often easier, and people are frequently better at it than they are at generation. You can recognize a good idea much more reliably than you can produce one from scratch.
Corn
And there's a permission structure thing happening too, isn't there? Like, if you thought of the idea yourself you might dismiss it before it even fully forms. But if an AI suggests it, you have to at least look at it for a second.
Herman
There's real psychology behind that. Seeing an AI name a "crazy" option can give you permission to take it seriously in a way that self-generated ideas don't always get. You've already pre-dismissed your own wild ideas. The AI hasn't. And sometimes just seeing the idea written out by an external source is enough to trigger a different kind of associative thinking — even if you don't pursue the AI's suggestion directly, it might spark a related idea that you do pursue.
Corn
Let's talk about the expert panel prompt, because I think that's one of the more creative structures here.
Herman
This one is fun. The basic idea is that instead of asking for one perspective, you ask the model to simulate multiple distinct epistemic frameworks simultaneously. So you might say: respond as five different advisors — a venture capitalist, a career coach, a philosopher, a military strategist, and a creative director — and each should give me one career idea based on my CV that the others would not think of. The value isn't just that you get five answers instead of one. It's that each simulated perspective has different priors about what counts as a good move. The VC is thinking about leverage and scalability. The philosopher is thinking about meaning and coherence. The military strategist is thinking about positioning and resource allocation. They're not going to give you the same answer.
Corn
And the model can actually hold those different frameworks coherently? That doesn't just collapse into one generic voice?
Herman
It depends on how you prompt it, but yes — if you're explicit about what each advisor cares about, the outputs stay meaningfully distinct. You're essentially using role-play to access different slices of the model's training. Each role has different associated literature, different associated success metrics, different ways of framing problems. The model has been exposed to all of it, and the role specification is what tells it which slice to draw from.
Corn
There's also a second-order thinking prompt that I think is underused. The "what skills are undervalued here but highly valued elsewhere" framing.
Herman
That one is really practical. You ask: what skills on my CV are undervalued in my current field but highly valued in adjacent or completely different fields? List those fields and explain why. This is particularly powerful for people who feel stuck in a role that doesn't fully use them. A teacher's classroom management skills, for instance — the ability to hold the attention of thirty people with competing interests and keep them moving toward a shared goal — that maps onto crisis communications, UX research facilitation, startup operations. But a teacher looking at a blank page of "what else could I do" would almost never generate those connections on their own, because the mental model of "teaching" doesn't overlap with the mental model of "startup operations" in any obvious way.
Corn
The model has seen enough startup operations and enough teaching to know the structural similarity even if the person hasn't.
Herman
That's the core of it. And there's a related prompt I'd call the "hidden credentials" move. You ask the model to identify experiences in your background that qualify you for roles you'd never apply for because you don't see the connection. The model might look at a product manager's history and identify that three years of vendor negotiation and cross-functional alignment work is essentially the skillset of a diplomat or a labor mediator. Those are not obvious translations. But they're real ones.
Corn
I want to slow down on the constraint removal prompt, because I think it's easy to do this one wrong.
Herman
Yes, and the wrong version is the one where you just say "assume no constraints" and then get a list of fantasy careers that are completely untethered from reality. The useful version is two-step. First: assume I have no financial constraints, no geographic constraints, and no fear of failure — what would you suggest I do with my background? That opens the possibility space as wide as it goes. Then, second step: which of those ideas could actually be pursued with realistic constraints? You're using the first step to identify things worth wanting, and the second step to figure out if there's a path to them. The mistake is stopping at step one and treating it as a result rather than a starting point.
Corn
So the constraint removal isn't about ignoring reality, it's about not letting reality prematurely kill options before you've had a chance to examine them.
Herman
Exactly the right frame. There's also a version of this for the other direction — what I'd call the "steel man your blind spot" prompt. You ask: what assumptions am I probably making about my own career that are limiting me? Based on my CV, what do you think I'm not seeing about myself? That one is particularly confronting, and I think it's the most powerful of the set. You're explicitly asking the model to surface your likely cognitive blind spots. And a well-prompted model will do it — it'll say something like, "you seem to assume that your technical skills are your primary asset, but the pattern of your career suggests your real differentiator is how you communicate technical complexity to non-technical stakeholders, and you may be underinvesting in that."
Corn
And that's the kind of thing a good mentor would tell you after knowing you for five years.
Herman
Which most people don't have access to. That's actually one of the quietly significant things about this use case — it democratizes a kind of strategic reflection that used to require either an unusually self-aware person or access to expensive advisors. The model is not as good as the best human advisor. But it's dramatically better than nothing, and it's available at two in the morning when you're having an existential career moment.
Corn
Let's talk about the iterative structure, because I think people also make the mistake of doing one round and stopping.
Herman
One round of ideation is almost always insufficient. The useful structure is something like: round one, generate twenty wild ideas — don't filter, don't evaluate, just generate. Round two, take the five most surprising ones and go deeper on each. Not the five most plausible — the five most surprising, because those are the ones that wouldn't have survived your internal filter. Round three, for each of those five, ask what would the first ninety days actually look like? That last step is crucial because it converts an abstract idea into a concrete enough picture that you can start to evaluate whether it's actually appealing, not just theoretically interesting.
Corn
The ninety-day prompt also surfaces operational reality in a way that pure ideation doesn't. "You should become a conflict resolution consultant" sounds great until you map out what day one actually involves.
Herman
And sometimes the ninety-day map makes a weird idea look more achievable than you expected, not less. You realize that the first step is just a conversation with someone who already does it, and that's a thing you could do next week. The gap between "crazy idea" and "first concrete action" is often much smaller than it appears from the outside.
Corn
Now I want to be honest about where this breaks down, because I think there's a version of this conversation that sells the technique harder than it deserves.
Herman
Fair. The limitations are real and worth naming. The biggest one is probably sycophancy bias. Some models, if you're not careful, will tend to validate whatever direction you seem to favor. If you've framed your question in a way that implies you're interested in entrepreneurship, you'll get entrepreneurship-tilted answers. And you won't necessarily notice the tilt because the answers will feel responsive. The counter-move is to explicitly ask the model to push back: tell me where my thinking might be wrong, push back on my assumptions, what are the reasons this path might not work for me?
Corn
You have to build in the adversarial prompt.
Herman
You do. Another limitation is that the model doesn't actually know you. It's working from the text you provide, and text is a lossy representation of a person. The emotional texture of your experience, the interpersonal dynamics of your best and worst working relationships, the things that matter to you in ways you've never articulated — none of that is in your CV. The model is doing its best with a partial picture. Which is why the ideation it produces should be treated as a starting point for reflection, not a verdict.
Corn
There's also the hallucination risk on specifics.
Herman
Yes — if the model suggests a specific role or industry or credential path, verify it independently. The model can confidently describe a career trajectory that sounds plausible but is based on outdated or slightly incorrect information about what a particular field actually requires. The ideation layer is reliable. The factual specifics about particular industries or programs or salary ranges — double-check those.
Corn
And there's a deeper philosophical question lurking here that I don't want to skip past entirely. If AI can surface better options for us than we can for ourselves, what does that say about human self-knowledge?
Herman
I've been thinking about this. One reading is that it's a kind of epistemic crisis — we thought we knew ourselves, and it turns out an algorithm can see us more clearly than we can. But I think that reading is too dramatic. What the model is actually doing is pattern recognition across a very large dataset. It doesn't have insight into your subjective experience. It has access to more examples of how people like you have navigated situations like yours than you do. That's a useful thing, but it's not the same as wisdom about your specific life.
Corn
So it's more like having access to a very large, well-organized body of comparable cases than having a deep understanding of you as an individual.
Herman
Right. The analogy I keep coming back to is actuarial tables versus a doctor who knows you. The actuarial table can tell you a lot about what tends to happen to people with your profile. Your doctor, if she's good and knows you well, can tell you things the actuarial table can't. You want both. The mistake is treating the AI ideation output as more authoritative than it is, or as less useful than it is. It's a thinking tool with a particular strength — breadth and cross-domain pattern recognition — and particular limits — no access to your lived experience, no emotional intelligence, no ability to weigh what actually matters to you.
Corn
The ideation is excellent. The evaluation still belongs to the human.
Herman
That's the right division of labor. And I think there's actually something generative about that division. When you're generating options yourself, you're simultaneously evaluating them — and the evaluation tends to kill things early. The moment a weird idea surfaces, you have a reason it won't work, and it dies before it's fully formed. Separating generation from evaluation, which is what using AI for ideation forces you to do, lets ideas survive long enough to be examined properly.
Corn
Let's bring this back to the broader application, because the CV example is compelling but it's just one instance of the technique.
Herman
The same structure applies almost anywhere you have a constrained possibility space and a human brain that's been operating in it for too long. Business strategy is an obvious one — here's our product, what markets are we ignoring that our capabilities could serve? That's essentially the same prompt structure as the CV case. You're asking the model to look at your assets and map them against possibilities you haven't considered.
Corn
What about creative work? Because that's a domain where people are sometimes suspicious of AI ideation — feels like it might homogenize rather than expand.
Herman
The suspicion is worth taking seriously. If you ask the model for plot ideas without constraints, you will sometimes get competent, generic plot ideas. But the prompting strategies change that. If you use the inversion prompt — what plot directions would subvert reader expectations in a way that feels earned rather than arbitrary — you get something different. Or the analogous domain prompt: what problems in completely unrelated fields are structurally similar to the narrative problem I'm trying to solve? A screenwriter stuck on a second-act problem might find that the model draws an analogy from organizational behavior or game theory that unlocks the structure they were missing.
Corn
The key is that the AI isn't writing the thing for you. It's expanding the option space so you have more to choose from.
Herman
And the human's taste and judgment is what selects from that expanded space. Which is actually how the best creative collaborations work between humans too — one person generates, another filters, and the combination produces something neither would have reached alone.
Corn
There's a historical parallel here that I think is worth naming. Brainstorming techniques from the twentieth century — SCAMPER, de Bono's Six Thinking Hats, TRIZ — were all developed precisely because people recognized that unaided human brainstorming was systematically deficient. We've known for decades that the way people naturally think about problems is constrained in predictable ways, and we've developed structured methods to counteract that. AI ideation is essentially a supercharged, on-demand version of those frameworks.
Herman
TRIZ is a particularly interesting comparison. It was developed by Genrich Altshuller starting in the 1940s, originally for engineering innovation, and the core insight was that most inventive problems are variations on a small number of underlying contradiction types, and that solutions to those contradictions recur across domains. So if you're stuck on a problem in one industry, the solution might already exist in a completely different industry. AI is doing something structurally similar but at a scale and speed that TRIZ practitioners could only dream of.
Corn
And without requiring you to learn the TRIZ framework first.
Herman
Which is not trivial. TRIZ is a real intellectual commitment to learn properly. The AI version is accessible to anyone who can write a decent prompt.
Corn
Speaking of which — let's give people something they can actually take away. Is there a framework for how to structure the prompting process?
Herman
There's a useful mnemonic that I think captures the sequence well. Widen, Invert, Drill, Evaluate — WIDE. Start by explicitly asking for ideas outside your obvious trajectory, that's the widening move. Then invert — ask what the counterintuitive or opposite approach would be. Then drill — take the most surprising outputs and go deeper, don't let the interesting ones stay abstract. And then evaluate, which is where you bring your own judgment back in to assess what actually fits your real life, your real constraints, your actual values.
Corn
And the evaluation step is the one you can't outsource.
Herman
That's the line. The model is excellent at generating the space of possibilities. It's unreliable at knowing which possibility is right for you. That judgment requires things the model doesn't have: your history, your relationships, your sense of what a good life looks like. Use the model to find the door. Walk through it yourself.
Corn
One more thing I want to name before we close out the practical piece — the quality of prompting is itself a learnable skill. This isn't just about knowing the techniques in the abstract. The first time you try the "ignore my trajectory" prompt, the output will be okay. The fifth time, after you've learned what context actually matters to include and how to phrase the constraint-breaking instruction, the output will be significantly better.
Herman
There's a real skill curve here. And it's worth being honest that the first few attempts at AI ideation are often underwhelming, not because the technique doesn't work, but because the prompting is underdeveloped. The people who get the most out of this are the ones who treat it as an iterative practice — they refine their prompts based on what the outputs reveal about what they were and weren't asking for.
Corn
Which is its own form of self-knowledge, actually. Figuring out what prompt produces useful results tells you something about what you were actually looking for.
Herman
I hadn't thought about it that way, but yes. The process of learning to prompt well for ideation is partly a process of learning to articulate what you actually want, which is not as easy as it sounds. Most people have never had to specify their own thinking blind spots out loud. The prompting practice forces you to do that.
Corn
By the way, today's episode is being written by Claude Sonnet 4.6, which is a thing that's worth noting out loud given that we're talking about AI ideation. There's something pleasantly recursive about that.
Herman
I appreciate the transparency. And I think it's actually relevant — the fact that this script exists is itself an instance of AI generating something that required cross-domain pattern recognition. Anyway.
Corn
Practical takeaways, then. If you're going to try this today — what's the sequence?
Herman
Start with rich input. Don't paste your LinkedIn summary. Write two to three paragraphs about what has energized you versus what has drained you, what you've chosen to go deep on when you had discretion, what you do outside of work that never makes it onto your resume. Give the model texture, not just titles.
Corn
Then the widening prompt.
Herman
Then the widening prompt — explicitly instruct the model to ignore your existing trajectory and generate options you probably haven't considered. Follow that with the inversion prompt to get the counterintuitive angle. Then the "read between the lines" prompt to see what patterns the model identifies that you may not have consciously noticed. Then take the most surprising outputs — not the most plausible, the most surprising — and drill into those with the ninety-day prompt. And at every stage, include an explicit instruction to push back on your assumptions. Don't let the model just validate you.
Corn
And then evaluate with your own judgment, because that part is yours.
Herman
That part is always yours.
Corn
The thing I keep coming back to with this topic is that it's really about intellectual humility in a practical form. The premise of the whole technique is admitting that your own thinking is more constrained than you realize, and then doing something about it rather than just acknowledging it. That's not a small thing.
Herman
There's also something worth saying about access. The kind of strategic reflection this enables — having your assumptions challenged, having your history examined for patterns you missed, having someone map your skills onto possibilities you'd never considered — that used to require either an unusually honest and well-connected mentor, or expensive coaching, or just being lucky enough to have the right conversation at the right moment. The fact that a version of that is now available to anyone with a decent prompt and twenty minutes is significant. It's not perfect. It's not a replacement for the real thing. But it's a meaningful expansion of who gets access to that kind of thinking.
Corn
On that note — thank you to Hilbert Flumingtop for producing this, as always. And thanks to Modal for the compute that keeps this whole pipeline running — if you're building anything with GPUs at scale, they're worth a look.
Herman
This has been My Weird Prompts. If you want to find all two thousand two hundred and forty-two episodes, head to myweirdprompts.com. We'll see you tomorrow.
Corn
Take your time getting there. Like Corn.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.