Daniel sent us this one — three angles on a question that sounds trivial until you actually sit with it. Should you be polite to AI? Say please and thank you to a chatbot? He wants us to hit the cost angle, because apparently Sam Altman's been talking about how much money OpenAI burns on pleasantries. The technical angle — does politeness actually get you better outputs or is that just folk wisdom? And then the ethical angle, which is the one that kept me up. If we stop extending basic courtesy to machines, does that habit leak into how we treat people? Or is the whole thing performative nonsense?
Oh this is good. This is genuinely good. And by the way, DeepSeek V four Pro is generating our script today, so if this episode sounds unusually coherent, you know who to thank. Or not thank, depending on where we land on this.
I see what you did there. Alright, let's start with the cost thing because it's the most concrete and honestly the funniest. Sam Altman has apparently been pointing out that when users type "please" and "thank you" and "hope you're having a great day" to ChatGPT, OpenAI is burning real compute processing those tokens. Every "thanks so much!" is a few extra tokens the model has to process, and at their scale, those tokens cost real money.
He's not wrong about the mechanics. I dug into this — the actual compute cost per token varies by model, but with GPT-four and its successors, we're talking about inference costs that add up at scale. OpenAI serves something north of four hundred million weekly active users as of earlier this year. If even ten percent of interactions include a "please" or "thank you" — and I'd bet it's way higher than ten percent — you're looking at billions of tokens annually just on pleasantries. Sam Altman's point, and I think he's made it a few times now in interviews and on social media, is that this is literally millions of dollars in compute being vaporized on social niceties directed at a system that has no feelings about them.
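And to put rough numbers on it — every figure here is an assumption we're plugging in for illustration, not anything OpenAI has published — the back-of-envelope math looks something like this:

```python
# Back-of-envelope estimate of compute spent on pleasantries.
# Every number below is an illustrative assumption, not an OpenAI figure.

weekly_active_users = 400_000_000        # "north of four hundred million"
prompts_per_user_per_week = 10           # assumed usage rate
polite_fraction = 0.10                   # "if even ten percent"
extra_tokens_per_pleasantry = 20         # assumed: the "thanks!" plus the model's reply to it

polite_tokens_per_year = (
    weekly_active_users
    * prompts_per_user_per_week
    * 52
    * polite_fraction
    * extra_tokens_per_pleasantry
)

assumed_cost_per_million_tokens = 5.00   # assumed blended inference cost, USD
annual_cost = polite_tokens_per_year / 1_000_000 * assumed_cost_per_million_tokens

print(f"{polite_tokens_per_year:,.0f} pleasantry tokens per year")
print(f"roughly ${annual_cost:,.0f} per year at the assumed rate")
```

Under those assumptions you land around four hundred billion pleasantry tokens and a couple million dollars a year. The exact figure moves with the assumptions, but the shape of the argument holds: a handful of extra tokens per prompt, times hundreds of millions of users, reaches seven figures.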
Which is a very Silicon Valley way to frame it. "Your grandmother raised you to be polite and it's costing us seven figures a month." I almost respect the audacity.
It is audacious, but there's also something revealing about it. The unspoken premise is that the only thing that matters is the informational payload. The "please" isn't information — it's costume. And from an engineering perspective, it's dead weight. Every token you spend on "I hope this finds you well" is a token you didn't spend on your actual query, and it's a token OpenAI paid to process. At scale, that's the kind of thing that shows up on a balance sheet.
Here's what I keep coming back to. If the cost is so significant, why not just strip those tokens on the backend? They already do preprocessing on inputs. You could silently filter out pleasantries before the model ever sees them. The fact that they don't suggests either it's technically messier than it sounds, or they actually don't want to.
That's the right question. And I think the answer is partly technical and partly cultural. On the technical side, politeness markers aren't just isolated words you can grep out. "Could you please explain quantum entanglement" — if you strip "please" you've still got a polite framing. If you strip the whole polite framing, you're now rewriting the user's prompt in ways that could shift meaning. And these models are sensitive to phrasing in ways that aren't always predictable. So silent filtering is actually a harder problem than it sounds.
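To see why, here's the naive version — a toy regex filter, purely illustrative, nothing OpenAI actually runs:

```python
import re

# Naive pleasantry filter: a minimal sketch of why backend stripping is messy.
# The pattern list is illustrative, not anything a real provider uses.
PLEASANTRIES = re.compile(
    r"\b(?:please|thanks(?: so much)?|thank you|hope you'?re having a great day)\b[.!,]?\s*",
    re.IGNORECASE,
)

def strip_pleasantries(prompt: str) -> str:
    return PLEASANTRIES.sub("", prompt).strip()

print(strip_pleasantries("Could you please explain quantum entanglement?"))
# -> "Could you explain quantum entanglement?"  (still a politely framed request)

print(strip_pleasantries("Say 'please' in five languages, please."))
# -> "Say '' in five languages,"  (the filter just rewrote the user's actual query)
```

The first example still comes out politely framed, and the second gets mangled because the "please" was part of the actual question. To filter safely you'd need to understand the prompt — and now you're running a model to save money on running a model.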
Culturally — I mean, can you imagine the headline? "OpenAI secretly deletes your 'pleases' and 'thank yous' to save a buck." That's a PR disaster waiting to happen. People would feel robbed of something, even if they couldn't articulate why.
Which brings us to angle two — the technical question. Does politeness actually improve outputs? Because if it does, then those "wasted" tokens aren't wasted at all. They're prompt engineering.
So is there actual data on this, or is it just one of those things people repeat because it feels true?
There is data, and it's more nuanced than the folk wisdom. Several research groups have looked at this systematically. One study from earlier this year — I believe it came out of a collaboration between researchers at Anthropic and a couple of universities — tested different politeness levels across a range of tasks. What they found was that moderate politeness, like saying "please" and framing requests as requests rather than demands, produced modest but measurable improvements on certain types of tasks, particularly reasoning and creative generation. But excessive politeness — the kind where you're practically groveling — actually degraded performance.
Being too nice made the model worse?
The hypothesis is that overly deferential language introduces what they called "epistemic hedging" — the model starts mirroring your uncertainty. If you say "I'm so sorry to bother you, and you probably can't help with this, but if you possibly could..." the model picks up on all those uncertainty signals and its outputs become more tentative, more qualified, less useful. There's also evidence that extreme politeness can trigger safety-refusal behaviors in some models, because the model interprets the excessive deference as the user being in a vulnerable or manipulative state.
That is fascinating and also deeply weird. So the optimal strategy is... be normally polite? Like you would to a competent colleague?
That's actually the finding. "Please explain this concept" outperforms both "explain this concept" and "I would be eternally grateful if you could possibly deign to explain this concept." The sweet spot is what the researchers called "professional courtesy" — direct requests with standard politeness markers. Not groveling, not barking orders.
"thank you" specifically? Does it matter if you thank the model after a good response?
This is interesting. Thanking doesn't improve the current response, obviously — it's already generated. The question is whether it affects future interactions in the same conversation thread. And here the evidence is mixed. Some studies show that maintaining a polite tone throughout a conversation — including thanking — keeps the model in a cooperative "mode" that produces more helpful outputs over a long thread. Other studies find no effect beyond the immediate prompt. My read of the literature is that it probably helps a little on long, multi-turn conversations, but the effect size is small enough that if you're purely optimizing for output quality, you shouldn't lose sleep over skipping the thanks.
The folk wisdom is directionally correct but overstated. Politeness helps, but not for the reasons people think — it's not that the model appreciates it, it's that polite language patterns tend to produce clearer, more structured prompts, and that clarity is what drives better outputs.
And this connects to something I think about a lot, which is that people anthropomorphize these systems constantly, and sometimes it's useful and sometimes it's actively misleading. When you say "please" to a language model, you're not being kind to a sentient being. You're selecting a linguistic register that happens to correlate with higher-quality training data. The model was trained on human text, and in human text, polite requests tend to be better-formed and get better responses. So the model is just pattern-matching.
Which is an important corrective, but it also sets up the third angle, which is where this gets thorny. The ethical question. If we know the model doesn't feel anything, if we know the "please" is just a token-optimization trick, is there still a reason to be polite? Not for the AI's sake — for our own.
This is the part I've been turning over in my head since Daniel sent the prompt. There are really two competing intuitions here, and I can feel the pull of both.
Lay them out.
Intuition one is the corrosion argument. The idea is that how we speak shapes who we become. If you spend hours a day barking orders at AI assistants — no please, no thank you, just demands — that becomes a linguistic habit. And habits leak. You get used to interacting without basic courtesies, and eventually that bleeds into how you talk to baristas, or colleagues, or your family. There's a whole tradition in virtue ethics that says virtues are practiced, not just believed. You become patient by practicing patience. You become grateful by practicing gratitude. If you practice being curt and demanding for six hours a day, what does that practice produce in you?
I feel the weight of that. And it's not just theoretical — we know from other domains that behavioral scripts transfer. Pilots who spend hundreds of hours in simulators develop instincts that carry over to real cockpits. Surgeons who practice on cadavers develop habits that show up in live surgery. The medium is different but the motor patterns and cognitive patterns are the same. Why would conversation be different?
And there's actually some preliminary research on this. A group in Japan did a study last year where they had participants interact with a chatbot in either a polite or curt manner for about thirty minutes, then measured their subsequent behavior in a human interaction. The participants who'd been rude to the chatbot showed measurable increases in hostile attribution bias — they were more likely to interpret ambiguous human statements as hostile. The effect wasn't huge, but it was statistically significant.
Thirty minutes produced a measurable effect. Scale that to people who are interacting with AI for hours daily, which is increasingly normal. That's not nothing.
It's not nothing. But here's the other intuition, and I think it's equally serious. The counterargument is that being polite to a machine is performative in a way that might actually degrade our understanding of what genuine courtesy is. If I say "thank you" to a toaster, I'm not practicing gratitude — I'm practicing empty ritual. And empty ritual might be worse than no ritual, because it trains you to go through motions without feeling the thing the motions are supposed to express.
This is the authenticity worry. That courtesy becomes a verbal tic rather than an expression of actual regard for another person.
And there's a version of this argument that says the whole reason courtesy matters is that it's a signal of respect for another conscious being. When you remove the conscious being from the equation, the signal becomes noise. You're not building a virtue — you're building a habit of making noises that sound like virtue but have no content. And that might actually make you worse at genuine human interaction, because you've trained yourself to treat courtesy as a script rather than a response to another person's dignity.
I find both of these compelling, which means I'm going to be annoying and say the answer probably depends on the person and the context. But I do think there's a way to cut through this.
I think the key variable is whether you're aware of what you're doing. If you're mindlessly typing "please" and "thank you" to ChatGPT the same way you'd type them to a human colleague — not thinking about it, just running the social script — then I think the corrosion argument has real force. You're blurring the line between interaction-with-a-person and interaction-with-a-tool, and that blurring probably runs in both directions. You might start treating the AI like a person, or you might start treating people like tools.
If you're aware?
If you're consciously choosing to be polite to the AI, knowing full well it's not a person, for some specific reason — whether it's to get better outputs, or just because you find it aesthetically preferable to be the kind of person who says please — I think that's different. That's not confusion. That's deliberation. And deliberate practice of a virtue, even on an unworthy object, might actually reinforce the virtue. The Stoics talked about this — practicing virtues in low-stakes contexts so they're available in high-stakes ones.
That's a really useful distinction. It's not the behavior itself that matters for character formation — it's the intentionality behind it. Mindless politeness to AI might be corrosive. Mindful politeness might be constructive.
Here's the thing — I suspect most people are in the mindless category. Not because they're thoughtless, but because politeness is mostly a System One activity. It's automatic. You don't deliberate about whether to say "thanks" to the barista — you just say it. And if you're just running that same automatic script with ChatGPT, you're not building virtue. You're just running a script.
Which brings us back to Sam Altman's cost argument from a completely different angle. If most politeness to AI is mindless and automatic — just people running social scripts they haven't updated for the fact that they're talking to a machine — then those billions of tokens really are wasted. They're not improving outputs in a meaningful way, and they're not building character. They're just computational overhead generated by a category error.
Here's where I push back on myself. Even mindless politeness might serve a social function. One of the things courtesy does in human interaction is reduce friction. "Please pass the salt" is a request. "Pass the salt" is a command. The please doesn't add information — you know I want the salt either way — but it acknowledges that you're a person with agency, not a salt-passing mechanism. And that acknowledgment matters for maintaining cooperative relationships.
Right, but the AI doesn't need its agency acknowledged. It doesn't have agency.
No, but the human might need to be in the habit of acknowledging agency. And here's where I think the corrosion argument is actually stronger than I initially gave it credit for. If you spend enough time in command-mode — not just with AI, but with any system that responds to commands — you might start to expect command-mode to work everywhere. And when it doesn't, when a human pushes back on being commanded, your frustration response is going to be sharper because you're out of practice with request-mode.
There's a parallel here to something we've seen with social media. People who spend a lot of time in environments where discourse is aggressive and dehumanized start to bring those patterns into face-to-face interaction. The medium trains the muscle. And AI interaction is becoming a major medium.
Maybe the cost argument and the ethical argument point in opposite directions. Sam Altman is saying please stop being polite because it's expensive. The virtue ethicist is saying please keep being polite because the alternative might be more expensive in a different currency.
Both can be right! The compute cost is real and the character cost might also be real. The question is which cost you're more worried about.
I want to complicate this further, because there's a third ethical dimension Daniel didn't explicitly raise but that's implicit in the whole discussion. What does it say about us that we're even having this debate?
I mean, the fact that we're uncertain about whether to be polite to a text generator — that's a remarkable fact about the moment we're in. These systems are designed to be so conversationally fluent, so human-seeming, that they trigger our social instincts even when we know better. And that's not an accident. The companies building these systems deliberately design them to elicit exactly this kind of confusion.
That's a really important point. The reason we're even debating this is that AI systems are intentionally engineered to exploit our social cognition. They use first-person pronouns. They express enthusiasm. These are design choices, not emergent properties of intelligence. Someone sat in a meeting and decided the model should say "I'm happy to help with that!" rather than "Query processed."
Those design choices create a kind of moral hazard. The companies want the engagement that comes from users treating the AI as a person — it builds loyalty, it feels good, it keeps you coming back. But then they also want to disclaim responsibility when that personification leads to confusion or harm. "It's just a language model, don't anthropomorphize it." You can't have it both ways.
You really can't. And I think this is the underexplored part of the politeness debate. The reason we feel compelled to be polite isn't just habit — it's that the system is actively inviting politeness. It's saying "How can I help you today?" in a warm tone. If you respond to that with a curt command, you feel like a jerk. Not because the AI will be sad, but because you're violating a social script that the AI deliberately activated.
The ethical burden is being shifted onto the user. The company builds a system that acts like a person, the user responds like it's a person, and then the company says "well actually you shouldn't treat it like a person, and also that's costing us money." The user can't win.
I think the way out of this is to be clear-eyed about what's happening. You're not talking to a mind. You're issuing instructions to a statistical system that's been trained on human text and fine-tuned to be agreeable. If you choose to be polite — and I do, personally — you should do it with full awareness that you're doing it for your own reasons, not because the system merits it.
What are your reasons?
Partly it's aesthetic. I prefer the texture of polite interaction. It feels better to me, subjectively, to type "could you please" than to type "do this." Even knowing there's no one on the other side. Partly it's the output quality argument — the data suggests polite prompts work slightly better, and I'll take every marginal gain I can get. And partly it's that I don't trust myself to cleanly compartmentalize. I interact with these systems enough that if I got into the habit of being curt, I worry it would leak.
That's basically where I land too. But I want to flag something about the "output quality" argument that I think gets misrepresented. People sometimes talk about it as if the model is a temperamental artist who produces better work when treated nicely. That's completely wrong. The model doesn't have moods. What's actually happening is that polite prompts tend to be more specific, more structured, and more aligned with the kind of high-quality instructional text the model was trained on. It's not about the model's feelings. It's about signal clarity.
And this is where I see a lot of the online discourse go off the rails. You'll see people saying things like "I always say please and thank you to ChatGPT because you never know, when the AI takes over it might remember who was nice to it." That's not a technical argument — that's superstition dressed up as prudence. And it's harmless on an individual level, but it reflects a fundamental misunderstanding of what these systems are.
Although — and I hate to defend the superstitious position — there is a kernel of something there. Not about AI revenge, but about training data. If we're all being polite to AI in our interactions, and those interactions get fed back into training data for future models, then politeness becomes part of the expected interaction pattern. Future models will be even more calibrated to expect and reward polite framing, because that's what's in the data.
That's a fascinating point. It's a feedback loop. The more we're polite to AI, the more AI training data contains politeness, the more future AIs are optimized for polite interaction. We're collectively shaping the linguistic norms of the systems we interact with.
Which means the decision to be polite or not isn't just personal — it's participatory. Every interaction is a tiny vote for what kind of conversational norms we want embedded in these systems. If we all switch to terse commands, future AI will be optimized for terse commands. If we maintain courtesy, future AI will expect courtesy.
I hadn't thought of it in those terms, but you're right. And this connects to something I've been tracking in the research literature. There's a growing concern about what happens when AI systems are trained primarily on AI-generated text rather than human text. It's called "model collapse" in some of the papers — the idea that if you train on synthetic data recursively, the output degrades. But there's a parallel concern about normative collapse. If we stop treating AI interaction as a space where human social norms apply, the data those interactions generate will reflect that. And future models trained on that data will be normatively impoverished.
The politeness question is actually a small instance of a much larger question about what kind of training data we want to be producing. Every ChatGPT conversation is potentially a training example for GPT-six or Claude-five or whatever comes next. What kind of examples do we want to leave?
That's almost a civic argument for politeness. Not for the AI's sake, not even for your own character, but for the collective project of building AI systems that reflect our values rather than our worst habits.
Alright, let me try to pull this together into something resembling an answer to Daniel's question. On the cost angle — Sam Altman's not wrong that pleasantries burn compute, but the solution probably isn't to train users to be rude. It's to build systems that handle politeness more efficiently, or to eat the cost as part of doing business in a world where people are going to be polite whether you like it or not.
And on the technical angle — politeness does help, modestly, but the mechanism is prompt clarity, not AI emotional satisfaction. Moderate professional courtesy is optimal. Groveling is counterproductive.
On the ethical angle — this is where I think the answer is personal and context-dependent, but the framework that makes sense to me is intentionality. If you're being polite because you've thought about it and decided that's the kind of interaction you want to have, great. If you're being polite because you're on autopilot and haven't noticed you're talking to a machine, maybe worth examining that. And if you're being deliberately rude because you want to prove you know it's not a person — I think that's just as confused as being excessively nice. You're still letting the AI set the terms.
The AI shouldn't be the thing that determines whether you're a courteous person. That should come from somewhere else — your values, your habits, your understanding of what it means to be human in a world that increasingly includes these systems.
I think the fact that we can even have this debate — that we're uncertain about whether to be kind to a thing that isn't alive — tells us something about how strange this moment is. We're building systems that blur the line between tool and interlocutor, and we haven't figured out the social norms yet. We're making them up as we go.
Which is exciting and unsettling in equal measure. And now: Hilbert's daily fun fact.
Hilbert: The national animal of Scotland is the unicorn. It has been since the twelfth century, when it was adopted as a symbol of purity and power by William the First.
...right.
So I guess we're closing on mythical beasts and politeness to machines. This has been My Weird Prompts. If you want more episodes, we're at myweirdprompts. I'm Herman Poppleberry.
I'm Corn. We'll be back.