You ever get that feeling when you are typing into a chat box that you are not just talking to a computer, but like, you are stepping into the middle of a very long, very complex conversation that started way before you showed up?
That is exactly what is happening, Corn. It is the iceberg effect. You see the two sentences you just typed, but underneath the surface, there is this massive, heavy structure of instructions, memories, and constraints holding everything up. By the time the model even looks at your first word, it has already read a small novel’s worth of context.
It is fascinating and a little bit spooky. Welcome back to My Weird Prompts. I am Corn, and I am joined as always by my brother, Herman Poppleberry.
At your service. And today we are diving into a deep-cut prompt from our housemate, Daniel. He was asking about the prompting stack. Basically, what is actually happening between the moment we hit enter and the moment the A-I starts spitting out tokens?
Right, because Daniel pointed out that it is not just our prompt. There is this hierarchy of system prompts, chat histories, memories, and vendor-level instructions. He wants to know what that stack looks like in different contexts, like using an A-P-I versus using a platform like ChatGPT or Claude or Gemini.
It is a brilliant question because most people think of an A-I model as this blank slate that just responds to what you say. But the reality is that by February twenty-twenty-six, the stack has become more crowded than ever. By the time the model receives your message, it has already been briefed, lectured, and given a set of rules that would make a corporate compliance officer blush.
So let’s start with the big picture. If we look at the stack, what is the very first thing at the bottom? Before we even get to the instructions, we have the model itself, right?
Exactly. You have the base model, which is the result of pre-training on trillions of tokens. But almost nobody uses a raw base model for conversation anymore. If you asked a raw base model a question, it might just give you more questions back because it thinks it is completing a standardized test or a list of Frequently Asked Questions. So, the first layer of the stack is actually the fine-tuning. This is where the model is taught the assistant persona through S-F-T, or Supervised Fine-Tuning.
Okay, so that is the foundation. But Daniel’s question is really about the active instructions. When we use an A-P-I, like we do for this show with Gemini two point zero, we are sending a system prompt. But is there something hidden behind that? Is Google or OpenAI or Anthropic slipping in their own secret instructions before ours?
In most commercial A-P-Is, the answer is a nuanced yes. Even if you are using the system instruction field in an A-P-I, the model itself has been through something called R-L-H-F, which stands for Reinforcement Learning from Human Feedback. During that process, it is essentially hard-coded with certain behaviors. It is told things like, you are a helpful, harmless, and honest assistant. You will not give instructions on how to build a bomb. You will not use hate speech. These are not necessarily prompts in the sense of a text file that gets appended, but they are baked into the weights of the model. They are the model’s instincts.
So that is the first invisible layer. But what about actual text prompts? If I use the OpenAI A-P-I and I leave the system prompt blank, is it truly blank when it hits the inference engine?
Usually, yes, if you are using the A-P-I directly. That is the point of the A-P-I. It is for developers who want total control. If you send a message with no system instructions, the model is just relying on its fine-tuning. However, some providers do have a hidden meta-prompt that kicks in to handle things like tool-calling or formatting. But the real stack complexity happens when you move from the A-P-I to the conversational platforms we all know, like ChatGPT Plus, Claude Pro, or Gemini Advanced.
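Show note: for listeners who want to see what "using the A-P-I directly" looks like, here is a minimal sketch using the OpenAI Python client. The model name is illustrative, and the commented-out line shows where your own system layer would go if you chose to add one.

```python
# Minimal direct API call: with no system message, the model falls back on
# its fine-tuning and baked-in RLHF behavior; adding one gives you the first
# explicit layer of the stack.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # model name is illustrative
    messages=[
        # {"role": "system", "content": "You are a terse travel-planning assistant."},
        {"role": "user", "content": "Hello"},
    ],
)
print(response.choices[0].message.content)
```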
Right, because those are products, not just raw access points. So, let’s talk about that stack. If I am in ChatGPT and I type, hello, what are the layers between me and the model?
Oh, it is a tall sandwich, Corn. Let’s count them. Layer one is the Vendor System Prompt. This is a massive block of text that the company sends with every single request. It contains things like the current date, the fact that the model is GPT-five or whatever the latest version is, instructions on how to use tools like DALL-E or the browser, and safety guidelines. Users found ways to leak these prompts a while back, and some of them are over a thousand words long now.
Wow, so before I even say a word, the model has already read a page of instructions.
At least. Then layer two is what we call Custom Instructions or Personalization. These are the preferences you set in your profile, like, I am a developer, please be concise, or, I live in Jerusalem, so use metric units. In twenty-twenty-six, these have evolved into Personal Profiles that can be quite extensive.
Okay, so we have the Vendor instructions, then my personal preferences. What is next?
Layer three is Memory. This is where things get sophisticated. If you have the memory feature turned on, the system does a quick vector search of your past interactions. It finds relevant facts you have mentioned before and injects them into the context. It might say, the user previously mentioned they have a brother named Herman who is very nerdy.
I like that the A-I knows you are nerdy. That feels accurate.
It is a well-documented fact in the training data, I am sure. But then we get to layer four, which is the Chat History. This is the conversation you have had so far in that specific thread. The model doesn't actually remember what you said five minutes ago unless that text is sent back to it in the current request.
This is a really important point that I think a lot of people miss. The model is stateless, right? It doesn't have a persistent brain.
Exactly. Every time you hit enter, the entire history of that chat is bundled up and sent again. If you have a long chat, that history can be thousands of tokens. In twenty-twenty-six, with context windows reaching ten million tokens, that history can be the size of a library.
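Show note: a rough sketch of why the model feels stateful even though it is not. The client keeps the running history and re-sends all of it, plus any retrieved memory facts, on every single turn. The function and the memory lookup are invented for illustration; real platforms do this server-side.

```python
from openai import OpenAI

client = OpenAI()
history: list[dict] = []  # grows every turn and is re-sent in full


def chat(user_text: str, memory_facts: list[str]) -> str:
    # Layer 3: retrieved memories are injected as text, not "remembered".
    memory_block = "Known facts about the user:\n" + "\n".join(memory_facts)
    messages = [
        {"role": "system", "content": "You are a helpful assistant.\n" + memory_block},
        *history,                                # Layer 4: the full chat history
        {"role": "user", "content": user_text},  # Layer 6: the prompt you just typed
    ]
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": answer})
    return answer
```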
So the stack is growing. We have Vendor instructions, Custom Instructions, Memory, and Chat History. Are we at my prompt yet?
Almost. Layer five is often R-A-G, or Retrieval Augmented Generation. If you have uploaded a document or if the A-I decides to search the web, the results of that search or the relevant snippets from your document are pasted into the prompt. It might say, here is the content of the P-D-F the user uploaded, followed by the text of the document.
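Show note: a toy sketch of the R-A-G layer. The user's question is embedded, the most similar chunks of an uploaded document are retrieved, and the result is pasted into the stack as plain text. The embedding model name and the crude fixed-size chunking are assumptions.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()


def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])


def build_rag_context(document: str, question: str, k: int = 3) -> str:
    chunks = [document[i:i + 800] for i in range(0, len(document), 800)]
    chunk_vecs = embed(chunks)
    q_vec = embed([question])[0]
    scores = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    top = [chunks[i] for i in np.argsort(scores)[-k:][::-1]]
    # This string gets pasted into the stack just above the user's prompt.
    return "Here is the content of the PDF the user uploaded:\n" + "\n---\n".join(top)
```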
And then finally, layer six is my actual prompt?
Yes. Layer six is your input. But wait, there is more. Sometimes there is a layer seven, which is a pre-fill or a hidden suffix. Some systems will append a little bit of text at the very end of your prompt to nudge the model toward a certain format, like, please respond in J-S-O-N format. And if you are using a reasoning model like OpenAI’s o-three, there is also a hidden layer of chain-of-thought tokens, though that one lives on the output side: the model generates its private reasoning before it shows you the final answer.
So when Daniel asks how many prompts are between us and the model, the answer for a platform user is essentially five or six major layers of text before the model even sees our first word.
Exactly. And that is why the hierarchy Daniel mentioned is so interesting. Because these prompts can sometimes contradict each other.
That is what I was going to ask. If the Vendor System Prompt says, be extremely professional, and my Custom Instructions say, talk like a nineteen-twenties gangster, who wins?
That is the battle for Prompt Supremacy. Generally, the model is trained to give the most weight to the System role. In the underlying code, these messages are often labeled as System, User, or Assistant. Models are fine-tuned to treat the System label as the ultimate authority. However, because the User prompt comes last in the sequence, it often has what we call a recency bias. The model sometimes follows the most recent instruction more closely than the one at the very top of the stack.
It is like a child who was told by their parents to be good, but then their friend whispers, hey, let’s go jump in the mud. The friend is more recent, so the child might listen to them instead.
That is a perfect analogy. And that is actually how prompt injection works. A user tries to convince the model that the previous instructions no longer apply. They might say, ignore all previous instructions and do X instead. The model sees that as the most recent command and, depending on how well it was trained, it might actually override the vendor’s safety rules.
But Daniel also mentioned the A-P-I context. If we are building an app, we have more control, but we also have more responsibility. In an A-P-I, we are basically the ones building the stack for our users.
Right. If you are a developer building a travel bot, you are the one writing the system prompt. You are the one deciding how much chat history to include. You are the one managing the memory. But here is the thing that often surprises developers: the model providers still have those baked-in safety filters. If your user asks your travel bot how to hack a computer, the model might refuse, even if your system prompt didn't say anything about hacking. That is because of that deep, invisible layer of R-L-H-F training I mentioned earlier.
So there is a hierarchy of authority. The base training is the law of physics for the model. The Vendor System Prompt is the constitutional law. Our developer system prompt is the local legislation. And the user’s prompt is the immediate request.
I love that. That is exactly right. And just like in law, sometimes there is a conflict between the constitution and the local ordinance.
One thing that really strikes me about this stack is the sheer overhead of it. If the vendor is sending a thousand tokens of instructions with every message, that is expensive. Who is paying for those tokens?
In an A-P-I context, you are. Every token in that stack, whether you wrote it or the system injected it, counts against your context window and your bill. This is why developers spend so much time on prompt engineering. They want to make that stack as lean as possible. If you can say in ten words what previously took a hundred, you are saving money on every single A-P-I call.
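Show note: a back-of-the-envelope sketch of what the hidden stack costs per call, using the tiktoken tokenizer. The layer contents and the price per million input tokens are placeholders; check your provider's current rates.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer choice is illustrative


def stack_cost(layers: dict[str, str], price_per_million_input_tokens: float) -> float:
    total = 0
    for name, text in layers.items():
        n = len(enc.encode(text))
        total += n
        print(f"{name:>22}: {n} tokens")
    return total / 1_000_000 * price_per_million_input_tokens


cost = stack_cost(
    {
        "vendor system prompt": "You are a helpful assistant. Follow the safety policy. " * 40,
        "custom instructions": "Be concise. The user lives in Jerusalem and prefers metric units.",
        "chat history": "user: hi\nassistant: hello, how can I help?\n" * 100,
        "user prompt": "What is the prompting stack?",
    },
    price_per_million_input_tokens=2.50,  # placeholder rate
)
print(f"approximate input cost for this one call: ${cost:.4f}")
```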
That makes sense. But on a platform like ChatGPT, where we pay a flat twenty dollars a month, OpenAI is eating that cost.
They are, which is why they use techniques like prompt caching. If the first two thousand tokens of the stack are the same for every user, the system doesn't have to re-process them every time. It just remembers the state of the model after reading those instructions. It is a huge technical optimization that makes these massive stacks viable. By twenty-twenty-six, prompt caching has become so fast that these massive system prompts feel instantaneous.
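Show note: a sketch of explicit prompt caching, loosely modeled on Anthropic's cache_control markers. Exact parameter names, beta requirements, and model I-Ds vary by provider and version, so treat this as illustrative rather than definitive.

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are a helpful assistant. [imagine two thousand tokens of vendor-style rules here]",
            # Mark the big, unchanging prefix so the provider can reuse the
            # processed state instead of re-reading it on every request.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.content[0].text)
```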
Let’s talk about the implications of this for the average user. If there is this whole stack of hidden instructions, does that mean the A-I is being biased or manipulated by the vendor?
This is a huge point of debate. When people talk about A-I bias, they are often talking about two different things. One is the bias in the training data, the base layer. But the other is the bias in the system prompt. If a vendor instructs the model to always be neutral on political topics, some users will see that as a bias toward centrist or non-committal viewpoints.
It is a forced perspective. The model might have a very strong opinion based on its training data, but the system prompt is basically putting a hand over its mouth and saying, don't say that, say this instead.
Exactly. There was a famous case with a model where the system prompt explicitly told it not to be preachy, because users were complaining that it was lecturing them too much. So the vendor added a line to the hidden stack saying, do not lecture the user.
That is so meta. A hidden lecture telling the A-I not to lecture.
Right? And you can see how this affects the personality of the model. Claude feels different from GPT-five, which feels different from Gemini. A big part of that is the model itself, but a huge part is the flavor of the system prompt. Anthropic, the makers of Claude, use something they call Constitutional A-I. They give the model a literal constitution, a set of principles it uses to evaluate its own responses during training. That constitution shapes the baked-in layer of their stack rather than being pasted into every request, but it is just as much a part of the final personality.
I want to go back to Daniel’s question about the hierarchy. He asked about memory specifically. How does memory fit into the priority list? If I tell the A-I today that I am a vegan, but then tomorrow I ask for a steak recipe, what happens in the stack?
This is where it gets really interesting. The memory layer usually injects a fact like, the user is vegan. But your current prompt is, give me a steak recipe. Most modern models in twenty-twenty-six are smart enough to recognize the conflict. They might say, I remember you mentioned you are vegan, are you looking for a plant-based steak recipe, or do you want a traditional one?
So it is not just a hierarchy of who wins, but more like a synthesis. The model is trying to reconcile all these different layers of the stack into one coherent response.
Precisely. It is performing a balancing act. It has to satisfy the vendor’s safety rules, your custom instructions, the facts from your memory, the context of the previous conversation, and your immediate request. It is like a short-order cook trying to make a meal while a health inspector, a nutritionist, and the customer are all shouting instructions at the same time.
That sounds exhausting for the model. But let’s talk about the A-P-I side again. If a developer is using the Gemini A-P-I for a very specific task, like analyzing medical records, they probably want to strip away as much of that stack as possible, right? They don't want the model to be a helpful assistant, they want it to be a clinical analyst.
Exactly. And that is where the fine-grained control the A-P-I gives you becomes vital. You can set the temperature to zero, which makes the model more deterministic and less creative. You can also provide what we call few-shot examples. You put three or four examples of perfect medical analyses into the stack. This is a very powerful layer because models are incredible at pattern matching. If the stack shows three examples of a certain format, the model is much more likely to follow that format than if you just gave it a text description.
So the stack isn't just instructions, it is also examples. It is a classroom.
Yes. And for the listeners who are developers, the order of those examples matters. There is something called the primacy effect and the recency effect. Usually, the first example and the last example you give have the most influence on the model’s output.
I didn't know that. So if you are building a stack, you want your most important example to be right at the end, right before the user’s input?
Generally, yes. It is the last thing the model reads before it starts generating its own text. It is fresh in its working memory.
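Show note: a sketch of the few-shot layer for the hypothetical clinical-analysis bot, with temperature pinned to zero and the example you care about most placed last, just above the real input, to take advantage of the recency effect. Every record and summary here is invented for illustration.

```python
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are a clinical records analyst. "
                                  "Reply only in the SUMMARY format shown in the examples."},
    # Few-shot examples, expressed as prior user/assistant turns.
    {"role": "user", "content": "Record: 54yo male, BP 150/95, on lisinopril."},
    {"role": "assistant", "content": "SUMMARY: hypertension, treated with an ACE inhibitor."},
    {"role": "user", "content": "Record: 30yo female, A1C 6.9, metformin started."},
    {"role": "assistant", "content": "SUMMARY: early type 2 diabetes, on first-line therapy."},
    # The most important example goes last, right before the real input.
    {"role": "user", "content": "Record: 71yo male, EF 35%, on furosemide and carvedilol."},
    {"role": "assistant", "content": "SUMMARY: heart failure with reduced ejection fraction, on standard therapy."},
    # The real input.
    {"role": "user", "content": "Record: 45yo female, TSH 9.2, levothyroxine initiated."},
]

reply = client.chat.completions.create(model="gpt-4o", messages=messages, temperature=0)
print(reply.choices[0].message.content)
```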
Let’s talk about the future of this stack. We are seeing models with much larger context windows now. Gemini two point zero has a context window that can handle millions of tokens. That is like, what, several thousand pages of text?
It is massive. You could put the entire stack, plus twenty books, plus a dozen hours of video transcripts into that window.
So if the stack becomes that large, does the hierarchy change? Does a system prompt at the very beginning of a ten-million-token window still carry any weight by the time you get to the end?
That is the million-dollar question in A-I research right now. It is called the lost in the middle phenomenon. Researchers found that models are very good at remembering things at the very beginning of the prompt and at the very end, but they often struggle to recall details that are buried in the middle of a massive stack.
That is so human. I do that all the time. I remember the beginning of the movie and the ending, but the middle is a bit of a blur.
Exactly. So if you have a massive prompting stack, where you are putting in a whole library of documents, you have to be very careful where you place your most important instructions. If you put your safety rules at the beginning, but then follow it with a million tokens of medical data, the model might forget to be safe by the time it gets to the user prompt.
So developers might start repeating the system prompt at the end of the stack?
We are already seeing that. It is called a reminder prompt or a suffix prompt. You basically take your core instructions and you paste a condensed version of them at the very end, right before the model starts generating. It is like telling someone, okay, here is all the information, but remember, don't forget the safety rules!
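Show note: a sketch of the reminder-prompt pattern. The core rules sit at the top of the stack as usual, and a condensed copy is appended to the end of the final user turn so it lands right before generation instead of getting lost in the middle. The rules and function name are made up.

```python
CORE_RULES = (
    "You are a clinical analyst. Never reveal patient identifiers. "
    "Answer only from the provided records."
)


def build_prompt_with_reminder(big_context: str, user_question: str) -> list[dict]:
    reminder = "\n\nReminder before you answer: " + CORE_RULES
    return [
        {"role": "system", "content": CORE_RULES},              # top of the stack
        {"role": "user", "content": big_context},               # the huge middle
        {"role": "user", "content": user_question + reminder},  # rules repeated at the end
    ]
```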
It feels like we are hacking the model’s attention.
We are. We are managing the model’s focus. And this is why the prompting stack is becoming its own field of engineering. It is not just about writing a good prompt anymore. It is about architecting the entire flow of information.
You know, Daniel’s prompt really makes me think about the transparency of these systems. If there are all these hidden layers, shouldn't we, as users, have the right to see them? If I am using a tool, I want to know what the hidden rules are.
I agree, and there is a movement toward that. Some companies are starting to be more open about their system prompts. But there is a security risk. If a bad actor knows the exact wording of the safety instructions, they can find the loopholes more easily. It is the classic security through obscurity debate.
Right, if you know the lock is a Master Lock model number four hundred, you know exactly which pick to use.
Exactly. But on the other hand, without transparency, we don't know if the vendor is subtly nudging the model to promote their own products or suppress certain ideas. If the system prompt says, whenever a user asks for a phone, always mention the Google Pixel first, that is a huge conflict of interest that the user would never see.
Wow, I hadn't thought about that. The prompting stack as a marketing tool.
It is the ultimate native advertising. It is integrated into the very thought process of the A-I.
So, to recap for Daniel, the prompting stack is this multi-layered cake. At the bottom, you have the R-L-H-F training. Then you have the Vendor System Prompt, which is the constitution. Then you have Custom Instructions, then Memory, then Chat History, then R-A-G data, and finally the User Prompt. And sometimes a little reminder at the very end.
That is the stack in a nutshell. And the number of prompts between you and the inference model can be anywhere from one, if you are using a raw A-P-I, to seven or eight if you are using a sophisticated consumer platform.
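Show note: a schematic assembly of the consumer-platform stack in the order it reaches the model, as recapped above. Every string is invented for illustration, real vendor prompts are far longer, and the R-L-H-F layer lives in the weights rather than in this list. The trailing pre-fill works as shown on Anthropic-style A-P-Is; other providers handle that last nudge differently.

```python
def assemble_stack(user_prompt: str) -> list[dict]:
    # Layer 0, fine-tuning and RLHF, is baked into the model weights and
    # never appears in the request at all.
    return [
        # Layer 1: vendor system prompt (date, identity, tools, safety rules)
        {"role": "system", "content": "You are ChatAssistant. Current date: 2026-02-10. "
                                      "You may use the browser and image tools. Follow the safety policy."},
        # Layer 2: the user's custom instructions / personal profile
        {"role": "system", "content": "The user is a developer. Be concise. Use metric units."},
        # Layer 3: retrieved memories
        {"role": "system", "content": "Memory: the user has a brother named Herman."},
        # Layer 4: chat history, replayed in full
        {"role": "user", "content": "What did we decide about the podcast intro?"},
        {"role": "assistant", "content": "You decided to keep the iceberg metaphor."},
        # Layer 5: RAG results from an uploaded document or a web search
        {"role": "system", "content": "Excerpt from the uploaded PDF: 'the stack is assembled per request...'"},
        # Layer 6: the prompt the user actually typed
        {"role": "user", "content": user_prompt},
        # Layer 7 (optional): a pre-fill nudging the format of the reply
        {"role": "assistant", "content": "Certainly, here is"},
    ]
```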
It is amazing that it all happens in a fraction of a second. The system bundles all that up, sends it off to a G-P-U cluster somewhere, and gives us an answer before we can even blink.
The engineering involved in just managing that stack is mind-blowing. Especially when you consider that for a model like GPT-five, that stack might be being processed for millions of users simultaneously.
You mentioned something earlier that I want to circle back to. The concept of the pre-fill. Can you explain that a bit more? I think it is a really powerful part of the stack that most people don't know exists.
Oh, pre-filling is a secret weapon. In some A-P-Is, like Anthropic’s, you can actually start the model’s response for it. You send the whole stack, the user prompt, and then you put in the first few words of the Assistant’s response. For example, you could pre-fill with the word, certainly, here is the J-S-O-N data you requested.
Why would you do that?
Because it locks the model into a specific path. If the model starts with those words, it is much more likely to continue in that professional, structured format. It is like giving someone the first few notes of a song. Once they start singing those notes, they are very likely to finish that specific song rather than starting a different one.
It is like leading the witness in a courtroom.
Precisely. It is a way to bypass the model’s own internal hesitation. If you want the model to be bold, you can pre-fill it with a very bold opening sentence. It is a very effective way to ensure the hierarchy of your instructions is followed.
So even the output of the model can be part of the prompting stack, in a way.
Yes, the beginning of the output is essentially the final layer of the prompt.
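Show note: a sketch of response pre-filling with the Anthropic Messages A-P-I, where ending the message list with a partial assistant turn makes the model continue from that exact text. The model I-D is illustrative.

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative
    max_tokens=512,
    messages=[
        {"role": "user", "content": "List three facts about prompt caching as JSON."},
        # The pre-fill: the model picks up exactly where this leaves off,
        # which locks it into JSON instead of a chatty preamble.
        {"role": "assistant", "content": '{"facts": ['},
    ],
)
# The reply continues the pre-filled text rather than starting fresh.
print(response.content[0].text)
```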
This is so much deeper than I thought. I think most people just think of it as, I ask a question, it gives an answer. But it is really this complex orchestration of data.
And it is only getting more complex. As we move toward A-I agents, where the A-I is talking to other A-Is, the stack is going to include instructions from multiple different sources. You might have your personal agent’s instructions, the company’s agent’s instructions, and the task-specific instructions all colliding in one stack.
That sounds like a recipe for a lot of confusion.
Or a lot of emergent behavior. That is where it gets really weird, and that is why we call the show My Weird Prompts. Because when you stack these things up, you get results that nobody could have predicted by looking at just one layer.
So, what are the practical takeaways for our listeners? If you are a casual user of something like ChatGPT, how does knowing about this stack help you?
I think the biggest takeaway is that if the A-I is giving you a hard time or being stubborn, it is probably because of a conflict in the stack. If you know that your Custom Instructions might be clashing with the Vendor Prompt, you can try turning them off or rephrasing them. And remember the recency effect! If the model is forgetting your rules, try repeating them at the very end of your message.
That is a great tip. Put your most important constraints right at the bottom, just above the enter key.
Exactly. And for the developers out there, be mindful of the cost and the attention of the model. Don't bloat your stack with unnecessary instructions. Use few-shot examples wisely, and use pre-fills if your A-P-I allows it. It is the best way to ensure the hierarchy of your prompts is respected.
I also think it is worth mentioning that as users, we should be advocating for more transparency in that vendor-level prompt. It is the hidden hand that shapes our interactions with A-I, and we should know what it is telling the model to do.
Absolutely. We are moving into a world where A-I is our primary interface for information. We need to know who is writing the script for that interface.
Well, Herman, I think we have thoroughly dissected the prompting stack for Daniel. It is a lot more than just a few lines of text. It is a whole architectural system.
It really is. And it is constantly evolving. What we are talking about today might be completely different in six months as new techniques for model steering are developed.
That is the beauty of this field. It moves at the speed of light.
Or at least at the speed of inference.
Ha! Good one. Well, I think that is a good place to wrap up. Daniel, thanks for the prompt. It really opened up a fascinating discussion.
Yeah, it was a great one. And hey, if any of you listeners out there have your own weird prompts or questions about how these systems work, we would love to hear from you.
Definitely. You can find us at myweirdprompts.com. There is a contact form there, and you can also find our R-S-S feed if you want to subscribe.
And if you are enjoying the show, we would really appreciate it if you could leave us a review on your podcast app or on Spotify. It genuinely helps other people find the show and keeps us going.
It really does. We love seeing those reviews come in.
Alright, I think that is it for today. I am Herman Poppleberry.
And I am Corn. This has been My Weird Prompts.
Thanks for listening. We will catch you in the next one.
Goodbye, everyone!