You know, I was watching an old video from about ten years ago of a self-driving car prototype trying to navigate a simple intersection, and it was fascinating how much it struggled with things we do without thinking. It could see the lines on the road and it could see the red light, but it had no idea what to do when a construction worker held up a hand-written sign that said "Follow the Truck." Today's prompt from Daniel is about that exact evolution, from the days when computers just saw pixels to this new era of vision-language models where the AI actually understands the context of what it is looking at. It is a shift from simply detecting objects to actually interpreting intent and navigating the messy, unscripted reality of the physical world.
It is a massive shift, Corn. By the way, I am Herman Poppleberry, and I have been waiting for us to really dig into this because the transition Daniel is asking about is essentially the story of how AI stopped being a blind calculator and started becoming a perceiver. We have moved from task-specific computer vision, where you had one model for stop signs and another for pedestrians, to these unified multimodal systems that treat an image just like a paragraph of text. We are no longer just asking "What is in this image?" We are asking "What does this image mean for the goal I am trying to achieve?"
It feels like the bottleneck has shifted. It used to be about getting enough high-quality sensor data—more cameras, more lidar, more resolution. But now it seems like the real challenge is what the agent does with that data once it understands it. We are seeing this collision between these massive vision-language models and the agentic frameworks like the Model Context Protocol, or MCP, that we talk about so much. It is the difference between having eyes and having a brain that knows how to use them to cross the street.
That is where the friction is happening right now in March of twenty twenty-six. If you look back at the history, for decades, computer vision was dominated by what we call feature engineering. Humans would literally try to describe to the computer what an edge or a corner looked like using mathematical filters. We were trying to hard-code the rules of sight. Then, around two thousand twelve, we had the big ImageNet moment with AlexNet, where convolutional neural networks took over. That was the first time the machine started learning the features itself. But even then, those models were essentially just classifiers. You would feed them an image, and they would scream "Dog!" or "Cat!" with a certain percentage of confidence. They were brilliant at labeling, but they were functionally illiterate when it came to the scene as a whole.
Right, and they were incredibly brittle. If you changed three pixels or rotated the dog forty-five degrees, the model would suddenly think it was a toaster. It was vision without any underlying logic or world model. It was all pattern matching with zero semantic grounding. But then the Transformer architecture changed the game for text, and eventually, researchers realized they could apply that same logic to images. They stopped looking for shapes and started looking for relationships.
The Vision Transformer, or ViT, was the turning point. Instead of looking for specific geometric features through convolution, it breaks an image into a grid of small squares, or patches. Each patch is treated like a word, or a token. This allowed us to use the same powerful attention mechanisms that drive models like Gemini or Claude and apply them to visual data. And that led us straight into the era of Vision-Language Models, or VLMs. We stopped treating vision as a separate engineering discipline and started treating it as just another language for the model to learn.
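The patchification step Herman describes can be sketched in a few lines. This is a minimal illustration assuming a square RGB image whose sides divide evenly by the patch size; the projection matrix is a random stand-in for a trained linear layer, not real model weights.

```python
import numpy as np

def patchify(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Split an (H, W, C) image into a (num_patches, patch*patch*C) matrix."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    # Reshape into a grid of patches, then flatten each patch into one row.
    grid = image.reshape(h // patch, patch, w // patch, patch, c)
    grid = grid.transpose(0, 2, 1, 3, 4)          # (rows, cols, patch, patch, c)
    return grid.reshape(-1, patch * patch * c)    # one "visual word" per row

image = np.random.rand(224, 224, 3)               # a common ViT input size
tokens = patchify(image)                          # 14 x 14 = 196 patches
# Project each flattened patch into the model's embedding width.
embeddings = tokens @ np.random.rand(16 * 16 * 3, 768)
print(tokens.shape, embeddings.shape)
```

Each row of `embeddings` then enters the transformer exactly as a word embedding would, which is the sense in which the patch becomes a "token."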
And that is where the multimodal magic happens. Because once you are treating image patches as tokens, you can project them into the same mathematical space as text tokens. This is what models like CLIP, or Contrastive Language-Image Pre-training, did so well back in the early twenty-twenties. They taught the AI that the word "sunset" and the actual visual pixels of a sunset represent the same concept in a high-dimensional latent space. It is like the model has a universal dictionary where an image of an apple and the word "apple" point to the same entry.
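The "universal dictionary" idea can be shown with a toy cosine-similarity check in a shared latent space. The vectors below are hand-picked for illustration, not real encoder outputs; the point is only that a matching image-text pair scores higher than a mismatched one.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embeddings in a shared 4-d latent space.
image_sunset = np.array([0.9, 0.1, 0.0, 0.2])   # "image encoder" output for a sunset photo
text_sunset  = np.array([0.8, 0.2, 0.1, 0.1])   # "text encoder" output for the word "sunset"
text_apple   = np.array([0.0, 0.1, 0.9, 0.3])   # "text encoder" output for the word "apple"

# The matching concept lands closer in the shared space.
print(cosine(image_sunset, text_sunset) > cosine(image_sunset, text_apple))  # True
```

Contrastive training at CLIP scale is essentially pushing millions of such pairs toward this property: matched pairs close together, mismatched pairs far apart.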
It sounds simple, but the implications are profound. It means we are no longer building models that just label things; we are building models that can reason about them. When a modern vision-language model looks at a photo of a kitchen, it is not just identifying a stove and a pot. It understands the relationship. It can tell you that the pot is on the stove and that the steam coming out of it probably means the water is boiling. That is a level of semantic understanding that was impossible with old-school convolutional networks because those networks didn't have the "language" to describe the concept of "boiling" or "danger."
But there is a massive technical tax for that understanding, isn't there? You mentioned earlier how images are broken into tokens. If I have a high-resolution image, how many tokens are we talking about? Because I know in the text world, we are always worried about context window limits. If a single image takes up half the window, how does the agent remember what it was doing five minutes ago?
You have hit on the biggest headache for vision engineers in twenty twenty-six. Images are absolute context-window killers. If you take a standard high-definition image and break it into sixteen-by-sixteen pixel patches, even a relatively modest picture can generate hundreds or even thousands of tokens. Now imagine a video feed at thirty frames per second. If you treat every frame that way, you will blow through your context window in a matter of seconds. You would have an AI that is incredibly smart but has the short-term memory of a goldfish. This is the "token bloat" problem that has plagued multimodal agents since they first hit the scene.
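The arithmetic behind the "goldfish memory" problem is worth making concrete. This is a back-of-the-envelope sketch assuming one token per sixteen-by-sixteen patch and no compression, with a 128k context window picked as a typical round number.

```python
def visual_tokens(width: int, height: int, patch: int = 16) -> int:
    """Naive token count: one token per full patch, no downsampling."""
    return (width // patch) * (height // patch)

per_frame = visual_tokens(1280, 720)        # 80 * 45 = 3600 tokens per HD frame
per_second = per_frame * 30                 # 30 fps -> 108,000 tokens per second
context_window = 128_000                    # a typical LLM context size (assumed)
print(per_frame, per_second, context_window / per_second)  # window fills in ~1.2 s
```

Under those assumptions a single second of naive HD video nearly exhausts the entire window, which is exactly why every frame cannot be tokenized at full density.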
So that is why we have seen so much focus lately on things like the Omni-Vision-Seven-B model that came out in January. I was reading that they managed to get a forty percent reduction in visual token overhead. That seems like a massive leap forward for on-device agents. How did they actually do that without the model going blind to the details? I mean, if you just lower the resolution, you lose the ability to read a street sign or see a small obstacle.
They used a technique called dynamic token downsampling. It is a very clever way of mimicking human biological vision. Instead of treating every part of the image with the same level of importance, the model does a quick, low-resolution pass to identify areas of interest—what we call a saliency map. If there is a clear blue sky in the top half of the photo, the model realizes there is no useful information there and might represent that whole area with just a few tokens. But if there is a tiny, hard-to-read sign or a complex human gesture in the corner, it allocates a much higher density of tokens to that specific patch. It is essentially teaching the AI where to focus its attention, much like a human eye has a fovea for high-detail focus and a periphery for low-detail context.
That makes a lot of sense. It is like the difference between staring blankly at a wall and actually scanning a room for your keys. You are optimizing your "compute" for the things that matter. But let's bring this back to the agentic side of things. Daniel mentioned the Model Context Protocol, and we have discussed in previous episodes, like episode eight hundred fifty-five, how MCP is becoming the standard for how agents interact with the web and other tools. How does a vision-language model actually fit into an MCP framework? Is the camera just another "tool" in the agent's belt?
This is where the world of autonomous vehicles and agentic AI are colliding. Traditionally, an autonomous vehicle had a very rigid, hand-coded perception stack. The cameras would see an obstacle, the obstacle would be classified as a "pedestrian," and the planning module would execute a "stop" command. It was very reactive and very brittle. But with MCP, we are moving toward a more flexible, tool-oriented approach where vision is a dynamic query rather than a fixed stream.
So instead of the car just having a hard-wired "perceive" loop that runs constantly, the agent running the car is essentially "querying" its vision system like it would query a database or a search engine?
Imagine a delivery robot navigating a busy sidewalk in downtown San Francisco. It encounters a group of people standing around a fallen scooter. In the old world, the robot might just get stuck because it doesn't have a "fallen scooter" classification in its library, or it might try to force its way through because it doesn't understand the social context. In the new agentic world, the robot can use an MCP-enabled vision tool. It sends the visual stream to a vision-language model and asks a semantic question: "What is the safest and most polite way to navigate around this specific obstruction?" The VLM looks at the scene, identifies that the people are trying to pick up the scooter, and tells the agent, "Wait five seconds for them to clear the path rather than trying to go around them into the street."
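That query-response exchange maps naturally onto MCP's JSON-RPC tool-call shape. In this sketch the tool name (`describe_scene`), its arguments, and the stubbed answer are all invented for illustration; only the outer `tools/call` request and `content` result structure follow the actual protocol.

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "describe_scene",                  # hypothetical vision tool
        "arguments": {
            "frame_ref": "camera://front/latest",  # placeholder stream URI
            "question": "What is the safest and most polite way "
                        "to navigate around this obstruction?",
        },
    },
}

# A stubbed VLM answer, in the shape an MCP tool result takes.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{
            "type": "text",
            "text": "People are lifting a fallen scooter. "
                    "Wait about five seconds rather than detouring into the street.",
        }]
    },
}

plan_hint = response["result"]["content"][0]["text"]
print(json.dumps(request["params"]["name"]), "->", plan_hint)
```

The planner never touches pixels here; it receives a sentence it can reason over, which is the whole shift from a perception stream to a semantic query.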
That is a huge jump in capability. It is moving from "What is this?" to "What should I do about this?" But I can already hear the skeptics. If you are sending video data through an MCP tool to a VLM, the latency must be a nightmare. You can't have a car waiting two hundred milliseconds for a model to decide if a plastic bag is a rock. In the time it takes to get an answer back from the "vision tool," the car could have already hit the object.
The latency is the primary reason why we are seeing a massive push for on-device multimodal reasoning. We are moving away from the "send everything to the cloud" model for vision. The goal for twenty twenty-six is to have these vision-language models running locally on the vehicle's hardware, but still using the standardized MCP structure. This allows the perception system to be easily updated or swapped out without rewriting the entire control system. It makes the "eyes" of the car modular. You can upgrade the VLM to a newer version just like you would update a software library, and as long as it follows the MCP standard, the rest of the agent knows how to talk to it.
It reminds me of the "restart tax" we talked about in episode ten seventy-six. If an agent is constantly starting and stopping its visual perception to save compute or because of protocol limitations, it loses the "thread" of what is happening in the physical world. If a ball rolls into the street, the agent needs to maintain the context that a child might be following it, even if the child isn't visible yet. Maintaining that visual state in an agentic loop is significantly harder than maintaining a text-based conversation state. You can't just summarize a video feed into a few sentences and expect to keep the spatial nuance.
It is exponentially harder because the state is not just a list of words; it is a four-dimensional map of the environment that is constantly changing. Current research suggests that autonomous perception stacks are now spending roughly sixty-five percent of their total compute budget just on these cross-modal attention mechanisms—basically just trying to make sure the visual tokens and the spatial map tokens are actually talking to each other correctly. They are fighting to keep the "mental model" of the world synchronized with the raw pixels coming in. If the agent loses that synchronization, it suffers from a visual version of the restart tax where it has to "re-orient" itself to the scene, which can take several critical frames.
So, looking at the history Daniel asked about, we went from "What is this pixel?" in the nineteen-nineties, to "What is this object?" in the two thousand tens, to "What is happening in this scene?" in the early twenties, and now we are at "What does this scene mean for my current mission?" It is a move toward intent-based vision. We are giving the AI the ability to ignore ninety-nine percent of what it sees so it can focus on the one percent that actually matters for its goal.
That is the perfect way to put it. Old vision systems were descriptive; new vision systems are prescriptive. They don't just tell you what is there; they tell you what it means. And this is why the collision with text-level frameworks like MCP is so interesting. MCP was originally designed to let Large Language Models use text-based tools like Google Search or a Python interpreter. But now, we are realizing that a camera is just another tool that returns a specialized type of context. If an agent can "call" a vision model to explain a complex scene, it can solve problems that were previously untouchable, like understanding a detour sign that was spray-painted on a piece of plywood.
I wonder if we are going to see a "Visual MCP" standard emerge. Right now, when you send an image to a model, every company has their own way of doing it—different tokenization, different aspect ratio handling, different metadata. But if we want agents to move seamlessly between different robots or different camera systems, we need a standardized way to describe spatial data. We need a common language for "There is a person three meters ahead on the left who looks like they are about to step into the road."
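One possible shape for that "common language" is a small structured record instead of raw pixels. The field names below are invented for illustration—no such standard exists yet—but they capture the kind of spatial-plus-intent fragment Corn is describing.

```python
from dataclasses import dataclass, asdict

@dataclass
class SceneEntity:
    label: str            # what the detector thinks it is
    bearing_deg: float    # angle from the agent's heading, negative = left
    range_m: float        # estimated distance in meters
    intent: str           # VLM-inferred prediction, not a raw detection

# "A person three meters ahead on the left, about to step into the road."
person = SceneEntity(label="pedestrian", bearing_deg=-20.0,
                     range_m=3.0, intent="about_to_step_into_road")
print(asdict(person))
```

The interesting design question is that last field: it is a prediction rather than a measurement, so any real standard would also need a confidence value and a timestamp before another agent should trust it.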
I think that is inevitable. We need a protocol that doesn't just pass pixels but passes "semantic fragments." If my home security agent sees a package being delivered, it should be able to pass a high-level semantic description of that event to my personal assistant agent using a standard protocol, without having to re-process the raw video every time. We are moving toward a world where "sight" is a shared service across all our devices. Your glasses see something, your phone understands it, and your car prepares for it, all through a unified agentic framework.
It is funny because as much as we talk about the "eyes" of AI, it always comes back to the "brain" and how it organizes information. We are basically trying to give AI a sense of sight that is integrated with its ability to reason. But Herman, do you think there is a danger in over-relying on the "language" part of vision-language models? I mean, humans can catch a ball without being able to describe the physics in words. Our reflexes are faster than our internal monologue. Are we making AI too "chatty" with its vision? Are we adding unnecessary overhead by forcing it to turn everything into tokens?
That is a legitimate debate in the field right now. Some people argue that for fast, reactive tasks like high-speed driving or industrial robotics, we should stick with pure vision models that don't have a language component because they are faster and more efficient. They are "all reflex, no thought." But the counter-argument is that language is the best way we have to represent complex logic and common sense. By forcing the vision system to "speak" to the agentic brain, we are giving it access to all the reasoning capabilities the model learned during its massive text training. It is the difference between a reflex and a conscious decision. If you want a robot to not just "not hit" a person, but to "be helpful" to a person, it needs that linguistic, semantic layer.
Which brings us back to Daniel's prompt about the current role of vision in autonomous vehicles versus these text-level frameworks. It seems like the vehicles are the ultimate "edge case" because they require both the lightning-fast reflex and the high-level semantic reasoning. They are the place where the "do" AI and the "chat" AI have to become one thing. They have to be able to slam on the brakes in ten milliseconds, but also understand why a police officer is waving them through a red light.
And that is why the work being done on things like the Model Context Protocol is so critical. It provides the plumbing that allows those two different types of intelligence to coexist. We are moving toward a world where the vision system isn't just a sensor; it is a primary context stream that is just as important as the text instructions the agent received. In an MCP manifest for a twenty twenty-six autonomous agent, you might see a tool called "visual_reasoner" that takes a goal and a video stream and returns a set of logical constraints. It is a beautiful synthesis of two very different branches of computer science.
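The `visual_reasoner` tool Herman imagines could be declared in an MCP server's tool listing roughly like this. The schema details and the example constraint strings are hypothetical; MCP itself only mandates a name, a description, and a JSON Schema for the inputs.

```python
visual_reasoner = {
    "name": "visual_reasoner",
    "description": "Reason over a video stream in light of the agent's current goal.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "goal": {"type": "string"},        # e.g. "deliver package to front door"
            "stream_uri": {"type": "string"},  # placeholder camera reference
        },
        "required": ["goal", "stream_uri"],
    },
    # Hypothetical output shape, returned as tool-result text:
    # {"constraints": ["yield_to_pedestrian", "max_speed_2_mps"]}
}
print(visual_reasoner["name"])
```

The payoff of declaring it this way is exactly the modularity discussed above: the planner only depends on the schema, so the VLM behind the tool can be swapped or upgraded without touching the control system.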
So, for the developers listening who are building these agentic systems, what should they be taking away from this? It sounds like the advice is to stop treating vision as a pre-processing step where you just get a list of objects and coordinates.
If you are still using a separate model to "detect boxes" and then feeding a list of coordinates into your agent, you are living in twenty twenty-two. The modern approach is to give your agent direct access to a multimodal model that can reason about the visual field. You want your agent to be able to ask questions like, "Is the person on the sidewalk looking at their phone or looking at me?" or "Does the texture of the road suggest it might be icy?" That kind of nuance is only available when you have that tight integration between vision and reasoning. You need to treat pixels as a rich source of intent, not just a collection of shapes.
And I think we also need to keep an eye on the "Visual MCP" shift. If you are building tools for agents, think about how those tools can provide visual context in a standardized way. Don't just return a JPEG; return a structured set of visual tokens or a semantic scene graph that the agent can actually use to plan its next move. We need to build the "connective tissue" that allows sight to translate into action across different platforms.
And keep an eye on the on-device models. The January release of Omni-Vision-Seven-B showed us that you don't need a massive, trillion-parameter cloud model to have world-class visual reasoning. We are getting to the point where a small, efficient model running on a local chip can handle ninety-nine percent of the visual tasks an agent needs. That is going to unlock a whole new generation of autonomous hardware—from drones that can follow complex verbal instructions to kitchen robots that can actually tell if your steak is cooked to medium-rare just by looking at the color and texture.
It is wild to think about how far we have come from those early ImageNet days. We have gone from "Is this a cat?" to "Is this cat about to jump on my keyboard and ruin my podcast recording?" The level of predictive, intent-based understanding is just on a different planet now. We are essentially giving machines a "theory of mind" for the physical world.
It really is. And as we move toward "World Models," where the AI isn't just processing images but is actually simulating the physics of the world it sees, the line between "seeing" and "thinking" is going to disappear entirely. We won't even talk about "vision-language models" in a few years; we will just talk about models that understand reality, regardless of whether that reality comes in through a camera, a microphone, or a text file. Sight will just be one of many windows into a single, unified understanding of the world.
That is a slightly terrifying but incredibly exciting thought to end on. If the AI understands reality better than we do, I just hope it still finds our jokes funny. Or at least understands the "intent" behind a bad pun.
I wouldn't count on it, Corn. Your jokes are a very specific kind of edge case that even the best VLMs struggle to classify. They usually just flag them as "unidentified noise" and move on.
Ouch. Well, on that note, I think we have covered a lot of ground today. From the early days of CNNs to the current collision of VLMs and agentic protocols like MCP, it is clear that the way computers "see" is fundamentally tied to how they "act" in the world. Sight is no longer a passive observation; it is an active part of the decision-making loop.
It has been a fun deep dive. Daniel always sends us prompts that make me realize how fast the "plumbing" of AI is changing under our feet. We are building the nervous system of the future in real-time.
It really is. Thanks as always to our producer, Hilbert Flumingtop, for keeping the show running smoothly behind the scenes and making sure our own visual-audio sync is perfect.
And a big thanks to Modal for providing the GPU credits that power the generation of this show. Their serverless infrastructure is what makes this kind of technical exploration possible, especially when we are crunching through these massive multimodal datasets.
This has been My Weird Prompts. If you are enjoying these deep dives into the agentic age, we would love it if you could leave us a review on your favorite podcast app. It really helps other curious minds find the show and join the conversation.
Until next time, I am Herman Poppleberry.
And I am Corn. We will see you in the next one. Find us at myweirdprompts dot com for the full archive and all the ways to subscribe. Take care and keep your eyes on the road—even if your car is doing the same.