Welcome back to My Weird Prompts, everyone. I am Corn, and I am sitting here in our living room in Jerusalem, looking out at a surprisingly clear sky today. It is January ninth, twenty twenty-six, and it feels like a good day to dive into something that has quite literally been passing over our heads for the last few years.
And I am Herman Poppleberry. That clear sky is actually quite relevant to what we are talking about today, though as we will find out, clouds are not the barrier they used to be for the eyes in the sky. Our housemate Daniel sent us a fascinating prompt this morning. He has been thinking about the role of satellite intelligence, or S-A-T-I-N-T, specifically during the conflict between Iran and Israel that we all watched unfold so closely last year.
It is a great prompt because it touches on something we all experienced. When things were heating up, you could not refresh your social media feed without seeing these high-definition images from companies like Maxar or Planet Labs showing missile bases, damage at airfields, or ship movements in the Persian Gulf. Daniel wants to know what the real difference is between that commercial stuff we see on the news and the top-secret data the intelligence agencies are using. Is the gap actually closing, or are we still seeing a pixelated version of the truth?
It is the classic question of the democratized versus the classified. For decades, the ability to see a person’s shadow from space was a superpower reserved for only two or three nations. Now, anyone with a credit card and a legitimate reason can buy imagery that would have been considered a state secret in the nineteen nineties. But to Daniel’s point, the gap is a bit like the difference between a high-end consumer camera and a multi-billion dollar telescope. They both take photos, but they are playing different games.
Right, and before we get into the resolution wars, I want to make sure we address the other half of Daniel’s question, which is about the sheer volume of data. Even if you have the best cameras in the world, you are basically trying to find a needle in a haystack the size of a planet. That is where the artificial intelligence comes in. How do you triage millions of square kilometers of imagery every day to find one specific decoy or one subtle change in a dirt road?
Exactly. The human eye is amazing, but it does not scale. If you want to monitor every missile silo in central Iran twenty-four seven, you need an A-I that never sleeps and never gets bored. So, let’s start with the resolution. That is usually what people focus on first. When you see a Maxar image on the news, you are usually looking at what they call thirty-centimeter resolution. That means each pixel in the image represents a thirty-centimeter by thirty-centimeter square on the ground.
So, about the size of a standard ruler. You can see a car, you can tell it is a car, and you can probably tell if it is a truck or a sedan. But you are not reading the license plate.
Correct. Now, the legal limit for commercial companies used to be much stricter. The United States government actually regulates how sharp commercial images can be. For a long time, you could not sell anything better than fifty centimeters. Regulators relaxed that in twenty fourteen, and thirty centimeters is now the commercial standard. But here is the thing: the classified satellites, like the ones operated by the National Reconnaissance Office, the N-R-O, are rumored to be at ten centimeters or even better.
Ten centimeters. That is the width of a palm. At that level, you are not just seeing a tank, you are seeing the specific hatch configuration or the type of external fuel tank it is carrying. But Herman, is it just about the glass? Is it just a bigger lens, or is there more to the secret sauce?
It is a lot of things. It is the size of the mirror, yes. If you look at the Keyhole eleven or Keyhole twelve satellites, which are the backbone of American S-A-T-I-N-T, they are essentially the Hubble Space Telescope, but pointed down instead of up. They have massive primary mirrors, reportedly around two point four meters across. But it is also about the processing. The military is believed to use advanced adaptive optics to clear up atmospheric distortion. You know how the air shimmers over a hot road?
Yeah, it blurs everything.
Exactly. The atmosphere does that to satellite photos too. Classified systems have much more sophisticated ways of mathematically removing that shimmer in real-time. Also, their orbits are different. Commercial satellites usually stay in a sun-synchronous orbit, which means they pass over the same spot at the same time every day, usually mid-morning, to get the best shadows for depth perception.
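To put rough numbers on Herman's mirror-size point, here is a quick back-of-the-envelope sketch using the standard diffraction-limit formula. The apertures and altitudes are illustrative assumptions, not confirmed specifications for any real satellite.

```python
# Back-of-the-envelope diffraction limit: the physical floor on how
# sharp an optical satellite image can be, before atmosphere and
# processing are even considered.
#
#   resolution ~ 1.22 * wavelength * altitude / aperture_diameter

def ground_resolution_m(wavelength_m, altitude_m, aperture_m):
    """Rayleigh criterion projected onto the ground (small-angle approx)."""
    return 1.22 * wavelength_m * altitude_m / aperture_m

GREEN_LIGHT = 550e-9  # meters, middle of the visible band

# Illustrative, assumed values -- not official specs for any satellite.
scenarios = {
    "commercial-class (1.1 m mirror, 617 km up)": (617e3, 1.1),
    "Hubble-class (2.4 m mirror, 250 km up)": (250e3, 2.4),
}

for name, (altitude_m, aperture_m) in scenarios.items():
    res = ground_resolution_m(GREEN_LIGHT, altitude_m, aperture_m)
    print(f"{name}: best case ~{res * 100:.0f} cm per pixel")
```

That comes out to roughly thirty-eight centimeters for the commercial-class numbers and about seven for the Hubble-class ones, which is why the thirty-centimeter commercial product and the ten-centimeter classified rumor are both physically plausible.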
That makes sense. It is like having a consistent lighting setup for a photoshoot. But if I am a military commander, I do not want to wait until ten in the morning to see if the enemy is moving. I need to know at two in the morning, or during a sandstorm.
And that is where the gap really widens. This is where we get into S-A-R, or Synthetic Aperture Radar. Daniel mentioned remote surveillance systems, and S-A-R is the king of that world right now. Unlike a regular camera that just captures reflected sunlight, a S-A-R satellite blasts its own microwave pulses down to Earth and measures how they bounce back.
So it is essentially taking a picture with radio waves. And since radio waves go right through clouds, smoke, and darkness, you get a clear image regardless of the weather.
Precisely. During the tensions last summer, when there was heavy cloud cover over certain parts of the region, the optical satellites were blind. But the S-A-R satellites kept right on clicking. Now, commercial S-A-R has exploded recently. Companies like Umbra and Capella Space are selling S-A-R imagery with incredible detail. You can see the texture of the ground. You can see if a patch of dirt was recently disturbed because the radio waves bounce differently off loose soil versus packed soil.
That feels like a massive leap for open-source intelligence. If the public can see through clouds now, the old trick of waiting for a rainy day to move your equipment does not work anymore. But I imagine the government versions of S-A-R are still a generation ahead in terms of what they call phase information, right?
Oh, absolutely. The classified S-A-R can do something called Coherent Change Detection. This is mind-blowing, Corn. They can take two radar images of the same spot from different times and compare the phase of the waves. If someone walked across a field of grass between the first and second photo, the radar can detect that the blades of grass were slightly bent. It shows up as a glowing trail on the map.
Wait, so you aren’t even seeing the person, you are seeing the physical memory of their footsteps in the grass from space?
Yes. It is called a footprint in the phase data. You can see where a vehicle drove across a desert, even if there are no visible tracks to the human eye. Commercial companies are starting to offer basic change detection, but that level of millimeter-scale precision is still very much in the realm of the big intelligence agencies.
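For listeners who want the gist in code, the heart of coherent change detection is a windowed complex coherence between two radar passes. Here is a toy numpy sketch; it assumes you already have two co-registered single-look complex images, which real pipelines work very hard to produce.

```python
import numpy as np
from scipy.signal import convolve2d

def coherence_map(pass1: np.ndarray, pass2: np.ndarray, win: int = 5) -> np.ndarray:
    """Windowed complex coherence between two co-registered SLC images.

    Coherence near 1.0 means the scatterers on the ground are unchanged;
    a drop toward 0.0 means something disturbed them between the passes
    (footsteps, tires, digging), even if nothing is visible optically.
    """
    kernel = np.ones((win, win))
    cross = convolve2d(pass1 * np.conj(pass2), kernel, mode="same")
    power1 = convolve2d(np.abs(pass1) ** 2, kernel, mode="same")
    power2 = convolve2d(np.abs(pass2) ** 2, kernel, mode="same")
    return np.abs(cross) / np.sqrt(power1 * power2 + 1e-12)

# Usage sketch: flag pixels that decorrelated between the two passes.
# coherence = coherence_map(yesterday_slc, today_slc)
# change_mask = coherence < 0.5   # the "glowing trail" Herman described
```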
So the gap in resolution is narrowing, but the gap in the type of physics they are exploiting is still pretty wide. Let’s talk about the A-I side of this. Daniel asked how A-I triages this stuff. We have thousands of satellites up there now. Between the big Maxar birds and the tiny cubesats from Planet, we are generating petabytes of imagery every single day. There are not enough humans in the world to look at every frame.
No way. Not even close. If you had a hundred thousand analysts, they would still miss ninety-nine percent of the data. This is where computer vision and deep learning have completely transformed the field in just the last three or four years. In twenty twenty-six, the A-I is not just looking for a tank. It is doing what we call pattern-of-life analysis.
Explain that. What does a pattern of life look like to a satellite?
Think about a missile base. A human analyst might look at it once a day and say, yup, the trucks are still there. But an A-I can monitor a low-resolution feed from a constellation like Planet, which photographs essentially every landmass on Earth every single day. The A-I counts the number of cars in the parking lot at eight in the morning. It notices that usually, there are fifty cars, but today there are two hundred. It notices a queue at the security gate that is never there on a normal Tuesday. It flags that as an anomaly.
So it is looking for the weirdness in the routine. It is not necessarily identifying a weapon; it is identifying a change in behavior that suggests a weapon might be being prepared.
Right. And then, once the A-I flags that anomaly, it automatically tasks a high-resolution satellite, like a Maxar WorldView or a government Keyhole, to go take a much closer look. It is what the industry calls a tip-and-cue system. The cheap, low-res satellites act as the tripwire, and the expensive, high-res ones act as the microscope.
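A minimal sketch of that tripwire logic is shown below. The car counts and the tasking call are invented placeholders; the shape of the logic is the point: model the routine, score the deviation, and cue the expensive sensor only on outliers.

```python
import statistics

def anomaly_score(history: list[int], today: int) -> float:
    """How many standard deviations today's count sits from the routine."""
    mean = statistics.mean(history)
    spread = statistics.stdev(history) or 1.0
    return abs(today - mean) / spread

def tip_and_cue(site: str, history: list[int], today: int, threshold: float = 3.0) -> None:
    score = anomaly_score(history, today)
    if score > threshold:
        # Hypothetical tasking hook -- in reality this request goes to a
        # collection-management system, not a print statement.
        print(f"CUE high-res pass over {site} (z-score {score:.1f})")
    else:
        print(f"{site}: routine (z-score {score:.1f})")

# Thirty days of ordinary parking-lot counts, then today's spike to 200.
baseline = [48, 52, 50, 47, 51, 49, 53, 50, 48, 52] * 3
tip_and_cue("missile base parking lot", baseline, today=200)
```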
That is a huge efficiency gain. But what about the deception part? Daniel mentioned decoys. We know Iran is famous for this. They build fake aircraft carriers out of wood and metal. They have inflatable missiles that look identical to the real thing from a certain height. Can A-I tell the difference between a real missile and a balloon?
This is the frontline of the current intelligence war. It is essentially a game of Generative Adversarial Networks, or G-A-N-s, but in the real world. The people building the decoys are trying to fool the A-I, and the people training the A-I are using examples of decoys to make the system smarter.
How does the A-I spot the fake? Is it looking at the shadows?
Shadows are a big one. An inflatable decoy often has a slightly different center of gravity or a different surface tension, which changes how light reflects off it. But the real giveaway is often the thermal signature or the spectral signature. This brings us to another type of S-A-T-I-N-T: hyperspectral imaging.
I remember we touched on this briefly in episode one hundred five when we talked about A-I benchmarks. Hyperspectral is where you aren’t just looking at red, green, and blue light, right?
Exactly. A standard camera sees three bands of light. A hyperspectral sensor might see hundreds of very narrow bands, including infrared and ultraviolet. Every material on Earth has a unique spectral fingerprint. Painted plywood has a different fingerprint than the radar-absorbent coating on a real stealth drone. A rubber decoy has a different thermal mass than a ten-ton steel missile.
So, even if it looks perfect to a human eye in a standard photo, the A-I looking at the hyperspectral data says, wait a minute, this object is not heating up at the same rate as a piece of metal should, or it is reflecting a specific frequency of infrared that suggests it is made of polymers, not aerospace-grade aluminum.
Precisely. And the A-I is much better at spotting those subtle spectral discrepancies than a human. It can also look at the context. If you see a missile launcher in the middle of a field, but there are no tire tracks leading to it, the A-I is going to flag that as a likely decoy. A real multi-ton vehicle cannot just teleport into a field without disturbing the soil.
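One standard way to compare spectral fingerprints is the spectral angle: treat each spectrum as a vector and measure the angle between the pixel and a library of known materials. The sketch below uses made-up four-band numbers purely for illustration; real hyperspectral work involves hundreds of calibrated bands.

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle in radians between two spectra; smaller means more similar.
    Largely insensitive to overall brightness, which is why it is a
    popular first pass for material identification."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Invented four-band reflectance spectra (real sensors use 100+ bands).
library = {
    "military-painted steel": np.array([0.10, 0.12, 0.15, 0.40]),
    "painted rubber decoy":   np.array([0.10, 0.12, 0.14, 0.80]),
    "desert soil":            np.array([0.30, 0.35, 0.40, 0.45]),
}

mystery_pixel = np.array([0.11, 0.13, 0.15, 0.78])

for material, spectrum in library.items():
    print(f"{material}: angle {spectral_angle(mystery_pixel, spectrum):.3f} rad")
print("Best match:", min(library, key=lambda m: spectral_angle(mystery_pixel, library[m])))
```

In this toy example the mystery pixel matches the decoy, not the steel, even though all three materials could look identical in an ordinary red-green-blue photo.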
That is a great point. It is the holism of the data. The A-I is checking the object, the soil around it, the thermal history of the spot, and the recent movement patterns. It is much harder to fake a whole ecosystem of activity than it is to fake a single object.
It really is. But we shouldn't underestimate the low-tech counter-intelligence. During the conflict last year, we saw reports of forces using simple things like smoke screens or even just large tarps with patterns painted on them to confuse the A-I’s edge-detection algorithms. If you can make a square building look like a jagged pile of rocks to a computer, you might buy yourself enough time to move your assets.
It’s like those dazzle camouflage patterns they used on ships in World War One. They aren’t trying to make the ship invisible; they are trying to make it impossible for the observer to tell its speed and heading. In twenty twenty-six, we are seeing digital dazzle designed to make an object un-classifiable by an A-I.
It’s a fascinating arms race. And it’s not just about the big satellites anymore. Daniel mentioned remote surveillance systems. We have to talk about the integration of S-A-T-I-N-T with H-A-L-E drones, which stands for High-Altitude Long-Endurance.
Like the Global Hawk or the newer solar-powered ones that can stay up for months at a time?
Exactly. These are basically atmospheric satellites. They fly at sixty thousand feet, way above commercial air traffic. They don't have the orbital mechanics of a satellite, so they can just hover over a single city or military base for days. When you combine the persistent gaze of a H-A-L-E drone with the global reach of a satellite constellation, you get what the military calls unblinking eye capability.
That sounds incredibly intense from a privacy and security perspective. If you are living in a zone under that kind of surveillance, there is literally no moment where you are not being watched from multiple angles and multiple spectrums.
It is the end of secrecy in the traditional sense. We talked about this a bit in the O-S-I-N-T episode last week, number two hundred seven. But the satellite component adds a layer of truth that is very hard to argue with. During the Iran-Israel conflict, one of the most interesting moments was when a government would claim they hit a target, and within two hours, an open-source analyst on the internet would post a Maxar image showing that the missile actually landed in an empty field.
Right! The fog of war used to last for days or weeks. Now, the fog is cleared by a satellite pass within ninety minutes. It changes the political calculus. You can't lie about your successes or failures as easily when the receipts are available for purchase by any news agency.
But that brings up a danger, too. Remember when we were looking at those images of the Isfahan airbase? There was a huge debate online because different satellites showed different things. One S-A-R image made it look like a hangar was destroyed, but an optical image from a few hours later showed it was just a clever paint job or a shadow.
That is where the gap Daniel asked about gets dangerous. If the public is using thirty-centimeter imagery and making life-and-death conclusions, but the government has ten-centimeter imagery that shows a completely different story, you get this weird divergence of reality. The public thinks one thing is happening, while the leaders know something else.
And that is a gap that might never fully close. Even as commercial resolution gets better, the government will always keep the best version for themselves. There is actually something called the Kyl-Bingaman Amendment in the U.S. that specifically limits the resolution of imagery of Israel that can be sold commercially.
Wait, I thought that was overturned or changed recently?
It was softened in twenty twenty, when the limit for imagery of Israel dropped to about forty centimeters, but there are still national security shutters. The government can essentially tell Maxar or Planet, you are not allowed to collect or sell imagery of this specific coordinate for the next forty-eight hours. It is called shutter control.
So even if the tech is there, the legal and political shutter can still create a black hole in our knowledge. That is a sobering thought. We think we see everything, but we only see what the licensing allows us to see.
Exactly. Now, let’s pivot back to the A-I for a second, because there is a really cool development in twenty twenty-six that I think Daniel would find interesting. It is called on-orbit processing.
Instead of sending the raw data down to Earth, the satellite does the thinking itself?
Yes! Traditionally, a satellite takes a massive photo, which is a huge file, and has to wait until it passes over a ground station to dump that data via radio. That creates a bottleneck. You might have a photo of a missile launch, but it takes an hour to get it to an analyst.
And in a missile conflict, an hour is an eternity.
Right. Now, we are putting A-I chips—like specialized N-P-U-s, or Neural Processing Units—directly on the satellites. The satellite looks at the image as it takes it. The A-I says, Nothing happening here, just desert... nothing here... wait, that is a flame trail from a missile. It then sends a tiny, tiny data packet—just a few kilobytes—via a laser link to a relay satellite like Starlink.
And that alert reaches the commander in seconds.
Seconds. The satellite essentially says I saw something, look here now. It doesn't even send the photo initially; it just sends the coordinates and the classification. That is how you get real-time S-A-T-I-N-T. It is not just a photo anymore; it is an automated sensor network.
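Conceptually, the on-orbit triage loop is tiny. The sketch below fakes the onboard model with a hard-coded detection, because the interesting part is what gets downlinked: a few hundred bytes of alert instead of gigabytes of pixels. Every name and number here is a hypothetical placeholder.

```python
import json
import time

INTERESTING = {"missile_launch", "aircraft_on_runway", "ship_underway"}

def classify_tile(tile) -> tuple[str, float]:
    """Stand-in for the onboard N-P-U model. A real system would run a
    quantized neural net over the image tile; we fake a detection."""
    return ("missile_launch", 0.97)

def triage(tile, lat: float, lon: float):
    """Decide, on the satellite, whether a tile earns an instant alert."""
    label, confidence = classify_tile(tile)
    if label in INTERESTING and confidence > 0.9:
        alert = {"t": time.time(), "lat": lat, "lon": lon,
                 "label": label, "conf": round(confidence, 2)}
        # A few hundred bytes over the laser crosslink, instead of a
        # multi-gigabyte image waiting for the next ground-station pass.
        return json.dumps(alert).encode()
    return None  # routine tile: hold for bulk downlink, or discard

packet = triage(tile=None, lat=32.65, lon=51.67)  # illustrative coordinates
print(packet, f"({len(packet)} bytes)")
```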
That is a massive shift. It turns the satellite from a camera into a scout. I wonder how that affects the decoy problem. If the A-I on the satellite is making split-second decisions about what is important, could it be tricked into ignoring a real threat because it looked like a decoy?
That is the ultimate nightmare for an intelligence officer. It is called a false negative. If the enemy knows how your A-I classifies things, they can design their real weapons to look like uninteresting objects. They might make a mobile launcher look like a standard refrigerated shipping container. The A-I sees a thousand containers a day, so it ignores it.
It is the hiding in plain sight strategy, but optimized for a machine's blind spots. It makes me think about how much we rely on these systems. Herman, do you think we are getting to a point where the A-I is so good that humans are being phased out of the loop?
I don't think so. I think the role of the human is shifting. We used to need humans to be detectors—finding the thing. Now, we need humans to be interpreters—understanding the why. An A-I can tell you that there are ten more planes on the tarmac today. But a human analyst who understands the local politics, the history of that specific unit, and the current diplomatic chatter can tell you that those ten planes mean an attack is imminent, or just that there is a training exercise for a holiday parade.
Context is the one thing A-I still struggles with. It can see the what but not the so what.
Exactly. And that brings us to the remote surveillance part of Daniel's question. It isn't just satellites. In the conflict last year, we saw the integration of ground-based sensors too. Things like seismic sensors that can hear the vibration of a heavy truck from miles away, or acoustic sensors that can triangulate the launch of a drone by the sound of its engine.
And all of that data gets fed into the same common operating picture as the satellite imagery.
Right. It is called Multi-INT. You take the S-A-T-I-N-T, you layer on the S-I-G-I-N-T—which is signals intelligence, like intercepted radio or cell phone pings—and you add the H-U-M-I-N-T, or human intelligence. When the satellite sees a truck, the radio intercept hears the driver talking, and the ground sensor feels the weight of the vehicle, you have a very high-confidence identification.
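A minimal sketch of that fusion idea, assuming, unrealistically, that each source hands you an independent probability that the truck is a real launcher. Folding those together through Bayes' rule in log-odds space is one textbook way to get that high-confidence identification; all the numbers are invented.

```python
import math

def fuse(prior: float, posteriors: list[float]) -> float:
    """Combine single-source posteriors that share a common prior,
    assuming the sources are conditionally independent (naive Bayes)."""
    prior_log_odds = math.log(prior / (1 - prior))
    log_odds = prior_log_odds
    for p in posteriors:
        # Each source contributes only its evidence beyond the prior.
        log_odds += math.log(p / (1 - p)) - prior_log_odds
    return 1 / (1 + math.exp(-log_odds))

# Invented example: a one-percent prior that any given truck is a launcher.
evidence = [
    0.70,  # SATINT: shape and size match a known launcher class
    0.80,  # SIGINT: radio chatter consistent with a missile unit
    0.75,  # ground sensor: vehicle weight in the right range
]
print(f"fused probability: {fuse(0.01, evidence):.6f}")
```

Three moderately confident sources push a one-percent prior to near-certainty, which is exactly why fusing the pieces matters, and why the independence assumption is where analysts have to be careful.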
It’s like a jigsaw puzzle where every type of intelligence is a different shaped piece. Without the satellite piece, you don't know where the truck is. Without the signal piece, you don't know who is in it.
That is a perfect analogy. And for Daniel’s question about the gap narrowing—the real narrowing isn't just in the resolution of the photos. It is in the accessibility of the Multi-INT. Ten years ago, only the C-I-A could do this kind of cross-referencing. Today, a dedicated group of researchers on Discord can combine commercial satellite photos with public flight tracking data and leaked Telegram messages to track a general across a continent.
The power of the crowd is the real new frontier. It’s not just one A-I in a basement at the Pentagon; it is thousands of people using hundreds of different tools. But that also leads to a lot of misinformation. We saw that during the war too—people misidentifying old footage or misinterpreting shadows to claim a victory that didn't happen.
Oh, the Twitter analysts were a nightmare sometimes. There was that one viral thread claiming a major port had been destroyed based on a low-res satellite image, but it turned out the smoke was just a low-hanging cloud. That is why the triage A-I is so important. It provides a level of objective, mathematical consistency that humans—especially humans with a political bias—often lack.
So, if we were to summarize the state of S-A-T-I-N-T for Daniel in twenty twenty-six: the commercial world has caught up to where the secret world was maybe ten or fifteen years ago. We can see through clouds, we can see at night, and we can see changes in the dirt from footprints. But the secret world has moved on to real-time, on-orbit A-I processing and hyperspectral analysis that can tell the difference between a real tank and a rubber one by its chemical signature.
Exactly. And the unblinking eye is becoming a reality. The ability to hide anything of significant size on the surface of the Earth is essentially gone. If it is larger than a breadbox and it is outdoors, someone—or some thing—knows it is there.
It makes me think about the future of conflict. If you can't hide, you have to move. Or you have to deceive. Maybe the future isn't about bigger bombs, but about better decoys and more confusing patterns of life.
It already is. We are seeing stealth architecture now—buildings designed with angles that reflect radar waves away from satellites, just like a stealth fighter. We are seeing underground facilities being built deeper and with more sophisticated entrances to hide the spoil—the dirt taken out during construction—which is a huge giveaway for satellite analysts.
Right, because if the satellite sees a giant pile of fresh dirt appearing in the desert, it knows you are digging a hole.
Exactly. You have to hide the dirt, you have to hide the trucks, and you have to hide the heat. It is a total-effort game of hide-and-seek.
This has been such a deep dive, Herman. I feel like I need to go put a tarp over my car now, just in case.
Just make sure it’s not a tarp with a spectral signature that looks like a missile launcher, Corn. That would be a bad day.
Good point. So, what are the practical takeaways for our listeners? I think the first is to be skeptical of satellite proof you see on social media unless it has been verified by multiple sources or high-res imagery. Shadows and clouds can play a lot of tricks on a low-res sensor.
And the second is to realize that privacy in the traditional sense is becoming a luxury of the indoors. The technology that tracks a missile in Iran belongs to the same family of systems that, from a high-altitude drone, could make out the shoes you are wearing in your backyard.
It’s a brave new world of transparency, whether we want it or not. Daniel, thank you for that prompt. It really forced us to look at the conflict from a totally different perspective—one that is about twenty-two thousand miles up.
Or about three hundred miles up for the low-earth orbit stuff, but who's counting?
You are, Herman. You are always counting. Before we wrap up, I want to say a huge thank you to everyone who has been listening. We have been doing this for over two hundred episodes now, and the community of weird prompt seekers just keeps growing. If you are enjoying the journey with us, please take a second to leave a review on Spotify or wherever you get your podcasts. It genuinely helps other curious minds find the show.
It really does. And if you have a prompt of your own—maybe something that popped into your head while looking at the stars or reading the news—go to myweirdprompts.com and send it our way. We love digging into these rabbit holes.
We really do. You can also find our full archive and R-S-S feed there. This has been My Weird Prompts. I’m Corn.
And I’m Herman Poppleberry. We will see you next week, probably from a slightly different angle.
Stay curious, everyone. Goodbye!
Bye!