Welcome back to My Weird Prompts. I am Corn, and I have been staring at my laptop screen for about three hours today just trying to wrap my head around the latest news. It is Tuesday, February seventeenth, twenty twenty-six, and the headlines are moving faster than I can process. It is a heavy one today, but luckily, I am not alone in this house of ours.
Herman Poppleberry, at your service. And yes, it is a heavy one. I think our housemate Daniel really outdid himself with this prompt. He has been watching the same headlines we have, and it is a fascinating, if slightly terrifying, intersection of safety-first artificial intelligence and the most intense application of technology there is. We are talking about the literal front lines of modern conflict.
It really is. Daniel was asking about this recent disclosure that Claude, the family of models created by Anthropic, is now being used by the United States military through a massive partnership with Palantir and Amazon Web Services. And they are deploying it on these incredibly secure classified networks like the Secret Internet Protocol Router Network, or SIPRNet.
Right. And for anyone who does not know, SIPRNet is basically the military’s version of the internet, but totally isolated for classified information. Seeing a model like Claude three point five Sonnet being deployed there is a massive shift. Anthropic has always positioned itself as the safety company. They talk about Constitutional AI and making sure their models have a moral compass, so to speak. Seeing them enter the defense space is a huge signal that the "safety" brand is now being marketed as "reliability" for the Pentagon.
Exactly. And Daniel has these two really specific, technical questions that I think we need to peel apart. He wants to know what kind of model actually pilots a drone, how you turn a video feed into flight controls, and then, the big one, the current state of autonomous weapons. Are we looking at Skynet, or is there still a person with their finger on the button?
I love that he went straight for the mechanics. Because you are right, Corn, there is a massive difference between what Claude does, which is processing language and making high-level decisions, and what a flight controller does, which is reacting in milliseconds to wind shear or a moving target. We are talking about the difference between a General planning a campaign and a pilot pulling a high-G turn.
Let us start there then. When people hear AI pilot, they might think of Claude sitting in a virtual cockpit, reading the dials and talking to air traffic control. But that is not really how it works, is it?
Not at all. If you tried to use a Large Language Model to fly a drone, you would crash in about half a second. The latency alone would kill you. LLMs are autoregressive, meaning they generate their output one token at a time, which takes on the order of hundreds of milliseconds, while a flight controller needs a fresh command every few milliseconds. When we talk about AI pilots, we are usually talking about a completely different architecture, often based on Deep Reinforcement Learning and Computer Vision.
Okay, so walk me through that. If I have a drone and I want it to fly itself based on a video feed, what is the brain actually doing?
Think of it as a two-part system. First, you have the perception layer. This is usually a Convolutional Neural Network, or more recently, a Vision Transformer. Its job is to take that raw video feed, which is just millions of pixels, and turn it into meaning. It is not just "seeing" a picture; it is performing object detection, semantic segmentation, and depth estimation. It says, "that cluster of gray pixels is a concrete wall," and "that moving green shape is a vehicle."
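To make that concrete, here is a rough sketch of what a single perception step could look like, using a generic off-the-shelf detector from torchvision rather than anything from a real flight stack. An actual drone would run a much smaller, faster model on board, and the class labels here are just the standard COCO ones.

```python
# A minimal, generic perception step: turn one camera frame into a list of
# detected objects. Uses a COCO-pretrained detector from torchvision; a real
# drone would run a far smaller model directly on its own hardware.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Off-the-shelf object detector (not a drone-specific or military model).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(frame_rgb, score_threshold=0.6):
    """frame_rgb: an HxWx3 uint8 numpy array straight from the camera."""
    image = to_tensor(frame_rgb)           # float tensor in [0, 1], shape [3, H, W]
    with torch.no_grad():
        output = model([image])[0]         # dict with 'boxes', 'labels', 'scores'
    detections = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if score >= score_threshold:
            detections.append({
                "box": box.tolist(),       # [x1, y1, x2, y2] in pixels
                "label": int(label),       # COCO class index, e.g. 3 means "car"
                "score": float(score),
            })
    return detections
```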
And then it has to decide what to do with that information.
Exactly. That is where the Reinforcement Learning agent comes in. Instead of being programmed with a set of rules like "if you see a wall, turn left," the agent is trained in a high-fidelity simulator. We give it a goal, like "reach this coordinate while avoiding obstacles," and we give it a reward every time it gets closer and a massive penalty every time it crashes.
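To give a flavor of what "reward it for getting closer, punish it for crashing" means in practice, here is a toy reward function of the sort you might hand to a simulator. Every weight and threshold in it is invented purely for illustration.

```python
import numpy as np

def reward(state, goal, crashed):
    """Toy reward for a 'reach the goal without crashing' task.

    state:   dict with the drone's position, angular velocity, and the distance
             to the goal on the previous step (all provided by the simulator).
    goal:    target position as a 3-vector.
    crashed: True if the simulator detected a collision on this step.
    """
    if crashed:
        return -100.0                                   # large penalty for any crash
    distance = np.linalg.norm(state["position"] - goal)
    progress = state["previous_distance"] - distance    # positive when moving toward the goal
    smoothness = -0.01 * np.linalg.norm(state["angular_velocity"])  # discourage thrashing
    arrival_bonus = 10.0 if distance < 0.5 else 0.0     # extra reward for arriving
    return progress + smoothness + arrival_bonus
```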
So it learns through trial and error, but millions of times over in a virtual world.
Precisely. It develops what we call a "policy." This policy is a mathematical function that takes the state of the world as an input (what it sees, its current speed, its orientation) and outputs the control commands for the motors. We are talking about adjusting the thrust of four different rotors hundreds of times per second. It is a continuous feedback loop that bypasses human-readable language entirely.
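Once training is done, that policy really is just a function call inside a very fast loop. A bare-bones sketch, with a made-up observation size and a small generic network standing in for whatever a real system actually uses, might look like this:

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    """A small feed-forward policy: observation vector in, four motor commands out."""
    def __init__(self, obs_dim=16, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
            nn.Linear(128, act_dim), nn.Tanh(),   # outputs in [-1, 1], rescaled below
        )

    def forward(self, obs):
        return self.net(obs)

policy = Policy()   # in a real system you would load trained weights here

def control_step(observation):
    """observation: flat vector of vision features, IMU readings, velocity, and so on."""
    obs = torch.as_tensor(observation, dtype=torch.float32)
    with torch.no_grad():
        action = policy(obs).numpy()
    return 0.5 * (action + 1.0)   # map [-1, 1] to normalized throttle for four rotors

# The flight loop would call control_step hundreds of times per second.
```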
I remember seeing that research from the University of Zurich—the Swift AI—where their pilot beat the world champion drone racers. They were using this exact setup. But the thing that blew my mind was that the AI was taking paths that humans literally could not conceive of because it was calculating the physics of the air in real time.
That is the power of it. And to answer Daniel’s question about how it is adapted, it is all about sensor fusion. The model is not just looking at video. It is also taking in data from an Inertial Measurement Unit, which measures acceleration and rotation, and maybe a Global Positioning System and a barometer. The adaptation happens in the training phase where you force the model to handle "noise." You tell it, "okay, now fly but imagine the camera is slightly blurry, or there is a fifty mile per hour gust of wind, or the GPS signal is being jammed." By the time it hits a real drone, it is incredibly robust.
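That "training with noise" idea is easier to show than to describe. Here is a toy version of corrupting the simulated sensors before the policy ever sees them; all of the noise levels and dropout rates below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng()

def corrupt_sensors(clean):
    """Add simulated imperfections to perfect sensor readings from the simulator.

    clean: dict of ideal readings. Returns a dict of degraded readings, closer
    to what the policy would actually see on real hardware.
    """
    noisy = {}
    # Inertial Measurement Unit: Gaussian noise on acceleration and rotation rates.
    noisy["accel"] = clean["accel"] + rng.normal(0.0, 0.05, size=3)
    noisy["gyro"] = clean["gyro"] + rng.normal(0.0, 0.01, size=3)
    # GPS: a few meters of error, and occasionally no fix at all, as if jammed.
    if rng.random() < 0.05:
        noisy["gps"] = None
    else:
        noisy["gps"] = clean["gps"] + rng.normal(0.0, 1.5, size=3)
    # Camera: simulate a variable processing delay in milliseconds.
    noisy["camera_delay_ms"] = rng.uniform(10, 60)
    return noisy
```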
So, if that is the pilot, where does Anthropic fit in? Why is the military so keen on Claude if it is not the one doing the actual flying?
This is where we get into the distinction between tactical and operational AI. Claude is an agentic model. It is great at synthesis and reasoning. Imagine a commander who has to look at five different drone feeds, three satellite images, and a hundred pages of intelligence reports. Claude can synthesize all of that in seconds. It can say, "Based on these feeds, there is an eighty percent probability of an ambush at coordinates X and Y. I suggest rerouting the drones to the north."
So it is more like the mission planner or the intelligence officer rather than the guy with his hands on the stick.
Exactly. It is the connective tissue. It can take a high-level command like, "Search this village for a specific vehicle," and then it can coordinate the smaller, faster AI pilots to carry out that task. And because Anthropic has focused so much on "Constitutional AI," the military sees it as a more reliable partner. They do not want a model that "hallucinates" a fake enemy or ignores a direct order because of a weird prompt injection. They want a model that follows the Rules of Engagement to the letter.
That makes sense. But it leads us right into Daniel’s second question, which is the one that everyone is worried about. The weapon systems. What is the current state of play there? Are we already at the point where a machine makes the decision to fire?
This is the most sensitive area of defense technology right now. Officially, the United States Department of Defense follows Directive three thousand point zero nine on autonomy in weapon systems, which was updated recently to account for these new models. It requires "appropriate levels of human judgment" over the use of force. People usually shorthand that as "human-in-the-loop," although the directive itself never actually uses that phrase.
"Appropriate levels of human judgment." That sounds like a phrase written by a lot of lawyers to give themselves some wiggle room.
You hit the nail on the head. There is "human-in-the-loop," where a human has to actively authorize every single kinetic action. Then there is "human-on-the-loop," where the system can act autonomously, but a human is monitoring it and can intervene at any time. And then there is "human-out-of-the-loop," which is full autonomy.
And where are we actually? Not just in the policy papers, but on the ground in twenty twenty-six?
We are firmly in the transition between "in-the-loop" and "on-the-loop." Take a system like the Iron Dome or the newer David’s Sling. They are highly automated. When a rocket is fired, the system calculates the trajectory, decides if it is a threat to a populated area, and prepares an interceptor in a matter of seconds. A human is there to oversee it, but the speed of the engagement is so fast that the human is really more of a safety switch than an active decision-maker.
Right, because if you waited for a human to look at the screen and say, "Yes, that is a rocket, please fire," the rocket would have already hit.
Exactly. So for defensive systems, we have accepted a high degree of autonomy for a long time. But for offensive systems, like a drone carrying a missile, the line is much sharper. Currently, for the vast majority of U.S. systems, a human still has to look at a target on a screen and pull a trigger or click a mouse. The AI might find the target, it might track the target, and it might even fly the drone to the target, but the final "lethal" decision is human.
But Daniel mentioned the Harpy drone. I have read about that. That is an Israeli-made system that is often cited as one of the first truly autonomous weapons. How does that work?
The Harpy is what we call a loitering munition. It is basically a drone that is also a bomb. You launch it, and it flies over an area. It is programmed to look for specific radar signatures, like those from an enemy air defense system. If it finds one, it dives on it and destroys it. Once it is launched, it does not necessarily need a human to tell it to strike. It is searching for a pre-defined signature.
So that is essentially a fire-and-forget autonomous weapon.
It is. But the military would argue that the "human judgment" happened when they programmed the drone and launched it into a specific area where they knew only enemy radars would be active. It is a fine line. However, what we are seeing now with the U.S. military’s "Replicator" initiative—which aims to deploy thousands of cheap, autonomous systems—is a shift toward even more complex decision-making.
This is where it gets scary for me. If the AI is not just looking for a radar signature, but is instead using a vision model to identify people or vehicles based on fuzzy criteria, the margin for error grows, doesn't it?
It does. And that is why the conversation about AI safety is so different in a military context. Most people think of AI safety as preventing a chatbot from saying something rude. In the military, AI safety is about making sure the model does not misidentify a school bus as a tank. This is where Anthropic’s "Constitutional AI" comes in. The idea is to bake principles like the Geneva Conventions and specific Rules of Engagement directly into the model's training, rather than bolting them on as a filter afterward.
I think there is this misconception that the military wants the most aggressive AI possible. But if you are a general, the last thing you want is an unpredictable weapon. You want something that follows the rules perfectly every single time.
That is exactly why they are interested in Anthropic. If you can prove that your model is "steerable"—meaning it won't deviate from its core principles even in the chaos of battle—that is worth more than any fancy flight maneuver. But the risk, of course, is the "black box" problem. If the AI makes a mistake, who is responsible? You cannot court-martial an algorithm.
Right, and if you have a thousand drones all making autonomous decisions in a split second, you could have an accidental escalation of a conflict before any human even knows what is happening. We have seen "flash crashes" in the stock market because of high-frequency trading algorithms. Imagine a "flash war."
That is the nightmare scenario. And it is why there is such a push for international norms. But here is the reality, Corn. We are already seeing AI-assisted targeting being used in active conflicts today. There were reports about the "Lavender" and "Where's Daddy?" systems used in Gaza, which allegedly used AI to identify thousands of potential targets for human review. Even if there is a human-in-the-loop, if that human is just rubber-stamping a list of a hundred targets every hour because the AI is "usually right," are they really exercising judgment?
It becomes a checkbox. The human is just a legal fig leaf at that point.
Exactly. And that is the danger of the speed of AI. It moves so much faster than human cognition that we risk becoming the slow, weak link in the chain. Eventually, we might get pushed out of the loop entirely because we are too slow to be useful in a modern electronic warfare environment.
Let us circle back to the Claude and Palantir partnership for a second. Palantir has this platform called AIP, the Artificial Intelligence Platform. They have these demos where they show a commander talking to a chatbot. The commander says, "Show me all enemy movements in this sector," and the AI highlights them on a map. Then the commander says, "What are my options for neutralizing that artillery battery?" And the AI lists three different plans, including the probability of success and the estimated collateral damage for each.
Right. That is the agentic part. And notice that in that scenario, the AI is not pulling the trigger. It is acting as a "force multiplier" for the commander. It is giving them better information, faster. That is the immediate use case. It is data synthesis on steroids. It can read through intercepted communications, satellite imagery, and ground reports all at once.
But how long until the commander just says, "Okay, execute the option with the highest probability of success and the lowest collateral damage, and let me know when it is done"?
That is the slippery slope. And that is why the transparency of these models is so important. If the military is using closed-source models like Claude, we have to trust that the guardrails are actually there.
It is interesting that the military is choosing closed-source. Usually, they want to own every piece of the tech. Why go with Anthropic?
I think it is because the pace of development is so fast that even the Pentagon cannot keep up. If they tried to build their own Large Language Model from scratch, it would be two generations behind by the time it was deployed. By partnering with Anthropic via Palantir and using AWS’s secure cloud, they get the cutting edge immediately. And because it is on SIPRNet, the data stays in their walled garden. Anthropic is not getting the military’s secrets to train Claude four or five.
At least, that is what the contract says. But it still feels like a major shift for a company that was founded by people who left OpenAI specifically because they were worried about the safety and commercialization of AI.
It is a massive pivot. But you could also argue that if this technology is going to be used by the military anyway, wouldn't you want the people who care most about safety to be the ones building it? It is the Oppenheimer dilemma all over again. If we do not build the "safe" version, our adversaries will build the "unsafe" version.
I was just thinking of Oppenheimer. It is the exact same tension. If we do not build it, the other side will, and theirs will be worse. So we have to build it, but we have to try to make it as safe as possible.
And the other side is definitely building it. We know that China and Russia are investing billions into autonomous systems. They are looking for a competitive advantage, period. They are looking at "swarm intelligence" where hundreds of drones coordinate without any human oversight.
So, to answer Daniel’s question about the current state, it sounds like we are in this weird middle ground. We have the technical capability for fully autonomous flight and targeting, but we are still maintaining this thin layer of human oversight, even if that layer is getting thinner and thinner every day.
That is a perfect summary. The tech is there. The AI pilots are already better than humans in many scenarios. The computer vision is getting better at identifying targets than the human eye. The only thing holding back full autonomy right now is policy, ethics, and the fear of the unknown.
I want to go a bit deeper on the technical side of the drone piloting that Daniel asked about. You mentioned Reinforcement Learning, but how does that actually handle the transition from the simulator to the real world? I have heard about the "sim-to-real" gap. It seems like a huge hurdle for something as high-stakes as a military drone.
Oh, the sim-to-real gap is the bane of every robotics engineer's existence. You can train a model to be perfect in a digital world, but then you put it in the real world and the sun is at a slightly different angle, or there is a bit of dust on the lens, and the model completely loses its mind.
So how do they fix that?
A technique called "domain randomization." When you are training the AI in the simulator, you do not just give it one environment. You give it millions of variations. You randomly change the gravity, the friction of the air, the lighting, the colors of the objects. You basically make the simulator so crazy and unpredictable that the real world actually feels simple by comparison.
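As a sketch of what that looks like in code, each training episode starts by re-rolling the physics and rendering parameters. The simulator object and every range below are purely illustrative, not taken from any real training pipeline.

```python
import random

def randomize_sim(sim):
    """Re-roll physical and visual parameters at the start of a training episode.

    `sim` stands in for a simulator with settable parameters; the attribute
    names and ranges here are made up for illustration.
    """
    sim.gravity = random.uniform(9.6, 10.0)              # m/s^2, deliberately off from 9.81
    sim.air_density = random.uniform(1.0, 1.4)           # changes drag on the airframe
    sim.drone_mass = random.uniform(0.95, 1.05) * sim.nominal_mass
    sim.motor_strength = [random.uniform(0.9, 1.1) for _ in range(4)]
    sim.wind_speed = random.uniform(0.0, 22.0)           # m/s, up to roughly 50 mph gusts
    sim.light_intensity = random.uniform(0.3, 1.5)       # renderer brightness
    sim.camera_tilt_deg = random.uniform(-2.0, 2.0)      # slightly misaligned camera
```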
That is brilliant. It is like training a runner by making them run on sand, through water, and across ice, so that when they finally get to a paved track, it is easy.
Exactly. And they also use something called a "digital twin." They build a highly accurate digital model of the specific drone they are using, down to the exact weight of every screw and the latency of every motor. This allows the AI to learn the specific quirks of that machine.
And what about the output side? Daniel asked how it provides flight control feedback. Is it just sending signals to the rotors?
Usually, the AI is outputting what we call "setpoints." Instead of saying "move this motor by five volts," it says, "I want the drone to be at this specific orientation and this specific velocity in the next ten milliseconds." Then, a lower-level controller, which is usually a traditional non-AI system called a PID controller—that stands for Proportional-Integral-Derivative—handles the actual electrical signals to the motors to reach that setpoint.
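For the curious, a PID controller really is only a few lines. This is the textbook single-axis version, not tied to any particular flight stack, with example gains pulled out of thin air.

```python
class PID:
    """Textbook single-axis PID controller: error in, correction out."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.previous_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt                        # accumulated past error
        derivative = (error - self.previous_error) / dt    # how fast the error is changing
        self.previous_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: the AI's setpoint says "pitch to 5 degrees"; the PID turns the gap between
# that and the measured pitch into a motor correction, recomputed every couple of milliseconds.
pitch_controller = PID(kp=0.8, ki=0.1, kd=0.05)
correction = pitch_controller.update(setpoint=5.0, measurement=3.2, dt=0.002)
```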
Ah, so there is still a layer of traditional, reliable engineering at the very bottom of the stack.
Yes, and that is a crucial safety feature. It means that if the AI tries to do something physically impossible or dangerous, the low-level controller can sometimes act as a buffer. It is all about layers of defense.
This makes me think about the future of warfare in a way that is very different from the movies. It is not just about big robots. It is about thousands of small, cheap, intelligent things.
Swarm intelligence. That is the next frontier. Imagine a swarm of five hundred tiny drones, each the size of a bird. They are all running their own local AI pilots and coordinating with each other directly through lightweight rules, while a higher-level agentic system, something like Claude, could handle the overall tasking. They can coordinate a search, bypass defenses, and act as a single, distributed organism.
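The classic toy model of that kind of local coordination is "boids"-style flocking, where each agent reacts only to its nearby neighbors and there is no central controller at all. This is just the textbook algorithm, nothing drawn from any real swarm system.

```python
import numpy as np

def flocking_step(positions, velocities, dt=0.05, neighbor_radius=5.0):
    """One step of a classic boids-style flocking model.

    positions, velocities: N x 3 arrays. Each agent applies three local rules:
    cohesion (drift toward nearby agents), alignment (match their velocity),
    and separation (avoid collisions). No agent ever sees the whole swarm.
    """
    new_velocities = velocities.copy()
    for i in range(len(positions)):
        offsets = positions - positions[i]
        distances = np.linalg.norm(offsets, axis=1)
        neighbors = (distances > 0) & (distances < neighbor_radius)
        if not neighbors.any():
            continue
        cohesion = offsets[neighbors].mean(axis=0)                       # toward local center
        alignment = velocities[neighbors].mean(axis=0) - velocities[i]   # match neighbors
        separation = -(offsets[neighbors] / distances[neighbors, None] ** 2).sum(axis=0)
        new_velocities[i] += dt * (0.5 * cohesion + 0.3 * alignment + 1.0 * separation)
    return positions + dt * new_velocities, new_velocities
```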
And if you shoot down ten of them, it does not matter. The rest of the swarm just re-adjusts.
Exactly. It makes traditional defense systems like missiles or anti-aircraft guns far less practical. You cannot keep trading million-dollar interceptors for hundred-dollar drones. The economics of warfare are being completely flipped on their head.
It feels like we are living through the biggest shift in military technology since the invention of gunpowder or the atomic bomb. And it is happening so fast that the public, and even the politicians, can barely keep up.
It is. And that is why I am glad Daniel sent this prompt. We need to be talking about this. We need to understand the difference between a language model that helps a commander think and a reinforcement learning agent that actually flies the weapon. They are both AI, but they represent very different risks.
So, what are the practical takeaways here? If someone is listening to this and feeling a bit overwhelmed, what should they keep in mind?
The first thing is to realize that AI in the military is not one single thing. It is a spectrum. On one end, you have administrative and logistical AI, which is mostly harmless and actually makes things more efficient. In the middle, you have intelligence and decision-support AI, like what Anthropic is doing with Palantir. This is where the ethical questions start to get serious. And on the far end, you have kinetic, autonomous weapons. That is the red line.
And the second takeaway is that the "human-in-the-loop" is becoming a very fragile concept. We need to be very careful that we do not let it become a meaningless term. We need to define what actual human judgment looks like in an age of millisecond decision-making. If a human only has half a second to say "yes" or "no," is that really judgment?
Absolutely. And finally, we should realize that the companies building this tech, like Anthropic, are now part of the national security infrastructure. Whether they like it or not, their safety research is now a matter of global stability. The "Constitutional AI" they developed for chatbots is now being tested in the most high-stakes environment imaginable.
It is a lot to process. I think I am going to need another coffee after this one.
I think we both will. But it is important. This is the world we are building. The "Weird Prompts" of today are the military doctrines of tomorrow.
Well, thank you, Herman. I always feel a bit more informed and a lot more concerned after talking to you about this stuff.
That is what I am here for, Corn. To keep us both awake at night.
You are doing a great job. And thank you to Daniel for sending this in. It is exactly the kind of thing we love to dig into, even if it is a bit heavy.
Definitely. If any of you listening have thoughts on this, or if you have a prompt of your own you want us to tackle, please get in touch. We love hearing from you.
You can find us at myweirdprompts dot com. We have a contact form there, and you can find the RSS feed for the show if you want to subscribe. And hey, if you have been enjoying our deep dives, please leave us a review on Spotify or wherever you listen to your podcasts. It really does help other people find the show, and we appreciate it more than you know.
It really does. We read all of them.
Alright, I think that is a wrap for today. This has been My Weird Prompts.
Until next time, stay curious.
Goodbye.