#672: The Silicon Soldier: Anthropic, Drones, and AI Warfare

Herman and Corn break down Anthropic’s move into defense and the technical reality of how AI actually pilots drones on the modern battlefield.

Episode Details
Duration: 30:14
Pipeline: V4

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

In a recent episode of My Weird Prompts, hosts Herman Poppleberry and Corn tackled one of the most significant shifts in the artificial intelligence landscape to date: the integration of "safety-first" AI models into the heart of the United States military infrastructure. The discussion was sparked by the disclosure that Anthropic, the creator of the Claude models, has entered into a massive partnership with Palantir and Amazon Web Services (AWS) to deploy its technology on highly secure, classified networks like the Secret Internet Protocol Router Network (SIPRNet).

The "Safety" Brand Meets the Pentagon

The episode begins by addressing the irony of Anthropic's entry into the defense sector. Known for its "Constitutional AI" and a brand built on safety and ethics, the company is now signaling a pivot in how "safety" is marketed. As Herman points out, for the Pentagon, "safety" translates to "reliability." In the high-stakes environment of national defense, the military is less interested in a chatbot that avoids being rude and more interested in a model that is "steerable"—one that won't hallucinate a fake enemy or ignore Rules of Engagement due to a prompt injection. By positioning Claude as a model with a built-in moral and procedural compass, Anthropic has made itself an ideal candidate for operationalizing AI in conflict zones.

The Mechanics of an AI Pilot

One of the core technical questions discussed in the episode is how AI actually "flies" a drone. Corn and Herman clarify a common misconception: large language models (LLMs) like Claude are not the ones physically controlling the aircraft. Herman explains that the latency inherent in LLMs—which predict tokens one by one—is far too high for the millisecond-scale reactions required for flight.

Instead, AI piloting relies on a two-part architecture. The first is the perception layer, typically powered by Convolutional Neural Networks (CNNs) or Vision Transformers. These systems perform "semantic segmentation," turning raw pixels from a video feed into a conceptual map of walls, vehicles, and obstacles. The second part is the Reinforcement Learning (RL) agent. Unlike traditional software, these agents aren't programmed with "if-then" rules; they are trained in high-fidelity simulators where they learn through millions of trials. This process allows them to develop a "policy"—a mathematical function that translates sensor data into motor voltage adjustments hundreds of times per second.
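To make this two-part architecture concrete, here is a minimal Python sketch of one tick of such a control loop. The networks, dimensions, and sensor names are illustrative assumptions, not a description of any real flight stack.

```python
# Minimal sketch of a perception + policy control loop (illustrative only).
# All class names and dimensions are hypothetical; real flight stacks are far more complex.
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """Toy CNN: turns a camera frame into a compact feature vector."""
    def __init__(self, feature_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim),
        )

    def forward(self, frame):                      # frame: (1, 3, H, W)
        return self.encoder(frame)

class PolicyNet(nn.Module):
    """Toy RL policy: maps visual features plus IMU state to four rotor commands."""
    def __init__(self, feature_dim=64, imu_dim=9, n_rotors=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim + imu_dim, 128), nn.ReLU(),
            nn.Linear(128, n_rotors), nn.Tanh(),   # normalized motor commands in [-1, 1]
        )

    def forward(self, features, imu):
        return self.mlp(torch.cat([features, imu], dim=-1))

# One tick of the control loop; in practice this runs hundreds of times per second.
perception, policy = PerceptionNet(), PolicyNet()
frame = torch.rand(1, 3, 96, 96)                   # stand-in for a camera frame
imu = torch.rand(1, 9)                             # stand-in for accelerometer/gyro/attitude readings
with torch.no_grad():
    rotor_commands = policy(perception(frame), imu)
print(rotor_commands.shape)                        # torch.Size([1, 4])
```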

Herman cites the "Swift" research from the University of Zurich as a prime example: an AI pilot that outperformed world-champion drone racers by calculating aerodynamics and physics in ways human pilots cannot conceive.

Strategic vs. Tactical AI

If Claude isn’t pulling the trigger or steering the rotors, what is its role? The hosts distinguish between the "tactical" AI (the pilot) and the "operational" or "agentic" AI (the commander). Claude acts as the connective tissue, capable of synthesizing massive amounts of data—satellite imagery, intelligence reports, and multiple drone feeds—to provide high-level reasoning.

In this capacity, Claude serves as a mission planner. It can flag an 80% probability of an ambush and suggest a reroute, or coordinate a swarm of smaller, faster drones to carry out a complex search task, such as combing a village for a specific vehicle. This allows human commanders to focus on high-level strategy while the AI handles the data-heavy synthesis that would overwhelm a human brain.

The Autonomy Spectrum and the "Human-in-the-Loop"

The conversation inevitably turns to the ethics of autonomous weapons. Herman explains the Department of Defense’s Directive 3000.09, which mandates "appropriate levels of human judgment" over the use of force. However, the hosts argue that the definitions of "human-in-the-loop" (active authorization), "human-on-the-loop" (monitoring with the ability to intervene), and "human-out-of-the-loop" (full autonomy) are becoming increasingly blurred.

Using the example of defensive systems like the Iron Dome, Herman notes that humans have already accepted a degree of autonomy because the speed of incoming threats necessitates it. The real tension lies in offensive systems. While the U.S. currently maintains that a human must make the final lethal decision, systems like the Israeli-made Harpy drone—a "loitering munition" that searches for specific radar signatures to destroy—show that "fire-and-forget" autonomy is already a reality.

The Risk of the "Flash War"

The episode concludes with a sobering look at the risks of this technological escalation. Just as the stock market has experienced "flash crashes" due to high-frequency trading algorithms interacting in unforeseen ways, the hosts warn of a "flash war." If two opposing militaries deploy thousands of autonomous systems that make decisions in microseconds, a conflict could escalate beyond human control before a commander even realizes a shot has been fired.

The challenge for companies like Anthropic is the "black box" problem. While Constitutional AI aims to bake the Geneva Conventions into the code, the complexity of these models means that if a mistake happens—if a school bus is misidentified as a tank—accountability falls into a legal and moral vacuum. As AI moves from our screens to the front lines, the stakes of "safety" have never been higher.

Downloads

Episode audio (MP3), a plain-text transcript (TXT), and a formatted PDF transcript are available for download.

Full Transcript

Episode #672: The Silicon Soldier: Anthropic, Drones, and AI Warfare

Daniel's Prompt
Daniel
The question of the military's use of AI has gained attention lately with the disclosure that Anthropic is being used by the U.S. military through a partnership with Palantir. This involves use on secure networks like SIPRNet. While military technology often precedes civilian tech, the choice of Anthropic is interesting. It’s a closed-source model with a strong safety record, yet its strengths in agentic AI and decision-making likely make it an attractive tool.

There is a significant difference between using AI for data synthesis and using it to pilot drones or operate weapons. I have two questions:

First, regarding AI pilots: what kind of AI model is used to pilot a drone? How is a model adapted to take a video feed as an input and provide flight control feedback as an output?

Second, what is the current state of AI-operated weapon systems? Are they fully autonomous, or are they still "human-in-the-loop" systems where AI assists with detection and decision-making, but a human remains in control of the final action?
Corn
Welcome back to My Weird Prompts. I am Corn, and I have been staring at my laptop screen for about three hours today just trying to wrap my head around the latest news. It is Tuesday, February seventeenth, twenty twenty-six, and the headlines are moving faster than I can process. It is a heavy one today, but luckily, I am not alone in this house of ours.
Herman
Herman Poppleberry, at your service. And yes, it is a heavy one. I think our housemate Daniel really outdid himself with this prompt. He has been watching the same headlines we have, and it is a fascinating, if slightly terrifying, intersection of safety-first artificial intelligence and the most intense application of technology there is. We are talking about the literal front lines of modern conflict.
Corn
It really is. Daniel was asking about this recent disclosure that Anthropic, the company that created the Claude models, is now being used by the United States military through a massive partnership with Palantir and Amazon Web Services. And they are deploying it on these incredibly secure networks like the Secret Internet Protocol Router Network, or SIPRNet.
Herman
Right. And for anyone who does not know, SIPRNet is basically the military’s version of the internet, but totally isolated for classified information. Seeing a model like Claude three point five Sonnet being deployed there is a massive shift. Anthropic has always positioned itself as the safety company. They talk about Constitutional AI and making sure their models have a moral compass, so to speak. Seeing them enter the defense space is a huge signal that the "safety" brand is now being marketed as "reliability" for the Pentagon.
Corn
Exactly. And Daniel has these two really specific, technical questions that I think we need to peel apart. He wants to know what kind of model actually pilots a drone, how you turn a video feed into flight controls, and then, the big one, the current state of autonomous weapons. Are we looking at Skynet, or is there still a person with their finger on the button?
Herman
I love that he went straight for the mechanics. Because you are right, Corn, there is a massive difference between what Claude does, which is processing language and making high-level decisions, and what a flight controller does, which is reacting in milliseconds to wind shear or a moving target. We are talking about the difference between a General planning a campaign and a pilot pulling a high-G turn.
Corn
Let us start there then. When people hear AI pilot, they might think of Claude sitting in a virtual cockpit, reading the dials and talking to air traffic control. But that is not really how it works, is it?
Herman
Not at all. If you tried to use a Large Language Model to fly a drone, you would crash in about half a second. The latency alone would kill you. LLMs are autoregressive, meaning they predict the next token one by one. That is way too slow for flight. When we talk about AI pilots, we are usually talking about a completely different architecture, often based on Deep Reinforcement Learning and Computer Vision.
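A rough back-of-envelope comparison makes the latency gap concrete; the figures below are assumed, round numbers for illustration rather than benchmarks of any particular model or flight controller.

```python
# Illustrative back-of-envelope comparison (assumed, round numbers).
tokens_per_second = 50             # assumed LLM decoding speed
tokens_per_command = 20            # assumed length of a short structured command
llm_latency_s = tokens_per_command / tokens_per_second    # 0.4 s per decision

control_rate_hz = 400              # typical-order flight-controller loop rate (assumed)
control_period_s = 1 / control_rate_hz                     # 0.0025 s per update

print(f"LLM: ~{llm_latency_s:.3f}s per command; "
      f"flight loop: ~{control_period_s:.4f}s per update "
      f"(~{llm_latency_s / control_period_s:.0f}x slower)")
```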
Corn
Okay, so walk me through that. If I have a drone and I want it to fly itself based on a video feed, what is the brain actually doing?
Herman
Think of it as a two-part system. First, you have the perception layer. This is usually a Convolutional Neural Network, or more recently, a Vision Transformer. Its job is to take that raw video feed, which is just millions of pixels, and turn it into meaning. It is not just "seeing" a picture; it is performing object detection, semantic segmentation, and depth estimation. It says, "that cluster of gray pixels is a concrete wall," and "that moving green shape is a vehicle."
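A minimal sketch of what "turning pixels into meaning" can look like in code: a toy fully convolutional network that assigns one of a handful of classes to every pixel. The class list and architecture are illustrative assumptions, not any deployed perception system.

```python
# Toy semantic-segmentation head: raw pixels in, a per-pixel class map out.
import torch
import torch.nn as nn

CLASSES = ["background", "wall", "vehicle", "person"]   # illustrative label set

class TinySegmenter(nn.Module):
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),     # per-pixel class logits
        )

    def forward(self, frame):                # frame: (N, 3, H, W)
        return self.net(frame)               # logits: (N, n_classes, H, W)

frame = torch.rand(1, 3, 64, 64)
logits = TinySegmenter()(frame)
label_map = logits.argmax(dim=1)             # (1, 64, 64): a class index per pixel
print(label_map.shape, label_map.unique())
```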
Corn
And then it has to decide what to do with that information.
Herman
Exactly. That is where the Reinforcement Learning agent comes in. Instead of being programmed with a set of rules like "if you see a wall, turn left," the agent is trained in a high-fidelity simulator. We give it a goal, like "reach this coordinate while avoiding obstacles," and we give it a reward every time it gets closer and a massive penalty every time it crashes.
Corn
So it learns through trial and error, but millions of times over in a virtual world.
Herman
Precisely. It develops what we call a "policy." This policy is a mathematical function that takes the state of the world as an input—what it sees, its current speed, its orientation—and outputs the exact control signals for the motors. We are talking about adjusting the voltage to four different rotors hundreds of times per second. It is a continuous feedback loop that bypasses human-readable language entirely.
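As a sketch of the trial-and-error loop Herman describes, here is a toy one-dimensional "drone" trained purely from rewards and penalties. The environment, the single-parameter policy, and the brute-force parameter search are simplified stand-ins for a high-fidelity simulator and a real RL algorithm such as PPO or SAC; only the state-action-reward structure is the point.

```python
# Toy reward-driven policy learning: a 1-D "drone" must reach a target without crashing.
import random

class ToyDroneEnv:
    """1-D world: state is (position, velocity); the action is a thrust in [-1, 1]."""
    def __init__(self, target=10.0):
        self.target = target
        self.pos, self.vel = 0.0, 0.0

    def reset(self):
        self.pos, self.vel = 0.0, 0.0
        return (self.pos, self.vel)

    def step(self, thrust):
        self.vel += 0.1 * thrust + random.gauss(0.0, 0.01)   # crude dynamics plus noise
        self.pos += self.vel
        distance = abs(self.target - self.pos)
        crashed = abs(self.vel) > 2.0
        reward = -distance - (100.0 if crashed else 0.0)     # closer is better, crashing is very bad
        done = crashed or distance < 0.1
        return (self.pos, self.vel), reward, done

def policy(state, gain):
    """One-parameter stand-in for a learned policy: thrust toward the target, damped by velocity."""
    pos, vel = state
    return max(-1.0, min(1.0, gain * (10.0 - pos) - vel))

def average_return(gain, episodes=20, max_steps=200):
    total = 0.0
    for _ in range(episodes):
        env = ToyDroneEnv()
        state = env.reset()
        for _ in range(max_steps):
            state, reward, done = env.step(policy(state, gain))
            total += reward
            if done:
                break
    return total / episodes

# "Training": search over the policy parameter for the highest average reward.
best_gain = max((g / 100 for g in range(1, 101)), key=average_return)
print("learned gain:", best_gain)
```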
Corn
I remember seeing that research from the University of Zurich—the Swift AI—where their pilot beat the world champion drone racers. They were using this exact setup. But the thing that blew my mind was that the AI was taking paths that humans literally could not conceive of because it was calculating the physics of the air in real time.
Herman
That is the power of it. And to answer Daniel’s question about how it is adapted, it is all about sensor fusion. The model is not just looking at video. It is also taking in data from an Inertial Measurement Unit, which measures acceleration and rotation, and maybe a Global Positioning System and a barometer. The adaptation happens in the training phase where you force the model to handle "noise." You tell it, "okay, now fly but imagine the camera is slightly blurry, or there is a fifty mile per hour gust of wind, or the GPS signal is being jammed." By the time it hits a real drone, it is incredibly robust.
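A minimal sketch of the sensor-fusion idea with training-time noise injection. The noise levels, the flat observation vector, and the jamming probability are made-up illustrations; real systems use far more careful filtering (for example, Kalman-style estimators).

```python
# Sketch of sensor fusion with training-time noise injection (all values illustrative).
import random

def noisy(value, sigma):
    """Corrupt a reading with Gaussian noise, as done during simulator training."""
    return value + random.gauss(0.0, sigma)

def build_observation(camera_features, imu, gps, barometer, train=True):
    """Concatenate all sensor streams into one flat observation vector."""
    if train:                                      # force the policy to cope with imperfect sensors
        imu = [noisy(x, 0.05) for x in imu]
        gps = [noisy(x, 2.0) for x in gps]         # metres of GPS error
        barometer = noisy(barometer, 0.5)          # metres of altitude error
        if random.random() < 0.1:                  # occasionally simulate GPS jamming
            gps = [0.0, 0.0, 0.0]
    return list(camera_features) + list(imu) + list(gps) + [barometer]

obs = build_observation(
    camera_features=[0.2, 0.7, 0.1],               # stand-in for the perception layer's output
    imu=[0.0, 0.0, 9.8, 0.01, 0.0, 0.0],           # accel (x, y, z) + gyro (x, y, z)
    gps=[47.37, 8.54, 120.0],                      # lat, lon, altitude
    barometer=119.5,
)
print(len(obs), "fused observation dimensions")
```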
Corn
So, if that is the pilot, where does Anthropic fit in? Why is the military so keen on Claude if it is not the one doing the actual flying?
Herman
This is where we get into the distinction between tactical and operational AI. Claude is an agentic model. It is great at synthesis and reasoning. Imagine a commander who has to look at five different drone feeds, three satellite images, and a hundred pages of intelligence reports. Claude can synthesize all of that in seconds. It can say, "Based on these feeds, there is an eighty percent probability of an ambush at coordinates X and Y. I suggest rerouting the drones to the north."
Corn
So it is more like the mission planner or the intelligence officer rather than the guy with his hands on the stick.
Herman
Exactly. It is the connective tissue. It can take a high-level command like, "Search this village for a specific vehicle," and then it can coordinate the smaller, faster AI pilots to carry out that task. And because Anthropic has focused so much on "Constitutional AI," the military sees it as a more reliable partner. They do not want a model that "hallucinates" a fake enemy or ignores a direct order because of a weird prompt injection. They want a model that follows the Rules of Engagement to the letter.
Corn
That makes sense. But it leads us right into Daniel’s second question, which is the one that everyone is worried about. The weapon systems. What is the current state of play there? Are we already at the point where a machine makes the decision to fire?
Herman
This is the most sensitive area of defense technology right now. Officially, the United States Department of Defense follows Directive three thousand point zero nine, which was updated recently to account for these new models. It basically mandates that there must be "appropriate levels of human judgment" over the use of force. We call this "human-in-the-loop."
Corn
"Appropriate levels of human judgment." That sounds like a phrase written by a lot of lawyers to give themselves some wiggle room.
Herman
You hit the nail on the head. There is "human-in-the-loop," where a human has to actively authorize every single kinetic action. Then there is "human-on-the-loop," where the system can act autonomously, but a human is monitoring it and can intervene at any time. And then there is "human-out-of-the-loop," which is full autonomy.
Corn
And where are we actually? Not just in the policy papers, but on the ground in twenty twenty-six?
Herman
We are firmly in the transition between "in-the-loop" and "on-the-loop." Take a system like the Iron Dome or the newer David’s Sling. They are highly automated. When a rocket is fired, the system calculates the trajectory, decides if it is a threat to a populated area, and prepares an interceptor in a matter of seconds. A human is there to oversee it, but the speed of the engagement is so fast that the human is really more of a safety switch than an active pilot.
Corn
Right, because if you waited for a human to look at the screen and say, "Yes, that is a rocket, please fire," the rocket would have already hit.
Herman
Exactly. So for defensive systems, we have accepted a high degree of autonomy for a long time. But for offensive systems, like a drone carrying a missile, the line is much sharper. Currently, for the vast majority of U.S. systems, a human still has to look at a target on a screen and pull a trigger or click a mouse. The AI might find the target, it might track the target, and it might even fly the drone to the target, but the final "lethal" decision is human.
Corn
But Daniel mentioned the Harpy drone. I have read about that. That is an Israeli-made system that is often cited as one of the first truly autonomous weapons. How does that work?
Herman
The Harpy is what we call a loitering munition. It is basically a drone that is also a bomb. You launch it, and it flies over an area. It is programmed to look for specific radar signatures, like those from an enemy air defense system. If it finds one, it dives on it and destroys it. Once it is launched, it does not necessarily need a human to tell it to strike. It is searching for a pre-defined signature.
Corn
So that is essentially a fire-and-forget autonomous weapon.
Herman
It is. But the military would argue that the "human judgment" happened when they programmed the drone and launched it into a specific area where they knew only enemy radars would be active. It is a fine line. However, what we are seeing now with the U.S. military’s "Replicator" initiative—which aims to deploy thousands of cheap, autonomous systems—is a shift toward even more complex decision-making.
Corn
This is where it gets scary for me. If the AI is not just looking for a radar signature, but is instead using a vision model to identify people or vehicles based on fuzzy criteria, the margin for error grows, doesn't it?
Herman
It does. And that is why the conversation about AI safety is so different in a military context. Most people think of AI safety as preventing a chatbot from saying something rude. In the military, AI safety is about making sure the model does not misidentify a school bus as a tank. This is where Anthropic’s "Constitutional AI" comes in. They are trying to bake the Geneva Conventions and specific Rules of Engagement directly into the model's architecture.
Corn
I think there is this misconception that the military wants the most aggressive AI possible. But if you are a general, the last thing you want is an unpredictable weapon. You want something that follows the rules perfectly every single time.
Herman
That is exactly why they are interested in Anthropic. If you can prove that your model is "steerable"—meaning it won't deviate from its core principles even in the chaos of battle—that is worth more than any fancy flight maneuver. But the risk, of course, is the "black box" problem. If the AI makes a mistake, who is responsible? You cannot court-martial an algorithm.
Corn
Right, and if you have a thousand drones all making autonomous decisions in a split second, you could have an accidental escalation of a conflict before any human even knows what is happening. We have seen "flash crashes" in the stock market because of high-frequency trading algorithms. Imagine a "flash war."
Herman
That is the nightmare scenario. And it is why there is such a push for international norms. But here is the reality, Corn. We are already seeing AI-assisted targeting being used in active conflicts today. There were reports about the "Lavender" and "Where's Daddy?" systems used in Gaza, which allegedly used AI to identify thousands of potential targets for human review. Even if there is a human-in-the-loop, if that human is just rubber-stamping a list of a hundred targets every hour because the AI is "usually right," are they really exercising judgment?
Corn
It becomes a checkbox. The human is just a legal fig leaf at that point.
Herman
Exactly. And that is the danger of the speed of AI. It moves so much faster than human cognition that we risk becoming the slow, weak link in the chain. Eventually, we might get pushed out of the loop entirely because we are too slow to be useful in a modern electronic warfare environment.
Corn
Let us circle back to the Claude and Palantir partnership for a second. Palantir has this platform called AIP, the Artificial Intelligence Platform. They have these demos where they show a commander talking to a chatbot. The commander says, "Show me all enemy movements in this sector," and the AI highlights them on a map. Then the commander says, "What are my options for neutralizing that artillery battery?" And the AI lists three different plans, including the probability of success and the estimated collateral damage for each.
Herman
Right. That is the agentic part. And notice that in that scenario, the AI is not pulling the trigger. It is acting as a "force multiplier" for the commander. It is giving them better information, faster. That is the immediate use case. It is data synthesis on steroids. It can read through intercepted communications, satellite imagery, and ground reports all at once.
Corn
But how long until the commander just says, "Okay, execute the option with the highest probability of success and the lowest collateral damage, and let me know when it is done"?
Herman
That is the slippery slope. And that is why the transparency of these models is so important. If the military is using closed-source models like Claude, we have to trust that the guardrails are actually there.
Corn
It is interesting that the military is choosing closed-source. Usually, they want to own every piece of the tech. Why go with Anthropic?
Herman
I think it is because the pace of development is so fast that even the Pentagon cannot keep up. If they tried to build their own Large Language Model from scratch, it would be two generations behind by the time it was deployed. By partnering with Anthropic via Palantir and using AWS’s secure cloud, they get the cutting edge immediately. And because it is on SIPRNet, the data stays in their walled garden. Anthropic is not getting the military’s secrets to train Claude four or five.
Corn
At least, that is what the contract says. But it still feels like a major shift for a company that was founded by people who left OpenAI specifically because they were worried about the safety and commercialization of AI.
Herman
It is a massive pivot. But you could also argue that if this technology is going to be used by the military anyway, wouldn't you want the people who care most about safety to be the ones building it? It is the Oppenheimer dilemma all over again. If we do not build the "safe" version, our adversaries will build the "unsafe" version.
Corn
I was just thinking of Oppenheimer. It is the exact same tension. If we do not build it, the other side will, and theirs will be worse. So we have to build it, but we have to try to make it as safe as possible.
Herman
And the other side is definitely building it. We know that China and Russia are investing billions into autonomous systems. They are looking for a competitive advantage, period. They are looking at "swarm intelligence" where hundreds of drones coordinate without any human oversight.
Corn
So, to answer Daniel’s question about the current state, it sounds like we are in this weird middle ground. We have the technical capability for fully autonomous flight and targeting, but we are still maintaining this thin layer of human oversight, even if that layer is getting thinner and thinner every day.
Herman
That is a perfect summary. The tech is there. The AI pilots are already better than humans in many scenarios. The computer vision is getting better at identifying targets than the human eye. The only thing holding back full autonomy right now is policy, ethics, and the fear of the unknown.
Corn
I want to go a bit deeper on the technical side of the drone piloting that Daniel asked about. You mentioned Reinforcement Learning, but how does that actually handle the transition from the simulator to the real world? I have heard about the "sim-to-real" gap. It seems like a huge hurdle for something as high-stakes as a military drone.
Herman
Oh, the sim-to-real gap is the bane of every robotics engineer's existence. You can train a model to be perfect in a digital world, but then you put it in the real world and the sun is at a slightly different angle, or there is a bit of dust on the lens, and the model completely loses its mind.
Corn
So how do they fix that?
Herman
A technique called "domain randomization." When you are training the AI in the simulator, you do not just give it one environment. You give it millions of variations. You randomly change the gravity, the friction of the air, the lighting, the colors of the objects. You basically make the simulator so crazy and unpredictable that the real world actually feels simple by comparison.
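A minimal sketch of domain randomization: each training episode samples a different set of physics and visual parameters. The parameter ranges are made up for illustration, and the commented-out train_one_episode call is a hypothetical hook into the simulator.

```python
# Sketch of domain randomization: every training episode gets different physics/visuals.
import random

def sample_randomized_world():
    return {
        "gravity":        random.uniform(9.0, 10.6),    # m/s^2, perturbed around 9.81
        "air_density":    random.uniform(0.9, 1.4),     # kg/m^3
        "wind_gust_mps":  random.uniform(0.0, 22.0),    # up to roughly 50 mph
        "motor_strength": random.uniform(0.85, 1.15),   # per-drone manufacturing variance
        "light_level":    random.uniform(0.2, 1.0),     # dawn to full daylight
        "camera_blur_px": random.choice([0, 1, 2, 3]),  # simulated lens blur
    }

for episode in range(3):
    world = sample_randomized_world()
    # train_one_episode(policy, world)   # hypothetical call into the simulator
    print(f"episode {episode}: {world}")
```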
Corn
That is brilliant. It is like training a runner by making them run on sand, through water, and across ice, so that when they finally get to a paved track, it is easy.
Herman
Exactly. And they also use something called a "digital twin." They build a highly accurate digital model of the specific drone they are using, down to the exact weight of every screw and the latency of every motor. This allows the AI to learn the specific quirks of that machine.
Corn
And what about the output side? Daniel asked how it provides flight control feedback. Is it just sending signals to the rotors?
Herman
Usually, the AI is outputting what we call "setpoints." Instead of saying "move this motor by five volts," it says, "I want the drone to be at this specific orientation and this specific velocity in the next ten milliseconds." Then, a lower-level controller, which is usually a traditional non-AI system called a PID controller—that stands for Proportional-Integral-Derivative—handles the actual electrical signals to the motors to reach that setpoint.
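Here is a minimal PID controller sketch showing how a setpoint from the higher-level policy gets turned into a continuous correction; the gains and the toy altitude model are illustrative, not tuned for any real airframe.

```python
# Minimal PID controller sketch: turns a setpoint error into a motor correction.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: the AI asks for 12.0 m altitude; the PID nudges thrust toward it every 10 ms.
altitude_pid = PID(kp=0.8, ki=0.05, kd=0.3)
altitude, dt = 10.0, 0.01
for _ in range(5):
    thrust = altitude_pid.update(setpoint=12.0, measurement=altitude, dt=dt)
    altitude += 0.02 * thrust          # toy plant model: thrust slowly raises altitude
    print(f"thrust={thrust:+.3f}  altitude={altitude:.3f}")
```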
Corn
Ah, so there is still a layer of traditional, reliable engineering at the very bottom of the stack.
Herman
Yes, and that is a crucial safety feature. It means that if the AI tries to do something physically impossible or dangerous, the low-level controller can sometimes act as a buffer. It is all about layers of defense.
Corn
This makes me think about the future of warfare in a way that is very different from the movies. It is not just about big robots. It is about thousands of small, cheap, intelligent things.
Herman
Swarm intelligence. That is the next frontier. Imagine a swarm of five hundred tiny drones, each the size of a bird. They are all running their own local AI pilots, but they are also communicating with each other through a model like Claude or a similar agentic system. They can coordinate a search, bypass defenses, and act as a single, distributed organism.
Corn
And if you shoot down ten of them, it does not matter. The rest of the swarm just re-adjusts.
Herman
Exactly. It makes traditional defense systems like missiles or anti-aircraft guns almost useless. You cannot fire a million-dollar missile at a hundred-dollar drone. The economics of warfare are being completely flipped on their head.
Corn
It feels like we are living through the biggest shift in military technology since the invention of gunpowder or the atomic bomb. And it is happening so fast that the public, and even the politicians, can barely keep up.
Herman
It is. And that is why I am glad Daniel sent this prompt. We need to be talking about this. We need to understand the difference between a language model that helps a commander think and a reinforcement learning agent that actually flies the weapon. They are both AI, but they represent very different risks.
Corn
So, what are the practical takeaways here? If someone is listening to this and feeling a bit overwhelmed, what should they keep in mind?
Herman
The first thing is to realize that AI in the military is not one single thing. It is a spectrum. On one end, you have administrative and logistical AI, which is mostly harmless and actually makes things more efficient. In the middle, you have intelligence and decision-support AI, like what Anthropic is doing with Palantir. This is where the ethical questions start to get serious. And on the far end, you have kinetic, autonomous weapons. That is the red line.
Corn
And the second takeaway is that the "human-in-the-loop" is becoming a very fragile concept. We need to be very careful that we do not let it become a meaningless term. We need to define what actual human judgment looks like in an age of millisecond decision-making. If a human only has half a second to say "yes" or "no," is that really judgment?
Herman
Absolutely. And finally, we should realize that the companies building this tech, like Anthropic, are now part of the national security infrastructure. Whether they like it or not, their safety research is now a matter of global stability. The "Constitutional AI" they developed for chatbots is now being tested in the most high-stakes environment imaginable.
Corn
It is a lot to process. I think I am going to need another coffee after this one.
Herman
I think we both will. But it is important. This is the world we are building. The "Weird Prompts" of today are the military doctrines of tomorrow.
Corn
Well, thank you, Herman. I always feel a bit more informed and a lot more concerned after talking to you about this stuff.
Herman
That is what I am here for, Corn. To keep us both awake at night.
Corn
You are doing a great job. And thank you to Daniel for sending this in. It is exactly the kind of thing we love to dig into, even if it is a bit heavy.
Herman
Definitely. If any of you listening have thoughts on this, or if you have a prompt of your own you want us to tackle, please get in touch. We love hearing from you.
Corn
You can find us at myweirdprompts dot com. We have a contact form there, and you can find the RSS feed for the show if you want to subscribe. And hey, if you have been enjoying our deep dives, please leave us a review on Spotify or wherever you listen to your podcasts. It really does help other people find the show, and we appreciate it more than you know.
Herman
It really does. We read all of them.
Corn
Alright, I think that is a wrap for today. This has been My Weird Prompts.
Herman
Until next time, stay curious.
Corn
Goodbye.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.