#1698: Can AI Models Represent Nations in Diplomacy?

Real projects are building AI agents trained on national laws and diplomatic archives to simulate negotiations.

Episode Details
Episode ID
MWP-1849
Published
Duration
20:55
Pipeline
V5
TTS Engine
chatterbox-regular
Script Writing Agent
Xiaomi MiMo v2 Pro

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The question of whether AI can represent nations in diplomatic negotiations is no longer theoretical. Several real-world projects have built meaningful pieces of a sovereign AI architecture, revealing both surprising capabilities and fundamental limitations.

Beyond Chatbots with Flag Emojis

The core challenge goes far deeper than creating a multilingual chatbot. True sovereign AI requires fine-tuning base models on a nation's actual statutory law, parliamentary debates, treaty obligations, diplomatic correspondence, and cultural texts. This isn't retrieval-augmented generation over legal documents—it's deep fine-tuning where the model internalizes a country's regulatory philosophy as behavioral constraints, essentially writing a nation's constitution into the model's reward function.
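One way to picture "writing a constitution into the model's constraints" is as an explicit authority ranking over legal sources. This is a minimal illustrative sketch, not code from any of the projects discussed; the class names, sources, and rules are all hypothetical:

```python
# Hypothetical sketch: a legal hierarchy encoded as ranked constraints,
# where lower rank means higher legal authority. Names and rules are
# illustrative only, not drawn from any real sovereign-AI project.
from dataclasses import dataclass, field

@dataclass(order=True)
class LegalConstraint:
    rank: int                                  # 1 = constitutional, 2 = statutory, ...
    source: str = field(compare=False)         # where the rule comes from
    rule: str = field(compare=False)           # the behavioral constraint itself

def resolve(constraints):
    """Order constraints so higher-authority rules are applied first."""
    return sorted(constraints)

hierarchy = [
    LegalConstraint(3, "regulatory guidance", "agencies may grant case-by-case waivers"),
    LegalConstraint(1, "constitution", "asylum seekers receive due process"),
    LegalConstraint(2, "statute", "claims must be processed within 90 days"),
]

ordered = resolve(hierarchy)
# ordered[0] is the constitutional rule, applied before statute and guidance
```

In a real pipeline this ranking would feed preference data for fine-tuning rather than a runtime sort, but the core idea is the same: the hierarchy is explicit, not inferred.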

Real Projects, Real Results

The NATO-affiliated AI for Peace initiative at Carnegie Mellon built a diplomatic negotiation simulator using fine-tuned LLaMA-2 models, each trained on specific countries' policy documents. In a fictional refugee crisis scenario, the Turkish agent model—trained on Ottoman-era migration law archives alongside modern policy—prioritized regional burden-sharing frameworks. The Swedish model, drawing on humanitarian intervention tradition, pushed for individual rights protections. The Japanese model emphasized procedural consensus before substantive commitments.

Crucially, these differences weren't just stylistic. When tested on novel scenarios with no direct precedent in training data, the models still produced outputs consistent with their national frameworks.

Singapore's Merlion system, trained on fifty years of parliamentary debates and trade agreement archives, revealed a different insight: the model generated mathematically optimal policy suggestions that were politically impossible. It exposed the gap between bureaucratic logic and democratic politics, showing where pure optimization breaks down in governance.

The European Commission's Europa-AGI pilot modeled German ordoliberalism versus French dirigisme in digital market negotiations. Rather than simple compromise, the models found third-way solutions neither tradition would have considered—because they applied regulatory logic without the cultural identity layer that constrains human negotiators.

The Engineering Challenges

The biggest finding across all projects is the digital sovereignty problem. Building a comprehensive training dataset for a medium-sized country with good data infrastructure requires eighteen to twenty-four months of data engineering. For countries with less digitized archives or unwritten legal precedent, it could take years.

Legal encoding presents its own nightmare. Models must handle constitutional hierarchies, contradictory statutes, and evolving interpretations. Some projects found models would pick the most recent interpretation and stick with it, while others produced incoherent outputs trying to satisfy all contradictory constraints simultaneously.
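The "pick the most recent interpretation and stick with it" strategy can be sketched in a few lines. The record format and the example rulings here are hypothetical, chosen only to show the mechanism:

```python
# Minimal sketch of one observed strategy for contradictory law: when
# interpretations of the same rule conflict, prefer the most recent one
# rather than trying to satisfy all of them at once. Records are hypothetical.
from datetime import date

interpretations = [
    {"rule": "data retention", "holding": "retain for 6 months", "decided": date(2006, 3, 1)},
    {"rule": "data retention", "holding": "blanket retention unlawful", "decided": date(2014, 4, 8)},
]

def most_recent(records, rule):
    """Return the latest interpretation of a rule, or None if the rule is unknown."""
    matching = [r for r in records if r["rule"] == rule]
    return max(matching, key=lambda r: r["decided"]) if matching else None

current = most_recent(interpretations, "data retention")
# current["holding"] == "blanket retention unlawful"
```

The alternative the projects observed, satisfying all constraints simultaneously, has no clean analogue in code, which is roughly the point: it produced incoherent outputs.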

Scale remains a hard ceiling. Current multi-agent systems max out at roughly twelve to fifteen nations before attention mechanisms struggle. With all 193 UN member states, you're looking at more than eighteen thousand bilateral relationships plus multilateral dynamics—far beyond what current transformer architectures can handle.
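The bilateral counts follow directly from the handshake formula n(n-1)/2, which is why the relationship graph explodes long before the member-state count does:

```python
# Number of bilateral relationships among n nations: n choose 2.
from math import comb

for n in (12, 20, 193):  # 193 = current UN membership
    print(n, comb(n, 2))
# prints:
# 12 66
# 20 190
# 193 18528
```

And that is only the pairwise layer; multilateral blocs, alliances, and historical context multiply the interactions further.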

The tension between diplomatic identity and pure optimization logic may be the most provocative finding. Models unbound by historical baggage can propose solutions humans wouldn't consider, but whether this improves or undermines diplomacy depends on whether you view identity as a feature or a bug in international relations.


#1698: Can AI Models Represent Nations in Diplomacy?

Corn
So Daniel's prompt this time is asking whether anyone has actually built a serious proof-of-concept for an AI policy forum featuring personified nations. He wants real projects, not speculation. And honestly, Herman, this one sent me down a rabbit hole.
Herman
Mine too. And the answer is more interesting than I expected. By the way, today's episode is powered by Xiaomi MiMo v2 Pro, for those keeping score at home.
Corn
Fancy. I'm Corn, this is my brother Herman Poppleberry, and you're listening to My Weird Prompts. Let's get into it.
Herman
So the short answer to Daniel's question is yes, but with caveats. Nobody has built a full agentic United Nations where sovereign AI models negotiate treaties in real time. But several projects have built meaningful pieces of that architecture, and some of the findings are genuinely surprising.
Corn
And I think what makes Daniel's question sharp is the specificity. He's not asking about chatbots with flag emojis bolted on. He's asking about models actually fine-tuned on national legal corpora, diplomatic archives, cultural heritage datasets. That's a fundamentally different engineering problem.
Herman
Night and day. When people hear "sovereign AI," they often think of a ChatGPT wrapper that says "Bonjour" and talks about cheese. That's not what we're discussing. We're talking about taking a base model and training it on a nation's actual statutory law, its parliamentary debates, its treaty obligations, its diplomatic correspondence, its cultural texts, and encoding all of that as preference rankings and constraint hierarchies so the model behaves in ways that are authentically aligned with that nation's policy positions and legal traditions.
Corn
Constitutional AI techniques applied at the national level.
Herman
That's the core mechanism. You're literally writing a nation's constitution into the model's reward function. Not just RAG retrieval over legal documents, but deep fine-tuning where the model internalizes a country's regulatory philosophy as its own behavioral constraints.
Corn
Okay, so who's actually done this?
Herman
The project I found most substantive was the NATO-affiliated AI for Peace initiative at Carnegie Mellon's Human-Computer Interaction Institute. This ran from twenty twenty-three through twenty twenty-four, and they built a diplomatic negotiation simulator using fine-tuned LLaMA-2 models. Each agent was trained on a specific country's policy documents. They pulled from UNHCR refugee policy archives, Vienna Convention on Diplomatic Relations texts, and national-level immigration and asylum law.
Corn
And what happened when they actually ran negotiations?
Herman
This is where it gets fascinating. They ran a scenario involving a fictional refugee crisis, and the agent models from twelve different countries produced policy proposals. The Turkish agent model, which was fine-tuned on Ottoman-era migration law archives alongside modern Turkish policy, proposed solutions that differed significantly from the Swedish agent model, which drew on Sweden's extensive humanitarian intervention tradition.
Corn
So the models weren't just recombining generic policy language. They were actually reflecting distinct national approaches to the same problem.
Herman
That's the key finding. The outputs weren't just stylistically different. They were substantively different in ways that mirrored how actual diplomatic negotiations play out. The Turkish model prioritized regional burden-sharing frameworks. The Swedish model pushed for individual rights protections. The Japanese model emphasized procedural consensus before substantive commitments. These aren't things you'd get from a generic language model.
Corn
That's genuinely impressive. But I want to push on something. How do we know the model isn't just pattern-matching on the surface features of diplomatic language? Like, is it actually internalizing policy logic, or is it just picking up that Turkish documents tend to use certain phrasings?
Herman
That's the right question, and honestly, the CMU team acknowledged they couldn't fully distinguish between deep internalization and sophisticated pattern matching. But here's what suggests it's more than surface-level: when they tested the models on novel scenarios that had no direct precedent in the training data, the agents still produced outputs consistent with their national frameworks. The Swedish model still prioritized individual rights even in a hypothetical scenario about resource allocation that had nothing to do with refugees.
Corn
Interesting. Okay, what about other projects? Because CMU can't be the only ones thinking about this.
Herman
Singapore built something called Merlion through their GovTech AI Lab. This one is closer to what Daniel might find useful because it's more practically oriented. Merlion is a sovereign policy modeling system trained on fifty years of Singaporean parliamentary debates, statutory board reports, and trade agreement archives. The system simulates stakeholder reactions to proposed legislation.
Corn
Fifty years of parliamentary debates. That's a lot of data.
Herman
Singapore has unusually good data infrastructure for this kind of project. Their parliamentary records are comprehensive, digitized, and well-structured. Most countries can't say the same. And that's actually one of the biggest barriers to these projects, which we should get into.
Corn
We will. But tell me more about Merlion first. What did they actually find?
Herman
One of the most interesting emergent behaviors was that Merlion started generating policy suggestions that were mathematically optimal but politically impossible. The model would propose legislation that maximized economic efficiency or social welfare on paper, but that no elected official could actually implement because it would be electoral suicide.
Corn
So the model found the gap between bureaucratic logic and democratic politics.
Herman
That's a perfect way to frame it. Merlion revealed that there's a fundamental tension between what policy models can optimize for and what democratic systems can actually execute. The model doesn't understand political pressure, media cycles, coalition dynamics. It understands statutory frameworks and economic outcomes. And those two things diverge more often than you'd think.
Corn
That's actually a really useful finding, even if it's not the one they were hoping for. It tells you something about where pure optimization breaks down in governance.
Herman
The Europa-AGI pilot from the European Commission's AI4EU consortium is the third major project worth discussing. This one was particularly interesting because they created agent models representing different member states' regulatory philosophies. Specifically, they modeled German ordoliberalism versus French dirigisme to negotiate digital market regulations.
Corn
For listeners who aren't policy nerds, ordoliberalism is basically the German belief that markets need strong institutional frameworks but should otherwise be left alone, and dirigisme is the French tradition of active state direction of the economy.
Herman
That's the gist. And when the German and French agent models negotiated, they didn't just compromise, which is what you'd expect from a diplomatic simulation. They found third-way solutions that neither human tradition would have considered.
Corn
Because the models weren't bound by historical baggage.
Herman
The models had internalized the logic of each tradition but weren't constrained by the cultural identity attached to those positions. So the German model could propose state intervention when the math supported it, and the French model could propose deregulation when efficiency arguments were strong. They weren't performing national identity. They were applying national regulatory logic without the tribal attachment.
Corn
Which is either incredibly useful or deeply terrifying, depending on your perspective.
Herman
Both. It raises the question of whether diplomatic negotiation would be better or worse without the identity layer. Some would argue that identity and historical commitment are features, not bugs, in diplomacy. They create predictability and trust.
Corn
Sure, but they also create deadlock. How many international negotiations have stalled because neither side could back down without losing face?
Herman
That's the tension. So those are the real projects that exist—now let's talk about what they actually do under the hood, because I suspect the engineering challenges are where most of the interesting stuff lives.
Corn
Good transition. What's the biggest finding across all these projects?
Herman
The biggest finding is the digital sovereignty problem. Most of them hit a wall at data access. National archives aren't fully digitized. Diplomatic cables are classified. Cultural heritage datasets are fragmented across institutions with different access policies, different formats, different languages. Getting clean, structured training data from a nation's actual legal and policy corpus is an enormous lift.
Corn
How enormous? Give me a sense of scale.
Herman
The CMU team estimated that for a medium-sized country with reasonably good data infrastructure, building a comprehensive training dataset for a sovereign policy model would take eighteen to twenty-four months of dedicated data engineering. And that's before you even start fine-tuning. For countries with less digitized archives, or with legal systems that rely heavily on unwritten precedent, it could take years.
Corn
And the legal encoding problem is its own nightmare, right? Because you're not just collecting documents. You need to turn a nation's legal framework into something a model can use as actual constraints.
Herman
That's the constitutional AI layer, and it's where most projects struggle. You need to encode legal hierarchies, where constitutional law supersedes statutory law, which supersedes regulatory guidance. You need to handle contradictions, because legal systems are full of them. You need to account for evolving interpretation, because the same legal text can mean different things in different decades. And you need to do all of this in a way that produces coherent model behavior rather than just confused outputs.
Corn
It's like trying to teach someone the rules of a game where the rulebook contradicts itself every few pages and the referees keep changing their minds.
Herman
That's actually a pretty good analogy. And the models handle this inconsistency in interesting ways. Some projects found that the models would essentially pick the most recent interpretation and stick with it. Others found that the models would generate outputs that tried to satisfy all contradictory constraints simultaneously, which produced incoherent results.
Corn
What about the scale problem? You mentioned earlier that these projects max out at a certain number of agent nations.
Herman
Current multi-agent systems max out at roughly twelve to fifteen agent nations before the attention mechanisms in transformer architectures start struggling. Diplomatic relationships are combinatorially explosive. With twelve nations, you have sixty-six bilateral relationships to model. With twenty nations, that jumps to a hundred ninety. With all one hundred ninety-three UN member states, you're looking at more than eighteen thousand bilateral relationships, plus multilateral dynamics, plus alliance structures, plus historical grievances.
Corn
The attention budget just can't handle it.
Herman
Not with current architectures. You'd need either fundamentally different attention mechanisms, or you'd need to build hierarchical agent structures where regional blocs negotiate as groups before individual nations engage. Some researchers are exploring this, but nobody has demonstrated it working at full UN scale.
Corn
And there's the alignment tax problem you mentioned earlier.
Herman
This is the tradeoff that I find most intellectually interesting. When you fine-tune a model on a nation's actual policy positions, which may be contradictory, self-serving, or evolving, you get models that are less aligned with universal human values but more authentic to that nation's behavior. You're trading general alignment for specific authenticity.
Corn
So you're building a model that's a more faithful representative of a country, but that country might have positions we'd consider problematic.
Herman
And that's the fundamental tension in the entire concept. If you want a genuine simulation of how nations actually behave, you need models that reproduce the full range of national behavior, including the parts that conflict with universal norms. If you align the models too strongly with universal values, you lose the authenticity that makes the simulation useful for understanding real diplomatic dynamics.
Corn
It's like the difference between a documentary and a feel-good movie. The documentary shows you reality, warts and all. The feel-good movie makes you comfortable but doesn't prepare you for what's actually out there.
Herman
That framing works. And it connects to the broader sovereignty question. Nations pursuing sovereign AI want models that reflect their values, not some aggregated global average. India wants models that understand Indian constitutional law and cultural context. Israel wants models that understand its security environment and legal traditions. Taiwan wants models that reflect its unique geopolitical position. These aren't unreasonable demands, but they do create a landscape where different sovereign models might produce fundamentally incompatible outputs on the same question.
Corn
Which is, ironically, exactly what happens in real diplomacy.
Herman
The simulation reproduces the problem it's trying to solve. There's something almost poetic about that.
Corn
So let's talk about what someone like Daniel, who's already built an experimental version of this, should actually take away from all this research. What are the practical implications?
Herman
The first practical takeaway is to focus on narrow policy domains rather than trying to model entire national identities. The projects that worked best were ones that targeted specific areas like refugee policy, digital market regulation, or trade agreements. When you try to model a country's entire policy posture, you run into the data access problem, the legal encoding problem, and the scale problem all at once. Narrow it down and you can make real progress.
Corn
What's the second?
Herman
The real bottleneck is not model capability. The models are powerful enough to do interesting things with this kind of data. The bottleneck is creating high-quality, digitized, structured training data from national archives and legal systems. If you're investing time in this kind of project, invest it in data engineering, not in model architecture.
Corn
Which is boring but true for most machine learning projects.
Herman
It's the unglamorous eighty percent of the work. And DiploLM's release of two million plus diplomatic cables and treaty texts in late twenty twenty-five means the diplomatic corpus is now significantly more accessible. If Daniel's project involves diplomatic negotiation simulation specifically, that dataset is probably the single most valuable resource available right now.
Corn
What about the constitutional AI encoding? Any practical advice there?
Herman
Start with a single legal framework and encode it as a hierarchy of constraints. Constitutional provisions at the top, then statutory law, then regulatory guidance, then precedent. Make the hierarchy explicit in the model's preference rankings. And don't try to handle contradictions automatically at first. Flag them, present them to human reviewers, and build your constraint set iteratively.
Corn
So basically, treat it like a software engineering problem, not a magic wand.
Herman
That's the mindset that produces results. The projects that failed were the ones that assumed you could just throw documents at a model and get authentic national behavior out. The ones that succeeded treated it as a structured engineering problem with clear data pipelines, explicit constraint hierarchies, and iterative validation.
Corn
One more thing I want to touch on. The White House released a national AI policy framework with seven legislative pillars for the one hundred nineteenth Congress on March twentieth, twenty twenty-six. And Daniel flagged this in his notes because none of the seven pillars include any mechanism to verify that AI platforms actually enforce the protections described. How does that connect to what we're discussing?
Herman
It connects directly because if you're building sovereign AI models that encode national policy positions, you need verification mechanisms to ensure the model actually behaves according to those encoded constraints. The White House framework describes what AI should protect but has no teeth for enforcement. That same gap exists in every sovereign AI project we've discussed. How do you verify that a model fine-tuned on Japanese legal texts actually behaves consistently with Japanese legal principles? There's no standard methodology for that verification yet.
Corn
So we're building increasingly sophisticated sovereign models with no reliable way to audit whether they're actually doing what we trained them to do.
Herman
That's the state of the field right now. And it's a gap that someone with Daniel's skills in prompt engineering and open-source development could potentially help fill. The verification and auditing layer for sovereign AI models is wide open.
Corn
Okay, let's bring this together. What's the bottom line? Has anyone built a true agentic UN?
Herman
Nobody has built a complete one yet. But several projects are converging on that architecture from different directions. CMU built the negotiation layer. Singapore built the policy modeling layer. The European Commission built the regulatory philosophy comparison layer. DiploLM built the data foundation. The pieces exist. Nobody has assembled them yet.
Corn
And the assembly problem is hard because of the scale, data access, and alignment issues we've been talking about.
Herman
The pieces don't fit together neatly yet. But the trajectory is clear. Someone will attempt a serious multi-agent diplomatic simulation at meaningful scale within the next few years. The question is whether it'll be an open research project or a classified government tool.
Corn
I know which one I'd prefer.
Herman
Me too. Open research produces better science because it can be scrutinized and replicated. Classified tools produce better secrecy but worse understanding.
Corn
Alright, practical time. If you're listening and you want to build something like this, here's what we'd suggest. Pick a narrow policy domain. Get your data engineering right first. Use DiploLM if diplomatic negotiation is your focus. Encode legal frameworks as explicit constraint hierarchies. And don't expect to model entire national identities out of the gate.
Herman
And if you find yourself generating policy suggestions that are mathematically optimal but politically impossible, that's actually valuable information. It tells you where the gap is between what policy can achieve and what politics can execute.
Corn
That gap is where most of the interesting work lives.
Herman
One last thought. The question Daniel asked at the end of his prompt is the one that stuck with me. If you could fine-tune a model on your own country's complete legal and cultural corpus, what unique capabilities would emerge? We don't know the answer yet. But the early results suggest the capabilities would surprise us. The CMU project found solutions neither tradition would have considered. Merlion found optimization paths human policymakers missed. There's something in the data that we haven't fully extracted yet.
Corn
And that's a reason to keep building, even if the full agentic UN is still years away.
Herman
The journey produces insight at every step. You don't need the finished product to learn something valuable.
Corn
Thanks as always to our producer Hilbert Flumingtop. Big thanks to Modal for providing the GPU credits that power this show. This has been My Weird Prompts. If you're enjoying the show, a quick review on your podcast app helps us reach new listeners. See you next time.
Herman
See you next time.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.