#1242: The Physical Backbone: Rebuilding the Internet for AI

Forget the "cloud" metaphor. Explore the massive physical pipes, Tier-1 providers, and new fiber tech powering the 2026 AI revolution.

Episode Details
Published
Duration
19:39
Pipeline
V5
TTS Engine
chatterbox-regular
LLM

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The "cloud" is a powerful metaphor, but it often masks the heavy, expensive, and fragile physical infrastructure that makes the digital world possible. In reality, the internet is a vast plumbing system of glass and silicon. As of 2026, this system is undergoing its most significant transformation since its inception, driven primarily by the unprecedented data demands of artificial intelligence.

The Tier-One Elite and the Barter System

At the heart of the global internet lie the Tier-one providers—an exclusive group of companies like AT&T, Lumen, and NTT. These entities own the massive fiber-optic networks that span continents and oceans. The stability of the web relies on "settlement-free peering," a high-stakes barter system in which these giants exchange traffic for free because the value of their networks is roughly symmetrical. If you are not in this club, you are a customer paying for transit. This hierarchy is shifting, however, as hyperscalers like Amazon and Google invest hundreds of billions in their own private backbones, becoming Tier-one entities in their own right.
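The symmetry test behind settlement-free peering can be sketched in a few lines. This is a minimal illustration; the function name and the 2:1 symmetry threshold are assumptions for the example, not any carrier's actual policy.

```python
# Sketch of the settlement-free peering logic: peer for free only when
# traffic in each direction is roughly balanced. The 2:1 threshold is
# an illustrative assumption, not a real carrier's policy.

def peering_decision(out_tbps: float, in_tbps: float,
                     max_ratio: float = 2.0) -> str:
    """Classify a relationship based on traffic symmetry."""
    hi, lo = max(out_tbps, in_tbps), min(out_tbps, in_tbps)
    if lo == 0 or hi / lo > max_ratio:
        return "paid transit"          # lopsided: the smaller party pays
    return "settlement-free peering"   # symmetrical: barter, no money

print(peering_decision(4.8, 5.1))  # roughly symmetrical flows
print(peering_decision(9.0, 0.5))  # one side sends far more
```

Real peering agreements add many more conditions (geographic footprint, number of interconnect locations), but the ratio test captures the core barter logic.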

From North-South to East-West Traffic

The rise of AI has fundamentally changed how data moves. Traditional internet traffic was "North-South," meaning it moved from a server to an end-user. Today, the explosion of GPU clusters has shifted the focus to "East-West" traffic—massive amounts of data moving between servers within and between data centers. To handle this, the industry is moving toward 800-gigabit and 1.6-terabit Ethernet interfaces. The hardware required to manage this is equally massive; core routers have evolved into industrial-sized machines using specialized ASICs to switch multi-terabit traffic with near-zero latency.
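A quick back-of-the-envelope calculation shows why East-West links are racing toward these speeds: the time to replicate a large AI training checkpoint between servers at each line rate. The 2 TB checkpoint size is an illustrative assumption, and the math ignores protocol overhead and congestion.

```python
# Ideal transfer time for a large payload at several line rates.
# Illustrates why GPU-cluster interconnects push toward 1.6 Tb/s.

def transfer_seconds(payload_bytes: float, line_rate_bps: float) -> float:
    """Ideal transfer time, ignoring overhead and congestion."""
    return payload_bytes * 8 / line_rate_bps

CHECKPOINT = 2e12  # a 2 TB training checkpoint (illustrative size)
for rate_bps in (100e9, 800e9, 1.6e12):
    secs = transfer_seconds(CHECKPOINT, rate_bps)
    print(f"{rate_bps / 1e9:6.0f} Gb/s -> {secs:5.1f} s")
```

At 100 Gb/s the sync takes minutes; at 1.6 Tb/s it takes seconds—the difference between a cluster that stalls between training steps and one that doesn't.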

Breaking the Speed of Glass

One of the most exciting physical innovations is the shift toward hollow-core fiber. Standard fiber-optic cables use a solid glass core, through which light travels about 30% slower than it does in a vacuum. In the world of real-time AI inference, that 30% latency penalty is a major bottleneck. Hollow-core fiber lets light travel through air instead, effectively "speeding up" the internet. This technology is essential for the distributed data center models used by hyperscalers, where multiple buildings must act as a single, cohesive supercomputer.
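The "about 30%" figure follows directly from the refractive index: light in a solid silica core travels at c/n with n ≈ 1.47, while hollow-core fiber is close to the vacuum speed. The link distances below are illustrative.

```python
# One-way propagation delay through solid-core vs hollow-core fiber.
# n = 1.47 is a typical refractive index for silica; n = 1.0003 for air.

C_VACUUM = 299_792_458  # speed of light in vacuum, m/s

def one_way_latency_ms(distance_km: float, n: float) -> float:
    """Propagation delay in ms through a medium of refractive index n."""
    return distance_km * 1_000 * n / C_VACUUM * 1_000

for km in (50, 500):
    solid = one_way_latency_ms(km, 1.47)
    hollow = one_way_latency_ms(km, 1.0003)
    saving = (1 - hollow / solid) * 100
    print(f"{km:4d} km: solid {solid:.3f} ms vs hollow {hollow:.3f} ms "
          f"({saving:.0f}% lower)")
```

Over a 500 km inter-site link the saving is a fraction of a millisecond—negligible for a web page, decisive when thousands of GPUs must exchange gradients every step.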

The Rise of Local Interconnects

While the backbone connects the world, Internet Exchange Points (IXPs) are becoming the front lines of digital sovereignty. By allowing local ISPs and content providers to swap traffic directly, IXPs reduce reliance on expensive international links. Brazil’s IX.br has recently surpassed major European hubs in throughput, highlighting a trend toward decentralization. This local peering is a vital safety valve, preventing the global backbone from bursting under the weight of AI-generated traffic.

As the physical map of the internet grows more complex—with the BGP routing table now exceeding one million entries—the line between the public internet and private hyperscaler networks is blurring. The "cloud" is no longer just a place where we store data; it is a specialized, high-performance engine being rebuilt in real-time to power the next generation of intelligence.

Downloads

Episode Audio

Download the full episode as an MP3 file

Download MP3
Transcript (TXT)

Plain text transcript file

Transcript (PDF)

Formatted PDF with styling

Read Full Transcript

Episode #1242: The Physical Backbone: Rebuilding the Internet for AI

Daniel's Prompt
Daniel
Custom topic: We've discussed the importance of subterranean cables in connecting the internet in various episodes. But not specifically the internet backbone itself. We often imagine the internet to be an amorphou

## Current Events Context (as of March 15, 2026)

### Recent Developments

- AWS $200B capex plan for 2026: AWS is executing a $200 billion capital expenditure plan, adding 3.8 GW of data center c
Corn
You know, Herman, I was looking at one of those real-time global internet traffic maps the other day—the ones where the glowing lines pulse between cities—and it struck me how much we lie to ourselves with the word "cloud." We talk about it like it is this fluffy, omnipresent atmosphere that just exists everywhere at once, like oxygen. But in reality, if you could put on a pair of magic glasses and see the physical infrastructure, it would look less like a cloud and more like a high-pressure plumbing system made of glass and silicon. It is heavy, it is expensive, and it is surprisingly fragile.
Herman
It is the ultimate abstraction, Corn. We have spent two decades training the general public to forget that their data actually has to travel through a specific piece of glass buried in a trench next to a highway or resting on a continental shelf. Today's prompt from Daniel is a perfect reality check. He wants us to look at the actual physical reality of the internet backbone, the Tier-one providers, and the massive interconnects that keep the whole thing from bursting at the seams. And honestly, with this twenty twenty-six AI surge we are living through, those seams are being tested like never before.
Corn
It is perfect timing because we have touched on pieces of this before. We did that deep dive into the Border Gateway Protocol in episode twelve ten, looking at the logical side of how traffic finds its way around—the "map" of the internet, if you will. But Daniel wants us to look at the actual pipes. The stuff you can touch. And I think people forget that while BGP is the map, the backbone is the actual highway system. And man, those highways are getting crowded. We are seeing a total rethink of how we build the core of the internet because the old ways just cannot handle the sheer volume of data being generated by these massive GPU clusters.
Herman
They are more than just crowded, Corn. They are being completely rebuilt. If you look at the scale of what is happening right now in March of twenty twenty-six, it is staggering. Amazon Web Services is currently executing a two hundred billion dollar capital expenditure plan for this year alone. Just last year, they added nearly four gigawatts of data center capacity. To put that in perspective, that is enough power for millions of homes, all dedicated to processing and moving data. When you are building at that scale, you cannot just rely on the public internet. You have to build your own private backbone that is often more robust than what entire countries have.
Corn
That is the part that fascinates me. We have this hierarchy where we have the Tier-one providers, the guys who essentially own the internet. But now the hyperscalers like AWS and Google are becoming their own Tier-one entities in all but name. Before we get into the new tech like hollow-core fiber, walk me through that Tier-one logic again. Because the economic model there is what actually keeps the global web stable, right? It is not just a free-for-all.
Herman
It is a very exclusive club, and the entry fee is basically owning enough fiber that everyone else needs you as much as you need them. The definition of a Tier-one provider is a network that can reach every other network on the internet without paying for transit. They use what we call settlement-free peering. If you are AT&T, or Verizon, or Lumen, or NTT, or Arelion—which people might remember as Telia Carrier—you look at another Tier-one provider and say, "Look, my customers want to talk to your customers, and your customers want to talk to mine. The traffic flow is roughly equal. Let us just hook our routers together and call it even."
Corn
It is like a high-stakes barter system. No money changes hands because the value is symmetrical. But if I am a smaller ISP, say a local provider in a mid-sized city, I am a Tier-two or Tier-three. I do not have that leverage.
Herman
You are paying for the privilege of their reach. You are a customer of the backbone. And that is why companies like Arelion are so critical. They operate seventy-five thousand kilometers of fiber across three continents. They are the ones actually carrying the bits across the Atlantic or across the continental United States. When you hit enter on a search query, those bits are hopping onto a backbone owned by someone like Lumen or Deutsche Telekom. Lumen alone has a backbone capacity of over one hundred terabits per second. They are moving petabytes every single day through these invisible veins.
Corn
I love the term "Quiet Backbone" that you used before we started recording. We always hear about subsea cables because they are dramatic—ships dropping giant wires into the abyss—or we hear about data centers because they use so much power and look like Borg cubes. But these Tier-one backbone links are almost entirely invisible to the public eye. No one reports on a new fiber run along a railroad track in Nebraska, yet that link might be carrying ten percent of the country's Netflix traffic.
Herman
And the hardware managing that traffic is under intense pressure. We are moving from the era of just connecting people to the era of connecting massive GPU clusters. This is a fundamental shift. In the old days, a core router just had to move web traffic and video streams. It was "North-South" traffic—from the server to the user. Now, we are talking about "East-West" traffic—data moving between servers inside and between data centers at a scale that is exploding. This is why the core router market is projected to hit twelve billion dollars by twenty thirty-five.
Corn
These are not the routers you have in your house, obviously. I imagine these things are monsters.
Herman
They are purpose-built silicon monsters. We are talking about chassis the size of industrial refrigerators, like the Cisco CRS or the Juniper PTX series. These machines are designed to switch multi-terabit traffic with almost zero latency. They use specialized ASICs—Application-Specific Integrated Circuits—that are hard-coded to handle packet switching at line speed. They do not have a general-purpose CPU doing the heavy lifting; it is all done in the hardware.
Corn
But I saw that Nokia and Ericsson just partnered up recently to push for something different—disaggregated, open-source core routing. That feels like a massive shift away from the traditional Cisco and Juniper dominance we have seen for decades. Why now?
Herman
It is a direct response to the hyperscalers wanting more control. If you are AWS or Google, you do not want to be locked into a proprietary Cisco operating system if you can run a software-defined network on open hardware. It allows them to scale much faster and customize the routing for their specific AI workloads. And they need to scale, because as we saw in Geoff Huston's latest report from January twenty twenty-six, the IPv4 BGP routing table has officially surpassed one million prefix entries. The map of the internet is getting so complex that the hardware has to be incredibly specialized just to keep track of where everything is.
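The per-packet work behind that one-million-entry table is a longest-prefix match: the router forwards to the most specific route that covers the destination. A toy version using Python's stdlib `ipaddress` module—the routes and next-hop labels are invented for illustration, and real routers do this in TCAM/ASIC hardware, not a linear scan.

```python
# Toy longest-prefix-match lookup over a tiny routing table.
# Routes and next-hop names are made up for illustration.
import ipaddress

ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"): "transit-upstream",
    ipaddress.ip_network("203.0.113.0/24"): "ixp-peer",
    ipaddress.ip_network("203.0.113.128/25"): "customer-port",
}

def next_hop(dst: str) -> str:
    """Return the next hop of the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in ROUTES if addr in net),
               key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("203.0.113.200"))  # the /25 beats the /24 and the default
print(next_hop("8.8.8.8"))        # falls through to the default route
```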
Corn
One million prefixes. That is a lot of directions to keep in your head at once. It reminds me of our discussion in Episode seven ninety-seven about who really owns the cloud. We are seeing this convergence where the people who own the data are now building the roads, the cars, and the maps all at once. But let us talk about the physical medium itself. You mentioned the AWS two hundred billion dollar spend. Part of that is going into something that sounds like science fiction: hollow-core fiber. Now, when I first heard that, I thought, "Wait, isn't fiber already a tube?" But it turns out, standard fiber is a solid core of glass.
Herman
That is a common misconception. Most people think light travels at the "speed of light" in fiber, but it actually travels about thirty percent slower through glass than it does through a vacuum or through air. This is the refractive index problem. In standard single-mode fiber, the light is bouncing through a solid glass core, which slows it down. But with hollow-core fiber, you are essentially creating a microscopic tunnel where the light travels through air.
Corn
So you are shaving off that thirty percent latency. That might not sound like much if you are just loading a cat video, but in the world of twenty twenty-six, where we have real-time AI inference and high-frequency trading, thirty percent is an eternity.
Herman
It is the difference between an AI model feeling laggy and feeling instantaneous. AWS is pushing this hard because they are moving toward a distributed data center model. Instead of one giant building, they have clusters of buildings spread across a region—what they call Availability Zones. To make those buildings act like one giant supercomputer for AI training, the interconnects have to be incredibly fast. We are talking about eight hundred gigabit and now one point six terabit Ethernet interfaces becoming the standard for these spine networks.
Corn
One point six terabits. I remember when getting a gigabit to the home felt like science fiction. Now we are talking about a single interface moving sixteen hundred times that. It really highlights the pressure the AI buildout is putting on the backbone. If you have a single AI training cluster, the internal traffic inside that cluster can have more bandwidth than the entire internet connection of a small country.
Herman
That is exactly right. And that brings us to the Internet Exchange Points, or IXPs. This is one of my favorite parts of the infrastructure because it is where the "community" aspect of the internet still lives. An IXP is basically a big room with a very high-end switch where different networks—ISPs, content providers, government agencies—come to meet and exchange traffic locally. Instead of my traffic going from São Paulo all the way to Miami and back just to reach my neighbor, we both plug into the local IXP and swap bits right there.
Corn
And the scale of some of these is mind-blowing. We have to talk about IX dot BR in Brazil. They hit a massive milestone recently, didn't they?
Herman
They did. In April of twenty twenty-five, IX dot BR reached forty terabits per second of aggregate peak throughput. Their São Paulo node alone exceeded twenty-two terabits. To put that in perspective, that is significantly larger than the major exchanges in London or Amsterdam. It is a huge win for digital sovereignty. If you can keep your traffic local, you do not have to pay a Tier-one provider for expensive international transit. We saw a similar milestone in Africa recently, where NAPAfrica in Johannesburg hit five terabits. This decentralization is the only way the internet survives the AI traffic surge. We cannot route everything through a few hubs in Northern Virginia and London anymore.
Corn
It also acts as a bit of an anti-monopoly mechanism, doesn't it? If local ISPs can peer with each other for free or for a small membership fee at an IXP, it levels the playing field against the giant backbone providers. But I wonder, Herman, does the rise of these private hyperscaler backbones threaten that? If AWS and Google are building their own private web, do they even need the public IXPs anymore?
Herman
They still need them to reach the end users—the "eyeballs"—but you are right to point out the tension. We are seeing something really interesting happen here in early twenty twenty-six. AWS and Google launched a joint multicloud interconnect. Usually, these guys are fierce competitors, but they realized that their customers are running apps that span both clouds. So they built a direct physical link that lets customers scale bandwidth up to one hundred gigabits per second between AWS and Google Cloud without ever touching the public internet.
Corn
That is wild. It is like the two biggest kingdoms building a private bridge between their castles while everyone else is still using the public muddy roads. It makes sense for performance, but it definitely changes the topology of the internet. It is becoming a collection of private interconnected islands rather than a single unified web. It reminds me of the "feudal system" analogy we've discussed before. You have these massive lords who own the infrastructure, and we are all just living on their land.
Herman
And that is why the work of people like Geoff Huston—pronounced HEW-ston, by the way, not like the city—at APNIC is so important. He tracks the health of the global routing table. If we see the public BGP table start to stagnate while private traffic explodes, it tells us the internet is becoming more opaque. We are losing that transparency where anyone could see how to get from point A to point B. If the "backbone" becomes a series of private agreements behind closed doors, the original vision of the internet as a public utility starts to fade.
Corn
But from a technical perspective, the innovation is still incredible. When you think about the engineering required to maintain a one point six terabit link over hundreds of miles of fiber, with all the optical amplification and error correction involved, it is a miracle it works at all. We are getting close to the Shannon limit, right? The theoretical maximum amount of information you can send over a channel?
Herman
We are. To keep scaling capacity anyway, we are having to use more and more sophisticated wavelength-division multiplexing, or WDM, where we cram dozens of different colors of light down the same strand of glass. Each color carries its own data stream. It is like a rainbow of data. And when you combine that with hollow-core fiber and these new disaggregated routers from the Nokia-Ericsson partnership, you see the blueprint for the next decade.
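The WDM arithmetic here is simple: one strand's capacity is the wavelength count times the per-channel rate. The channel counts and rates below are ballpark DWDM figures, not any specific product's spec sheet.

```python
# Aggregate capacity of a single fiber strand under WDM:
# number of wavelengths ("colors") times per-channel data rate.

def strand_capacity_tbps(channels: int, per_channel_gbps: float) -> float:
    """Total strand capacity in Tb/s."""
    return channels * per_channel_gbps / 1_000

print(strand_capacity_tbps(64, 800))  # 64 colors at 800 Gb/s each
print(strand_capacity_tbps(96, 400))  # denser grid, slower channels
```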
Corn
So, if we look at the practical takeaways for someone listening who works in tech or just cares about how the world works, what should they be watching? Because this feels like one of those "under the hood" topics that actually determines who wins and loses in the next five years.
Herman
First, keep an eye on the growth of IXPs in emerging markets. Places like NAPAfrica and IX dot BR are the real bellwethers for the next billion users. If those nodes are growing, it means the internet is becoming more resilient and less dependent on Western hubs. Second, watch the move toward disaggregated routing. If the Nokia and Ericsson partnership really takes off, it could break the hardware margins of the traditional giants and make core infrastructure much cheaper to deploy for everyone, not just the hyperscalers.
Corn
And I would add, pay attention to the latency numbers. If you start seeing your cloud provider offering "ultra-low latency zones," they are probably using that hollow-core fiber or direct peering we talked about. It is going to be a huge competitive advantage for things like remote surgery, autonomous vehicle coordination, and of course, AI agents that need to talk to each other in milliseconds. We talked about the importance of precision in Episode five eighty-six regarding high-precision timekeeping, and that applies here too. If your backbone is slow, your high-precision timing doesn't matter.
Herman
It is also worth following Geoff Huston's annual reports. He has a way of cutting through the marketing fluff and showing you the actual health of the BGP table. If you want to know if the internet is actually growing or just getting more cluttered, that is where the truth is. He noted that the IPv4 table grew by about five percent last year, which is over fifty-four thousand new prefixes. That is a lot of new "roads" being added to the map.
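Compounding those figures forward gives a feel for where the table is headed. A simple projection, assuming the roughly 5% annual growth rate mentioned above holds steady—which is the illustrative assumption here.

```python
# Compound-growth projection of the IPv4 BGP table size, assuming a
# steady ~5% annual growth rate from about one million prefixes.

def project_prefixes(start: int, annual_growth: float, years: int) -> int:
    """Project table size after `years` of compound growth."""
    return round(start * (1 + annual_growth) ** years)

for years in (1, 5, 10):
    total = project_prefixes(1_000_000, 0.05, years)
    print(f"+{years:2d} years: ~{total:,} prefixes")
```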
Corn
It is funny, we started by saying the cloud is a lie, and the more we talk about these refrigerator-sized routers and seventy-five-thousand-kilometer fiber networks, the more that feels true. The internet is a physical, heavy, expensive machine. It requires two hundred billion dollar checks and thousands of people in orange vests digging trenches. It is not an amorphous flow; it is a rigid hierarchy of peering agreements and glass tubes.
Herman
And it is a fragile machine too. We have seen how a single anchor drag or a misconfigured BGP route can take out the internet for an entire region. But the move toward more IXPs and these private interconnects is actually making it more redundant. We are building more paths, even if some of those paths are private. Redundancy is the only way we survive the predicted six-fold increase in traffic by twenty thirty.
Corn
It makes me wonder what the "traffic jams" of the future look like. If the internet backbone were a physical city, I imagine the core routers are the massive multi-level interchanges, and the IXPs are the local markets where everyone trades goods. The traffic jams today aren't cars sitting still; they are packets being dropped because a buffer somewhere is full, or a BGP route leak that sends traffic for Tokyo through a small ISP in Pennsylvania. That is the digital equivalent of a massive pile-up on the highway.
Herman
But the hardware is getting better at self-healing. These new core routers can reroute traffic in milliseconds if a link goes down. They have to, because at one point six terabits per second, even a one-second outage means a staggering amount of lost data. We are living in the age of the terabit, and it is only going to get faster. I suspect by twenty thirty, we will be talking about petabit backbones as the standard.
Corn
Well, Herman, I think we have successfully demystified the plumbing. It is a lot of glass, a lot of very hot silicon, and a very exclusive club of companies that don't pay each other for the privilege of carrying our data. It is a fascinating economic and technical balance. And honestly, it is one of the few things in the world that mostly works exactly the way it is supposed to, twenty-four hours a day.
Herman
It really is a miracle of coordination. It makes you think twice before you hit refresh on a page that isn't loading, doesn't it? There is a lot of work happening behind the scenes—from the hollow-core fiber to the BGP tables—to get those bits to you.
Corn
Well, I think that is a solid place to wrap this one up. We covered the Tier-ones, the core routers, the IXPs, and the massive AI-driven buildout that is reshaping the very foundation of the web. Thanks as always to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes here.
Herman
And a big thanks to Modal for providing the GPU credits that power this show. We literally could not do this without that infrastructure—the very stuff we've been talking about today.
Corn
This has been My Weird Prompts. If you are enjoying the show, a quick review on your podcast app really does help us reach new listeners who might want to know how the internet actually works under the hood.
Herman
Find us at myweirdprompts dot com for the full archive and all the ways to subscribe.
Corn
Catch you in the next one, Herman.
Herman
See you then, Corn.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.