One hundred and ten point nine billion dollars. That is not just a rounding error, Herman. That is the kind of money that buys you a small country or, apparently, just pays for the world’s collective cloud bill in the final three months of last year. I was looking at these numbers from Omdia that just came out yesterday, March twenty-sixth, and it is staggering. A twenty-nine percent increase year over year. We are not just playing with chatbots anymore. We are witnessing the massive, heavy-duty industrialization of artificial intelligence, and the bill is finally coming due.
The transition from experimentation to full-scale production is what is driving that surge. In twenty twenty-four and twenty twenty-five, companies were essentially kicking the tires. They were running small pilots, testing the waters with a few thousand dollars here and there. But now, in March of twenty twenty-six, people are no longer just asking a model to write a poem about their cat; they are integrating these models into every single layer of their enterprise stack. Herman Poppleberry here, by the way, and I have been diving deep into why this spending is so concentrated. Today’s prompt from Daniel is about the cloud landscape in twenty twenty-six, specifically looking at the big three—AWS, Azure, and GCP—and how the choice of provider is often less about who has the best tech and more about what I like to call legacy gravity.
Legacy gravity. That sounds like a sci-fi term for being stuck in your parents' basement because you can’t afford the escape velocity of a security deposit. But I get the point. You are saying if you already have a thousand Windows servers running in a closet somewhere, or a decade of data sitting in a specific format, you are probably not going to wake up tomorrow and decide to move everything to a niche provider just because their user interface is prettier or their mascot is cuter.
It is the single biggest predictor of cloud selection. We like to think of the cloud as this meritocratic utopia where the best features win, but the reality is much more grounded in inertia. If you are an organization deep in the Microsoft ecosystem, using Windows Server, SQL Server, and Active Directory, the gravity pulling you toward Azure is almost inescapable. They have this thing called the Azure Hybrid Benefit. If you already own those licenses for your on-premises hardware, you can bring them to the cloud and reduce your virtual machine costs by up to eighty percent. When a chief financial officer sees an eighty percent discount on the largest line item in the budget, the technical merits of the other clouds almost become secondary. It doesn't matter if the developer experience is a bit clunky if you are saving millions of dollars on licensing fees.
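The licensing math behind that figure can be sketched roughly as follows. Note that Microsoft typically markets the "up to eighty percent" number as the combined effect of the Hybrid Benefit plus a long-term reservation; both discount rates below are illustrative assumptions, not Azure's published prices.

```python
def effective_vm_rate(pay_as_you_go, license_discount=0.40, reservation_discount=0.60):
    """Approximate hourly VM rate after stacking a bring-your-own-license
    discount with a reserved-instance discount. Both rates are hypothetical
    placeholders; the point is that the two discounts compound."""
    return pay_as_you_go * (1 - license_discount) * (1 - reservation_discount)

# A $1.00/hour list price drops to roughly $0.24/hour, about
# seventy-six percent off, in the same ballpark as the headline figure.
rate = effective_vm_rate(1.00)
```

The key design point is that the discounts multiply rather than add, which is why a forty percent licensing credit and a sixty percent reservation discount land near, but not at, eighty percent total.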
Eighty percent is a massive wedge. It is hard to argue for a more elegant developer experience when the alternative is paying five times as much for the same compute. But that is the business side, the boardroom side. What about the teams who actually have to build the stuff? Because Daniel’s prompt mentioned he finds AWS overwhelming, which, let’s be honest, is the understatement of the century. Logging into the AWS console feels like walking into a massive warehouse where all the labels are written in a language that only exists in Jeff Bezos’s dreams. You just wanted to host a simple website, and suddenly you are looking at a list of three hundred services with names like Kinesis, Fargate, and Braket.
AWS is the builder’s cloud, but that comes with a very specific, very rigid philosophy. They prioritize what they call primitives. Instead of giving you a finished, polished tool, they give you two hundred and fifty plus granular building blocks. It is incredibly flexible, but it creates this "service soup." If you want to do something simple, you often have to touch four different services and navigate the labyrinth that is IAM, or Identity and Access Management. IAM is the perfect example of the AWS philosophy. It is powerful enough to secure a nuclear silo, but it is so complex that most developers end up just giving everything "admin" access out of pure frustration, which defeats the entire purpose.
I have seen that labyrinth. You want to give a developer permission to read a single file in a storage bucket, and three hours later you are knee-deep in JSON policies, trying to figure out why a bucket policy is overriding a user permission which is being blocked by a service control policy. It feels like AWS was built by engineers for engineers who never want to leave their terminal and who find joy in the struggle. It is not exactly welcoming to the uninitiated. It’s like being handed a box of a million Lego pieces but no instructions, and half the pieces are microscopic.
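The layered evaluation Corn is fighting follows a documented order: an explicit deny anywhere wins, otherwise at least one allow is required, otherwise access is implicitly denied. A minimal sketch of that core rule (toy statement dictionaries, not real AWS policy documents, and ignoring the further subtleties of permission boundaries and cross-account logic):

```python
def evaluate(statements):
    """IAM-style decision logic: an explicit Deny beats everything,
    then at least one Allow is needed, else the default is denial."""
    effects = {s["Effect"] for s in statements}
    if "Deny" in effects:
        return "Deny"
    if "Allow" in effects:
        return "Allow"
    return "ImplicitDeny"

# A bucket-policy Allow silenced by a service control policy Deny,
# i.e. the three-hour debugging session described above.
decision = evaluate([
    {"Effect": "Allow", "Source": "bucket-policy"},
    {"Effect": "Deny", "Source": "service-control-policy"},
])
```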
The complexity is a side effect of their first-mover advantage. They were building these services before anyone knew what the standard patterns should be. They were inventing the cloud as they went along. GCP, or Google Cloud Platform, had the benefit of coming later and being much more opinionated. Their philosophy is entirely different. They take the internal innovations Google developed for their own massive scale—things like Kubernetes and BigQuery—and they package them for the rest of us. It is an engineering-first cloud, but with a much higher level of abstraction and automation. They don't give you the raw primitives as often; they give you the "Google way" of doing things.
And that is why developers like Daniel love it. The project-centric hierarchy in GCP is so much more intuitive. In AWS, everything is just floating in your account unless you use tags or separate accounts. In GCP, you have a project, you have your resources in that project, and when you are done, you delete the project and everything goes away. It is clean. It feels like a modern operating system rather than a collection of hardware parts. It’s the difference between a custom-built PC where you have to worry about the voltage of your RAM and a high-end laptop where everything just works because someone else made the hard choices for you.
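The cleanup difference described here can be shown with a toy model. The data structures below are purely illustrative, not either provider's actual API: one style puts resources inside a container you can delete wholesale, the other leaves a flat pool you slice up with tags you have to remember to apply.

```python
# GCP-style: resources live inside a project container.
projects = {"demo-project": ["vm-1", "bucket-a", "sql-db"]}

# AWS-style: one flat pool, grouped only by tags.
flat_pool = [
    ("vm-1", {"team": "demo"}),
    ("bucket-a", {"team": "demo"}),
    ("vm-9", {}),  # someone forgot to tag this one
]

def teardown_project(name):
    """One delete removes everything the project contains."""
    return projects.pop(name, [])

def find_team_resources(pool, team):
    """Tag-based cleanup silently misses anything left untagged."""
    return [r for r, tags in pool if tags.get("team") == team]

removed = teardown_project("demo-project")
orphan_hunt = find_team_resources(flat_pool, "demo")  # never sees vm-9
```

The failure mode the hosts are gesturing at is the untagged resource: in the flat model it simply falls through every query and keeps billing.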
The developer experience versus the enterprise experience is the great divide of twenty twenty-six. GCP appeals to the DevOps-heavy teams who want a Kubernetes-native environment. Google Kubernetes Engine, or GKE, is still widely considered the gold standard for managed containers because Google literally invented the underlying technology. But then you look at the market share. AWS is still sitting at thirty-two percent, Azure is at twenty-five, and GCP is back at fourteen. If the tech is so much cleaner on the Google side, why the gap? It comes back to that legacy gravity and the "Enterprise Experience." Azure isn't trying to win on the beauty of its CLI. It’s winning because it integrates with the tools the IT department already uses. If you use Microsoft Teams and Office three sixty-five, Azure feels like just another tab in your existing workflow.
Because business is not a coding competition, Herman. It is a relationship competition. Microsoft has been in the boardroom for forty years. They are the default. And AWS is the infrastructure default. There is a saying that nobody ever got fired for buying IBM, and for the last decade, that has shifted to nobody ever got fired for choosing AWS. It is the safe, boring choice that everyone knows how to use, or at least everyone can find a consultant who knows how to use it. If you choose GCP and something goes wrong, your boss might ask why you didn't just use the industry leader. If you choose AWS and it goes down, you just point at the news and say, "Hey, half the internet is down, it’s not my fault."
There is also the support gap to consider, which Daniel touched on in his prompt. This is where the reality of the cloud hits the small and medium-sized businesses. If you are a massive enterprise spending ten million dollars a month, you have a dedicated account manager and a team of solutions architects on speed dial. You are royalty. But if you are a smaller shop, you are stuck in what I call the support vacuum. You are paying for the same underlying hardware, but you are essentially on your own when the lights go out.
Right, because basic billing support is free, but the moment you need someone to tell you why your database is screaming at three in the morning, the meter starts running. AWS Business Support is priced at the greater of a hundred dollars a month or a tiered percentage of your usage, usually between three and ten percent depending on how much you spend. For an SMB, that is a lot of money for a service that doesn't even guarantee a resolution time. You are paying for the privilege of opening a ticket that might not be answered for twelve hours.
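The percentage-of-usage pricing works roughly like a tax bracket. A sketch using the commonly published Business Support tiers (verify the boundaries and rates against AWS's current pricing page before relying on them):

```python
def business_support_cost(monthly_usage):
    """Greater of a flat floor or a tiered percentage of monthly spend.
    Tier caps and rates below reflect commonly published figures and
    may be out of date."""
    tiers = [(10_000, 0.10), (80_000, 0.07), (250_000, 0.05), (float("inf"), 0.03)]
    cost, prev_cap = 0.0, 0
    for cap, rate in tiers:
        if monthly_usage > prev_cap:
            cost += (min(monthly_usage, cap) - prev_cap) * rate
        prev_cap = cap
    return max(100.0, cost)

# An SMB spending $5,000 a month pays about $500 for support,
# on top of the compute bill itself.
```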
The March twenty twenty-six reports on cloud satisfaction show that lower-tier users are increasingly feeling abandoned. You can pay that hundred dollars, but you are still just a ticket in a queue. There is no human relationship. During a major regional outage, those users are essentially flying blind. They are paying for the cloud to avoid the headache of managing hardware, but they end up managing the anxiety of not knowing when their provider will fix a problem. It’s a trade-off that is becoming harder to justify as prices rise.
It is a weird paradox. We moved to the cloud for the promise of infinite scalability and reliability, but we traded our control for a black box. And now, in twenty twenty-six, that black box is getting more expensive because of the hardware super-cycle. I was reading that server DRAM prices have surged by ninety-five percent just in the last few months. That has to be hitting the providers hard, and you know they aren't going to just eat those costs.
It is causing massive sticker shock. These cloud giants have to refresh their hardware constantly to keep up with AI demands. When the price of memory nearly doubles, they can’t just absorb that cost forever. We are seeing it reflected in the newer instance types and the aggressive push toward long-term commitments. And it is not just the components; it is the physical space. The CBRE report from two days ago, March twenty-fifth, noted that North American data center vacancy rates have hit a historic low of one point four percent.
One point four percent? That is basically zero. That means there is nowhere left to put the racks. If you want to expand your footprint in Northern Virginia or Silicon Valley, you are basically waiting for someone else to go out of business.
The power grid is the real bottleneck. You can build a shell of a building in six months, but getting the local utility to drop a hundred megawatts of power to that site can take years. This scarcity is driving up the price of colocation and cloud compute alike. It is forcing a lot of companies to rethink their architecture. We are moving away from the era of "just throw it in the cloud and let it scale" toward a much more disciplined, resource-aware approach to engineering. People are actually having to optimize their code again because the "infinite" resources of the cloud are starting to feel very finite.
It makes me wonder if we are going to see a move back to the edge or even private data centers for certain workloads. If the cloud is full and the prices are spiking, maybe that old server closet doesn't look so bad after all. But before we get ahead of ourselves, we should talk about the other players, because it is not just a three-horse race. Daniel mentioned IBM and Alibaba Cloud, and they are carving out very specific, very lucrative niches that the big three often overlook.
IBM is a fascinating case of reinventing yourself for the third or fourth time. They are not trying to compete with AWS on being a general-purpose utility for every startup in a garage. They have pivoted hard toward highly regulated industries like finance and healthcare. On March sixteenth, they announced an expanded partnership with NVIDIA to bring Blackwell Ultra GPUs to their cloud. They are positioning themselves as the go-to for large-scale AI reasoning in environments where security and compliance are the only things that matter. If you are a global bank, you don't care about a pretty UI as much as you care about a provider who understands the regulatory nightmare you live in and can guarantee where your data is at every second.
It is a smart play. They are leaning into their legacy rather than running from it. And then there is Alibaba Cloud. They are a powerhouse in the Asia-Pacific region, with thirty-six percent growth reported this month. But they are facing a completely different set of challenges because of trade restrictions and geopolitical tensions. They can't just buy the latest chips from the West whenever they want.
The export restrictions on high-end chips have forced Alibaba to become incredibly self-reliant. Just two days ago, on March twenty-fifth, they debuted their proprietary five-nanometer C950 AI chip. This is their answer to not being able to buy the top-tier hardware from NVIDIA or AMD. They are building their own silicon, their own networking stacks, and their own software layers to ensure they aren't vulnerable to geopolitical shifts. It is a level of vertical integration that even the big three in the United States are still striving for. It turns the cloud into a sovereign asset.
It is impressive that they can pull that off while under such heavy restrictions. It shows that the cloud isn't just a technical layer anymore; it is a matter of national security. Every major power wants to make sure their digital infrastructure doesn't depend on someone else’s permission. But coming back to the individual developer, like Daniel or anyone listening who is trying to build something today, how do you navigate this? If you are starting a project in twenty twenty-six, do you just follow the legacy gravity, or do you fight it?
You have to audit your legacy gravity before you sign the contract. A lot of teams default to a provider because of a five-thousand-dollar credit or a familiar interface, only to realize two years later that they are locked into an ecosystem that doesn't fit their long-term needs. You need to look at three things: your data, your team, and your exit strategy. If you are building something data-heavy or AI-centric, the egress fees—the cost of moving data out of the cloud—can trap you. You need to look at where your data lives and where it needs to go.
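The egress math is simple but brutal at scale, which is exactly how it becomes a trap. A back-of-the-envelope sketch, where the per-gigabyte rate and free tier are illustrative placeholders only; real pricing varies by provider, region, and destination:

```python
def egress_cost(gb_out, rate_per_gb=0.09, free_tier_gb=100):
    """Monthly cost of moving data out of a cloud. The rate and
    free tier here are assumptions for illustration only."""
    billable = max(0, gb_out - free_tier_gb)
    return billable * rate_per_gb

# Moving a 50 TB training dataset out just once:
migration = egress_cost(50_000)  # several thousand dollars at the assumed rate
```

Running this kind of estimate before signing, with the target provider's actual rate sheet, is the practical version of the exit-strategy audit described above.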
And don't ignore the support vacuum. If you are an SMB, you might be better off looking at a managed service provider that sits on top of the big clouds. They can give you that human touch and guaranteed resolution time that you aren't going to get from a trillion-dollar company’s automated ticketing system. You are basically paying a premium for a human shield. Someone who will stay on the phone with the cloud provider so you don't have to.
It is also worth monitoring hardware availability as a proxy for cost. If you see vacancy rates hitting one percent, you know that reserved instances and long-term commitments are going to become more valuable. The days of spot instances being dirt cheap and always available are fading as the demand for compute outstrips the ability of the power grid to support it. You have to be an economist as much as an engineer these days.
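The economist-as-engineer point reduces to a utilization break-even: a commitment pays off only if the machine actually runs enough of the time. A sketch with hypothetical prices, not any provider's published rates:

```python
def reserved_breakeven_utilization(on_demand_hourly, reserved_annual):
    """Fraction of the year an instance must actually run for a
    one-year commitment to beat pay-as-you-go. Both prices are
    hypothetical inputs for illustration."""
    hours_per_year = 8_760
    return reserved_annual / (on_demand_hourly * hours_per_year)

# At $0.10/hour on demand versus a $438/year commitment, the
# reservation wins once the box runs more than half the time.
threshold = reserved_breakeven_utilization(0.10, 438)
```

When scarcity pushes commitment discounts deeper, that threshold drops, which is why tightening vacancy rates make reservations more attractive even for bursty workloads.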
It is wild to think that the limiting factor for the most advanced AI in the world is basically a transformer on a telephone pole that hasn't been upgraded since the nineteen seventies. We are building the future on top of a very old, very tired physical backbone. We actually talked about this in episode twelve forty-two, looking at the physical constraints of the internet, and it is only getting more relevant as we push the limits of what these data centers can handle.
The physical reality always wins in the end. You can have the most elegant code in the world, but if there is no power to run the chip, the code doesn't exist. That is why Amazon is spending two hundred billion dollars on infrastructure. They are trying to buy their way out of the bottleneck by literally building their own power solutions. We covered their specific strategy in episode sixteen zero eight, and it is clear they are playing a different game than everyone else. They aren't just a cloud company; they are becoming a utility company.
They are playing the "I own the power plant" game. It is a bold move, and honestly, it might be the only move left. But for the rest of us, the takeaway is clear: the cloud is not a commodity. It is a set of distinct philosophies with very real physical and economic constraints. Whether you prefer the builder’s chaos of AWS, the enterprise comfort of Azure, or the developer-first opinion of GCP, you are making a choice that will dictate your engineering culture for years to come.
And your budget. Do not forget the budget. When DRAM prices go up ninety-five percent, everyone feels it. You have to be more efficient. You have to be more intentional. The era of "wasteful cloud" is over.
I feel it every time I look at my credit card statement, Herman. But that is a conversation for another day. I think we have given people plenty to chew on regarding the state of the cloud in twenty twenty-six. It is a complex, expensive, and fascinating world.
It is. I am curious to see if the power grid constraints force a move toward more sovereign or edge-based clouds by this time next year. There is only so much density you can put in Northern Virginia before the whole place starts to glow.
Well, until we start building data centers on the moon, we are stuck with what we have on the ground. Thanks as always to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes and making sure our own little cloud of a podcast stays online.
And a big thanks to Modal for providing the GPU credits that power this show. They are doing some great work in making high-performance compute more accessible, which is exactly the kind of alternative we have been talking about today.
This has been My Weird Prompts. If you are finding these deep dives helpful, search for My Weird Prompts on Telegram to get notified the second a new episode drops. It is the best way to stay ahead of the curve and avoid the support vacuum.
Stay curious, and keep building. Even if you have to do it in a service soup.
Just maybe build it on GCP so Daniel doesn't get a headache looking at your console. See ya.
Goodbye.
One hundred and ten billion dollars. I still can’t get over that. I could buy a lot of leaves for that much money.
You could buy all the leaves, Corn. Every single one of them. You could have a global leaf monopoly.
Now that is a cloud strategy I can get behind. Vertical integration of the forest floor.
We are done, Corn. The transmission is ending.
Right. Moving to the edge now.
Literally.
Literally.
Goodbye everyone.
Bye.
Stop talking, Corn.
You first.
I already did.
Okay, fine. Now.
Now.
Seriously though, eighty percent discount? That’s just unfair to the competition.
It is the power of the ecosystem.
Wild.
It is.
Okay, now I am actually going.
Good.
Bye.
Bye.
Wait, one more thing... do you think the C950 chip is actually faster than the Blackwell?
That is a debate for episode seventeen hundred.
Fine.
Truly, goodbye.
Goodbye.
The end.
Fin.
Stop.
Done.
End of transmission.
Over and out.
Corn.
Herman.
Please.
Okay, okay. We are out.
Finally.
See ya.
See ya.
One point four percent vacancy. It’s like a crowded elevator in there.
Corn!
Sorry! Going now!
Goodbye.
Goodbye.
Are you still there?
No.
Good.
Me neither.
This is the longest sign-off in history. We are wasting compute.
We are setting a record for the most expensive silence in podcasting.
Let's not.
Okay. Bye.
Bye.
For real this time.
I hope so.
Me too.
Okay.
Okay.
I'm hanging up.
I'm already gone.
Clearly.
Bye.
Bye.
Okay, I'm actually stopping now.
Good.
Love you, bro.
Love you too. Bye.
Bye.