Hey everyone, welcome back to My Weird Prompts! I am Corn, and I am sitting here in our living room in Jerusalem with my brother.
Herman Poppleberry, at your service. It is a beautiful day outside, but I am much more excited to be inside talking about the absolute insanity of modern engineering.
It really is wild. Our housemate Daniel sent us a voice prompt earlier today while he was feeding little Ezra. He was looking at a tiny micro S D card and it got him thinking. You know, it is one of those things we just take for granted now, but when you actually stop to look at it, it feels like alien technology.
It really does. Daniel mentioned he had a thirty-two gigabyte card for a voice recorder, which he got for about eight dollars. But he is also seeing those two terabyte and even the brand new four terabyte cards hitting the market. And his question is basically: where does this end? Are we going to see a petabyte or an exabyte in that same fingernail-sized form factor? Or are we about to hit a hard physical wall?
I love this topic because it forces us to look at the scale of things. I was doing some quick mental math while listening to Daniel's prompt. If you go back to the classic three point five inch floppy disk, those held one point forty-four megabytes. To get to a single two terabyte card, you would need about one million three hundred and eighty-eight thousand floppy disks.
Just imagine the sheer volume of that. You could fill a small stadium with that many floppy disks, and yet, here we are in February of twenty twenty-six, sliding that exact amount of data into a slot on a phone or a camera with a satisfying little click. It is a triumph of density.
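For anyone who wants to check Corn's mental math from the transcript, here is the quick version in Python, using decimal megabytes and terabytes the way storage marketing does:

```python
# How many 1.44 MB floppy disks equal one 2 TB microSD card?
# Using decimal units (1 MB = 10**6 bytes, 1 TB = 10**12 bytes),
# as storage marketing does.
floppy_bytes = 1.44 * 10**6
card_bytes = 2 * 10**12

floppies_needed = card_bytes / floppy_bytes
print(f"{floppies_needed:,.0f} floppies")  # ~1,388,889
```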
So, Herman, let us start with the current state of the art. We have recently seen the first four terabyte micro S D cards being demoed, and two terabyte cards are finally becoming affordable. How are they doing that? Because for a long time, we were stuck at one terabyte. What changed to allow that jump?
It is all about the architecture of the flash memory itself. We are primarily talking about N A N D flash. For the longest time, we built memory like a sprawling suburban parking lot. Everything was flat, one layer of cells on a silicon wafer. If you wanted more storage, you needed a bigger parking lot. But the micro S D form factor is fixed. You cannot just make the card bigger, or it won't fit in the slot.
Right, the physical dimensions are non-negotiable. Fifteen millimeters by eleven millimeters.
Exactly. So, since we couldn't go out, we went up. This is what we call three D N A N D, or vertical N A N D. Instead of a parking lot, we started building skyscrapers. Engineers figured out how to stack layers of memory cells on top of each other. A few years ago, we were impressed by one hundred and seventy-six layers. Now, as we sit here in twenty twenty-six, industry leaders like Samsung and S K Hynix are pushing past four hundred layers in a single stack.
Four hundred layers? That is incredible. So you are basically just stacking these microscopic layers of transistors and charge traps. But even with skyscrapers, there has to be a catch. Do you eventually run out of vertical space, since the card can only be one millimeter thick, or does it just get too hard to manufacture that many layers reliably?
Both, actually. But there is another trick they use called cell density. In the early days, we used single-level cells, or S L C, where each cell held one bit of data. It was either a zero or a one. Then we moved to multi-level cells, then triple-level cells, and now we have quad-level cells, or Q L C. This means a single physical cell can store four bits of information by distinguishing between sixteen different voltage levels within that tiny space.
Wait, sixteen different voltage levels in a cell that is only a few nanometers wide? That sounds like a recipe for errors. How do you keep the electrons from just leaking out or jumping between levels?
That is precisely the engineering constraint Daniel was asking about. As you increase the number of bits per cell, the margin for error shrinks. You have to be incredibly precise with how you read and write those voltages. And as the cells get smaller and closer together, you deal with electron leakage and quantum tunneling. Electrons are so small that at these scales, they don't always stay where you put them. They can tunnel right through the barriers that are supposed to hold them in.
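To make the shrinking-margin point concrete, here is a rough sketch. The six-volt threshold window is an illustrative assumption on our part, not a figure from any datasheet; the point is how the room per level collapses as bits per cell go up:

```python
# Each extra bit per cell doubles the number of voltage levels the
# controller must distinguish inside the same fixed threshold window.
WINDOW_VOLTS = 6.0  # illustrative threshold-voltage window, not a real spec

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4), ("PLC", 5)]:
    levels = 2 ** bits
    margin_mv = WINDOW_VOLTS / levels * 1000
    print(f"{name}: {levels:2d} levels, ~{margin_mv:.0f} mV per level")
```

SLC gets about three volts of breathing room per level; QLC is down to a few hundred millivolts, which is why every stray electron starts to matter.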
So that is the physical limit, then? We reach a point where the barriers are so thin that the data just evaporates?
That is one of the big walls. We call it the reliability wall. To combat it, the controllers on these cards have to become incredibly smart. A huge portion of the chip is actually dedicated to error correction and wear leveling. If you just wrote data to those tiny cells without a sophisticated management system, the card would fail in weeks.
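Wear leveling is easier to picture in code than in words. This is a toy model, not how any real controller firmware works; real controllers also track bad blocks, run error correction, and migrate cold data. But the core idea, always write to the least-worn block, fits in a few lines:

```python
# Toy wear-leveling model: route each write to the erase block with
# the fewest program/erase cycles so no single block wears out early.
erase_counts = {block_id: 0 for block_id in range(8)}  # 8 blocks for the toy

def pick_block() -> int:
    """Return the least-worn block: the heart of wear leveling."""
    return min(erase_counts, key=erase_counts.get)

for _ in range(100):
    block = pick_block()
    erase_counts[block] += 1  # pretend each write costs one erase cycle

print(erase_counts)  # wear spreads evenly: every block ends near 12-13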
Okay, so if we are at two or four terabytes now, and we are using three D stacking and quad-level cells, what gets us to the next level? Daniel asked about a petabyte. A petabyte is one thousand terabytes. That is a two hundred and fifty times increase from the best cards we have today. Is there any world where we fit two hundred and fifty times more density into that fingernail?
If we stick to current silicon-based N A N D flash technology? Honestly, Corn, it looks very unlikely. To get a two hundred and fifty-fold increase, you would either need to stack one hundred thousand layers, which would make the card too thick to fit in a slot, or you would need to shrink the cells to the size of a single atom. And we know we can't do that because once you get down to the atomic scale, the physics of storage completely changes.
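Herman's hundred-thousand-layer figure is easy to sanity-check. The fifty-nanometer layer pitch below is a ballpark assumption for 3D NAND, not a quoted spec, but the conclusion survives any reasonable value:

```python
# Sanity check: scaling today's ~4 TB card to 1 PB purely by stacking.
current_layers = 400
scale_factor = 250                               # 1 PB / 4 TB
needed_layers = current_layers * scale_factor    # 100,000 layers

layer_pitch_nm = 50              # ballpark assumption for 3D NAND layer pitch
stack_mm = needed_layers * layer_pitch_nm * 1e-6  # nm -> mm
print(f"{needed_layers:,} layers ~ {stack_mm:.0f} mm of silicon stack")
# ~5 mm of stack alone, in a card that must be 1 mm thick in total.
```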
So we are approaching the end of the road for the current tech. But we have said that before, right? I remember people saying we would never hit a gigabyte on a card.
You are right to be skeptical of limits! We have been remarkably good at moving the goalposts. The S D U C standard—that stands for S D Ultra Capacity—actually allows for cards up to one hundred and twenty-eight terabytes. We have the roadmap for that. But to go beyond that to a petabyte? One way we might push further is moving to penta-level cells, or P L C. That would store five bits per cell, which requires thirty-two distinct voltage levels. It is being worked on, but the performance and longevity trade-offs are massive.
It feels like we are fighting a war of diminishing returns with silicon. If we want to reach that petabyte goal Daniel mentioned, do we have to move away from silicon entirely?
That is where it gets really interesting. There are a few experimental technologies that could potentially bridge that gap. One is called five D optical data storage. Researchers at the University of Southampton have demonstrated that they can use high-speed lasers to write data into nanostructured glass.
I have read about that. They call it the Superman memory crystal, right? Because it is basically data stored in a glass block that can last for billions of years.
Exactly. The density there is staggering. They are talking about five hundred terabytes on a small glass disc. The advantage is that you aren't relying on trapped electrons that can leak away. You are physically altering the structure of the glass at a microscopic level. It is incredibly stable. The downside, currently, is that you need a massive, expensive laser setup to write the data and a specialized microscope to read it. It isn't exactly something you can pop into your phone yet.
But isn't that how every technology starts? I mean, the first hard drives were the size of refrigerators and held about five megabytes. Now we have two terabytes on a sliver of plastic. If we can miniaturize the lasers, maybe that is the path to the petabyte micro S D.
It is possible. Another wild contender is D N A data storage. Nature is the ultimate master of high-density storage. You can store an incredible amount of information in the sequence of base pairs in a D N A molecule. Theoretically, you could store all the world's data in a few kilograms of D N A.
I have seen those experiments where they stored a movie or a whole library in D N A. But the latency must be a nightmare. You can't exactly do random access on a strand of D N A to find a specific photo from your vacation, can you?
Not easily. You have to synthesize the D N A to write the data and then sequence it to read it back. Right now, that takes hours or days. It is perfect for long-term archiving, like a digital time capsule, but it doesn't solve the problem of needing more space on your mobile device for high-resolution video or local A I models.
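To see why DNA is such a dense medium, here is the textbook toy encoding: two bits per base. Real DNA storage codecs are far more careful, adding error correction and avoiding long repeats, so treat this strictly as an illustration:

```python
# Toy DNA encoding: map each pair of bits to one of the four bases.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")
print(strand, decode(strand))  # CGGACGGC b'hi'
```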
That brings up a great point about the why. Why do we even need this much local storage? With the cloud being so ubiquitous, a lot of people might wonder why a petabyte in your pocket even matters. But I think we are seeing a shift back toward local needs, especially with generative A I.
I totally agree. If you want to run a powerful large language model or a video generation tool locally on your device for privacy or speed, those models take up a lot of space. And as we move toward eight K or even sixteen K video recording on our phones, those file sizes are going to be monstrous. A petabyte sounds like a lot today, but in ten years, it might just be the standard for a high-end workstation or a pro-level camera.
Let us talk about the second-order effects of this. If we actually succeed in hitting these densities, if Herman Poppleberry's predictions come true and we get to a petabyte on a micro S D card, what does that do to the world?
One thing that fascinates me is the concept of data gravity. Right now, we move data to the cloud because it is easier to manage and share. But if you have a petabyte in your pocket, the gravity shifts. You have your entire digital life, every photo you've ever taken, every book you've ever read, every movie you've ever watched, and a local A I that has indexed all of it, right there. You don't need a connection to the internet to be productive or entertained.
It almost feels like it could lead to a digital hoarding problem on a massive scale. If storage is effectively infinite and free, nobody ever deletes anything. But how do you find anything in a petabyte of unorganized data?
That is exactly where the A I comes in. The storage density and the processing power have to grow together. A petabyte card is useless if you have to scroll through a million folders. But if you can just ask your device, "Hey, show me that video of Corn trying to bake sourdough back in twenty twenty-one," and it can instantly pull that from a petabyte of data, that is a game-changer.
Hey, my sourdough was actually pretty good for a first attempt! But I see your point. There is also the security aspect. If I lose a thirty-two gigabyte card, it is annoying. If I lose a petabyte card that contains my entire existence, that is a catastrophic security breach. The encryption on those future cards will have to be absolute.
Absolutely. And think about the forensic implications. A tiny, easily hidden card that could hold the entire classified archives of a government or the complete technical blueprints for every product a company makes. The physical security of these tiny objects becomes a huge issue when their capacity is that high.
It is a bit of a paradox. The more we miniaturize the storage, the more weight we give to a tiny piece of plastic. It becomes more valuable and more dangerous at the same time.
There is also the environmental angle. Manufacturing these chips at such high densities requires incredible amounts of energy and very specific, sometimes rare, materials. As we push the limits of silicon, the chemical processes become more complex. We have to ask ourselves if the environmental cost of a petabyte micro S D is worth the benefit of not having to delete our old memes.
That is a fair point. But back to Daniel's question about the theoretical limit. There is actually a concept in physics called the Bekenstein bound. It is a limit on the maximum amount of information that can be contained within a given finite region of space which has a finite amount of energy.
Oh, going deep into the physics! I love it.
Well, if you look at the Bekenstein bound for something the size of a micro S D card, the theoretical limit is astronomically higher than a petabyte. We are talking about numbers that would make an exabyte look like a single bit. So, from a pure physics standpoint, the space isn't the problem. The problem is the mechanism we use to store the data without it turning into a tiny black hole or melting the device.
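For the curious, here is the back-of-the-envelope Bekenstein calculation. The card mass and effective radius are rough assumptions on our part, but the answer is so enormous that the exact values barely matter:

```python
import math

# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2) bits,
# for a system of radius R and total energy E (here E = m*c^2).
hbar = 1.0546e-34   # reduced Planck constant, J*s
c = 2.998e8         # speed of light, m/s

radius_m = 0.009    # ~9 mm, roughly half the card's diagonal (assumption)
mass_kg = 0.25e-3   # ~0.25 g, a plausible microSD mass (assumption)

energy_j = mass_kg * c**2
max_bits = 2 * math.pi * radius_m * energy_j / (hbar * c * math.log(2))

petabyte_bits = 8e15
print(f"Bekenstein bound: ~{max_bits:.0e} bits")             # ~6e37 bits
print(f"That is ~{max_bits / petabyte_bits:.0e} petabytes")  # ~7e21 PB
```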
Right! Because information is linked to entropy and energy. If you try to pack too much information into a small space and you are constantly flipping bits, you generate heat. This is actually one of the biggest constraints right now. If you have a two terabyte card and you are writing to it at maximum speed, it gets hot. If you had a petabyte card using current technology, the heat generated by the resistance in those tiny circuits would probably melt the solder.
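The fundamental floor on that heat is the Landauer limit, roughly kT times the natural log of two per bit erased. Here is the arithmetic; the takeaway is that real flash spends many orders of magnitude more energy per bit than this floor, which is why the heat problem is an engineering problem rather than a law of physics:

```python
import math

# Landauer limit: minimum energy to erase one bit is k*T*ln(2).
k_boltzmann = 1.380649e-23  # Boltzmann constant, J/K
temp_k = 300                # roughly room temperature

energy_per_bit = k_boltzmann * temp_k * math.log(2)  # ~2.9e-21 J

# Hypothetical workload: rewriting a full petabyte (8e15 bits) every second.
write_rate_bits = 8e15
min_power_watts = energy_per_bit * write_rate_bits
print(f"{energy_per_bit:.1e} J/bit -> {min_power_watts:.1e} W minimum")
# ~2e-5 W at the theoretical floor; real flash sits far above it.
```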
So the engineering response to the limit isn't just "make it smaller," it is "make it more efficient" and "find new ways to manage heat." Maybe we will see cards with integrated graphene heat spreaders or even active cooling on the device side just for the storage slot.
Or we move to non-electronic storage, like the optical or D N A methods we talked about. If you are using light to read data, you don't have the same heat issues you get with moving electrons through silicon.
It is funny to think that the future of high-tech storage might be going back to physical structures, just at a scale we can't see. Like a microscopic version of a record player or a punch card, but using photons or molecules.
It really is a full circle. We went from physical grooves in a record, to magnetic domains on a tape, to trapped electrons in silicon, and now we are looking at physical structures in glass or D N A.
Let us talk about some practical takeaways for people listening right now. We are seeing these two terabyte cards. They are finally coming down in price. A two terabyte micro S D card right now in early twenty twenty-six costs around one hundred and twenty dollars, while a one terabyte card is the sweet spot at about sixty dollars. Is it worth the jump?
For most people? Probably not yet. At those prices the cost per gigabyte works out about the same, roughly six cents either way, so you are paying for headroom rather than getting a discount. Unless you are a professional videographer shooting eight K video on a drone or a high-end camera, the one terabyte card is plenty. But, as with all tech, that will change. In two years, the four terabyte card will be the standard.
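Using the prices quoted a moment ago, the cost-per-gigabyte math looks like this:

```python
# Cost per gigabyte at the early-2026 prices quoted above.
cards = {"1 TB": (60, 1000), "2 TB": (120, 2000)}  # price in USD, capacity in GB

for name, (price, gigabytes) in cards.items():
    print(f"{name}: ${price / gigabytes:.3f} per GB")
# Both work out to $0.060 per GB, so the 2 TB card buys capacity,
# not a better unit price.
```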
And a big warning for everyone: as capacities go up, the risk of fake cards goes up too. There are so many scams on the internet where people sell "four terabyte" cards for thirty dollars, and they are actually just sixteen gigabyte cards with hacked firmware. When you try to write more than sixteen gigabytes, it just starts overwriting your old data.
That is such a common trap. Always buy from reputable brands and verified sellers. If the price looks too good to be true, it almost certainly is. You can use tools like H two test W or the command line tool F three to verify the actual capacity of a card after you buy it.
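If you want the gist of what H2testw and f3 do without installing anything, here is a minimal sketch in Python. It is a simplified, much slower illustration of the same idea, not a replacement for those tools, and the mount path is a placeholder for wherever your card shows up:

```python
import hashlib
import os
import random

# Minimal fake-capacity check, in the spirit of H2testw / f3: fill the
# card with files of known pseudo-random content, then read everything
# back. On a fake card, later files silently overwrite earlier ones,
# so the read-back hashes stop matching.
MOUNT = "/path/to/card"   # placeholder: wherever the card is mounted
CHUNK = 1024 * 1024       # read/write in 1 MiB chunks
FILE_MB = 100             # 100 MiB per test file

def fill_and_verify(num_files: int) -> None:
    hashes = []
    for i in range(num_files):                      # write phase
        rng, h = random.Random(i), hashlib.sha256()
        with open(os.path.join(MOUNT, f"test_{i:04d}.bin"), "wb") as f:
            for _ in range(FILE_MB):
                chunk = rng.randbytes(CHUNK)
                h.update(chunk)
                f.write(chunk)
        hashes.append(h.hexdigest())
    for i, expected in enumerate(hashes):           # verify phase
        h = hashlib.sha256()
        with open(os.path.join(MOUNT, f"test_{i:04d}.bin"), "rb") as f:
            while chunk := f.read(CHUNK):
                h.update(chunk)
        status = "OK" if h.hexdigest() == expected else "CORRUPT (fake card?)"
        print(f"test_{i:04d}.bin: {status}")
```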
Good tip. So, to wrap up Daniel's question, we are likely to see eight, sixteen, and maybe even thirty-two terabytes in the micro S D form factor over the next decade as three D N A N D stacking hits its peak. But to get to a petabyte? We are probably looking at a fundamental shift in technology, moving away from standard flash memory toward something like optical glass.
Exactly. We are nearing the end of the easy gains in silicon. The next decade of storage is going to be about exotic materials and very clever physics. It is a great time to be a nerd.
It really is. And Daniel, thanks for the prompt. It gave us a great excuse to dive into the weeds. If you are listening and you have a weird thought while you are going about your day, like Daniel did while feeding Ezra, send it our way! You can find the contact form at myweirdprompts.com.
And if you have been enjoying the show, we would really appreciate it if you could leave us a quick review on your podcast app or on Spotify. It genuinely helps other people find the show and keeps us motivated to keep digging into these topics.
Yeah, it makes a huge difference. You can also find all our past episodes and the R S S feed at myweirdprompts.com. We have covered everything from microelectronics to the history of weird inventions, so there is plenty to explore there.
This has been My Weird Prompts. I am Herman Poppleberry.
And I am Corn. Thanks for listening, and we will talk to you in the next one.
Goodbye everyone!