#1061: Living Computers: When Brain Cells Play Pong

Can a petri dish play Pong? Discover how "wetware" is using living brain cells to redefine the future of energy-efficient computing.

Episode Details
Published:
Duration: 28:39
Pipeline: V5
TTS Engine: chatterbox-regular
LLM:
AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The frontier of modern computing is no longer just about shrinking transistors or increasing clock speeds on silicon chips. An emerging field known as "wetware" uses actual biological neurons as hardware. This shift represents a fundamental change in how we approach intelligence, moving away from digital simulations of the brain and toward using the brain's own cells to perform computational tasks.

The Mechanics of Biological Interfaces

To bridge the gap between digital systems and biological cells, researchers use a microelectrode array (MEA): a plate covered in thousands of microscopic gold-plated electrodes onto which neurons are grown directly. The device acts as a two-way translation layer. The computer stimulates the neurons with electrical pulses that mimic sensory input, and records the electrical spikes the neurons produce in response. A cluster of cells can thus interact with a digital environment, such as a game of Pong, in real time.
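The closed loop described above can be sketched in a few lines. This is an illustrative assumption, not Cortical Labs' actual protocol: the electrode layout, the place-coding scheme for the ball, and the two-region rate decoder for the paddle are all hypothetical simplifications of how a game state might cross the MEA boundary in each direction.

```python
# Hypothetical sketch of the MEA translation layer: game state in via
# stimulation patterns, motor commands out via spike-count comparison.

def encode_ball_position(ball_y, n_sensory=8):
    """Map the ball's vertical position (0.0 to 1.0) onto a one-hot
    pulse pattern across the sensory electrodes (place coding)."""
    idx = min(int(ball_y * n_sensory), n_sensory - 1)
    return [1 if i == idx else 0 for i in range(n_sensory)]

def decode_paddle_command(spikes_up, spikes_down):
    """Compare firing in two 'motor' electrode regions: +1 moves the
    paddle up, -1 moves it down, 0 holds it still."""
    if spikes_up > spikes_down:
        return 1
    if spikes_down > spikes_up:
        return -1
    return 0
```

In a real system, each iteration of the game loop would stimulate the dish with the encoded ball position, wait a few milliseconds, count spikes on the motor regions, and apply the decoded command to the paddle.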

The Drive for Order

One of the most profound aspects of wetware is how these cells are "programmed." Unlike traditional AI, which relies on mathematical reward functions and massive datasets, biological systems are driven by the Free Energy Principle. This theory suggests that all biological systems instinctively seek to minimize surprise and uncertainty.

In a controlled environment, neurons are given structured, predictable feedback when they perform a task correctly. When they fail, they are met with chaotic, random noise. To avoid this "unpleasant" entropy, the neurons physically reorganize themselves, growing new synapses and strengthening connections to achieve a state of predictable equilibrium. They are not following code; they are self-organizing to maintain order.
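The feedback scheme above reduces to a simple asymmetry: order on success, entropy on failure. The sketch below assumes "predictable" feedback is a regular pulse train and "unpredictable" feedback is uniform random voltages; the actual waveforms used in the DishBrain experiments are not specified here.

```python
import random

def feedback_stimulus(hit, n_samples=10, seed=None):
    """Return the feedback signal sent back to the dish after a rally.

    hit=True  -> structured, low-surprise signal (regular pulse train)
    hit=False -> chaotic, high-surprise signal (unstructured noise)
    """
    if hit:
        # Predictable alternating pulses: the "familiar song".
        return [1.0 if i % 2 == 0 else 0.0 for i in range(n_samples)]
    # Random voltages: high-entropy input the cells reorganize to avoid.
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]
```

The learning signal is not a numeric reward at all; it is the contrast between these two stimulus classes, which the cells resolve by rewiring toward the predictable one.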

Efficiency and Learning Speed

The advantages of biological computing are most evident in energy consumption and learning speed. While a modern AI training cluster can consume megawatts of power, the human brain operates on roughly 20 watts. Furthermore, biological neurons can learn new tasks, like navigating a game, in a fraction of the time required by silicon-based reinforcement learning. Because the hardware itself is fluid and capable of rewiring in real time, it finds patterns far more efficiently than the "guess and check" method of digital algorithms.

The Maintenance Trade-off

Despite the staggering efficiency, the transition to wetware faces significant hurdles. Silicon is durable and low-maintenance, whereas biological tissue is incredibly fragile. These living computers require a constant supply of nutrients, precise temperature control at 37 degrees Celsius, and protection from infection. Shifting to biological hardware means trading a high electricity bill for a high-intensity life support system.

We are currently in the "vacuum tube" era of wetware. While we may not see biological brains in our smartphones anytime soon, the potential for low-power, insect-level intelligence in specialized sensors and autonomous agents is a looming reality. The mystery of the "black box" remains, but as we move toward biological integration, the line between the machine and the organism continues to blur.


Episode #1061: Living Computers: When Brain Cells Play Pong

Daniel's Prompt
Custom topic: Explore the cutting-edge field of biological computing and the rise of wetware. Focus on Cortical Labs and their work growing lab-cultured neurons on microelectrode arrays that can perform computation
Corn
Hey Herman, did you see that clip our housemate Daniel sent over this morning? It was a short video of what looked like a petri dish, but it was connected to a computer screen showing a game of Pong. At first, I thought it was just some sort of digital simulation of cellular growth, but then I realized the cells were actually the ones moving the paddle. It was one of the most unsettling and fascinating things I have seen in a long time.
Herman
Herman Poppleberry here, and yeah, I saw it. That is the work of Cortical Labs, an Australian startup that is basically rewriting the rules of what we consider a computer. It is absolutely fascinating stuff, Corn. We have spent so much time on this show talking about how we are trying to make silicon chips act more like human brains, but what Daniel sent us is the inverse. They are literally using biological brain cells as the hardware. They call it wetware, and it is a complete shift in how we think about computation. It is not just a simulation of a brain; it is a piece of a brain doing the work.
Corn
Wetware. It sounds like something out of a science fiction novel from the nineteen eighties, like something William Gibson would have come up with. But it is real, right? I mean, these are living neurons, cultured in a lab, actually performing logic tasks. It makes you wonder why we spent the last seventy years perfecting the transistor if the ultimate computer was already sitting inside our skulls the whole time. Are we just admitting that nature did it better?
Herman
In many ways, yes. We have gotten incredibly good at making silicon do what we want, but we are starting to hit some very real physical walls. If you remember back in episode five hundred sixty three, when we talked about the billion transistor city and how chips are made, we discussed the extreme difficulty of shrinking transistors any further. We are fighting against heat, electron leakage, and the massive energy costs of running these giant data centers. We are basically trying to brute force intelligence with electricity and cooling fans. Biology, on the other hand, solved those problems billions of years ago. Your brain runs on about twenty watts of power. That is less than a dim light bulb. Meanwhile, a high end artificial intelligence training cluster in twenty twenty six can consume megawatts. We are talking about the power consumption of a small city just to train a model that still struggles with basic common sense.
Corn
That energy gap is staggering. Twenty watts versus millions of watts. It is like comparing a bicycle to a space shuttle in terms of fuel consumption, but the bicycle is somehow faster at certain types of thinking. So, the prompt Daniel sent us today is really asking us to look at this bridge between biology and technology. We are looking at Cortical Labs and their DishBrain project. I want to really dig into how this works on a mechanical level. How do you actually talk to a neuron? It is not like you can just plug a USB cable into a cell membrane. How do you get a digital signal into a biological cell?
Herman
No, you definitely cannot just plug it in. The interface is something called a microelectrode array, or an MEA. Imagine a small glass or silicon plate covered in thousands of tiny, gold plated electrodes. These electrodes are incredibly small, spaced just micrometers apart. They grow the neurons directly on top of this array. These are usually mouse neurons or even human induced pluripotent stem cells that have been coaxed into becoming neurons. As they grow, they naturally start to form connections with each other, creating a literal neural network. Because they are sitting right on those electrodes, the computer can do two things. It can stimulate the neurons by sending a tiny electrical pulse, which mimics the way neurons naturally communicate, and it can record the electrical spikes that the neurons produce when they fire. It is essentially a translation layer between the language of electrons in a wire and the language of ions moving across a cell membrane.
Corn
Okay, so it is a two way street. The computer sends a signal in, which the neurons perceive as some kind of sensory input, and the neurons send a signal back out, which the computer interprets as data or a command. But how do the neurons know what they are doing? In the Pong example, how does a cluster of cells in a dish understand that it needs to move a digital paddle to hit a digital ball? There is no central nervous system there, no eyes, no muscles, no reward center like a stomach that wants food. It is just a flat layer of cells. How do you motivate a petri dish?
Herman
This is where it gets really deep, and it involves a concept called the Free Energy Principle. This was popularized by Karl Friston, who is actually the chief scientist at Cortical Labs and one of the most cited neuroscientists in history. The idea is that all biological systems, from a single cell to a human being, are driven by a fundamental need to minimize surprise or uncertainty in their environment. Biology hates chaos. It wants a predictable world because predictability equals survival. If you can predict where the food is or where the predator is, you live. If you are constantly surprised, you die.
Corn
So, the neurons are basically trying to make their world make sense? They are seeking a state of equilibrium?
Herman
When Cortical Labs sets up the Pong game, they translate the position of the ball into electrical pulses sent to specific areas of the neuron cluster. If the paddle hits the ball, the neurons get a nice, predictable, structured feedback signal. It is like a steady rhythm or a familiar song. But if the paddle misses the ball, the system sends back a burst of random, chaotic noise. The neurons find that noise extremely unpleasant, in a biological sense. It represents high entropy or high surprise. It disrupts their internal state. So, the neurons actually reorganize their own internal connections, they physically grow new synapses and strengthen others, to figure out how to move the paddle so they can get back to that predictable, structured signal. They are not being programmed with code. They are self organizing to minimize the chaos of their environment. It is reinforcement learning, but instead of a mathematical reward function, it is a biological drive for order.
Corn
That is wild. It is almost like the neurons have a tiny, primitive form of agency. They are making a choice, or at least following a biological drive, to change their behavior based on the feedback. And they did this faster than traditional artificial intelligence, right? I read somewhere that the DishBrain learned to play Pong in about five minutes, whereas a standard reinforcement learning algorithm might take closer to ninety minutes to get the hang of it. Why is there such a massive difference in learning speed?
Herman
That is correct. Silicon based artificial intelligence is great at crunching massive amounts of data, but it is actually quite slow at learning new concepts from scratch compared to biology. This is because biology is inherently designed for real time adaptation. A digital artificial intelligence has to run through thousands or millions of iterations of a simulation to figure out the rules because it is essentially guessing and checking. These neurons, because they are physically changing their structure as they learn, are much more efficient at finding the pattern. They are not just simulating a network; they are the network. When a neuron in the dish fires and it leads to a predictable result, that connection is physically reinforced. It is hardware that rewires itself in real time. In a silicon chip, the wiring is fixed at the factory. You can change the software, but the physical pathways are set in stone. In wetware, the software and the hardware are the same thing.
Corn
I want to go back to the energy efficiency point you made earlier, because I think that is the strongest argument for why this isn't just a cool science experiment. We are living in an era where the demand for computation is exploding. Every company wants their own large language model, every device wants to be an AI agent. In episode four hundred seventy seven, we talked about the rise of mobile AI agents and how hard it is to run these models on a battery. If we could use wetware for some of these tasks, are we looking at a future where our devices have little biological pods in them to save power? Is my next phone going to have a brain in it?
Herman
It is a compelling vision, but we have to be realistic about the trade offs. Silicon is incredibly durable. You can leave your phone in a hot car, you can drop it, you can turn it off for a month and turn it back on, and the chips will still work. Biological tissue is the opposite. It is incredibly fragile. Those neurons in the Cortical Labs dish have to be kept in a very specific environment. They need a constant supply of nutrient rich fluid, which is basically their version of electricity. They need to be kept at exactly thirty seven degrees Celsius. They are susceptible to infections, and they eventually die. So, while the computation itself is energy efficient, the life support system required to keep the hardware alive is actually quite intensive. You are trading a high electricity bill for a high maintenance bill.
Corn
Right, so you are trading a power cord for a feeding tube. You have to feed your computer and keep it from getting sick. It reminds me of the old days of computing where you had these massive rooms full of vacuum tubes that were constantly burning out, except now the vacuum tubes are living cells that need glucose and oxygen. It seems like the logistics of scaling this up to a consumer level would be a nightmare. Can you imagine the tech support calls? My computer has a fungal infection.
Herman
That is a great analogy. We are essentially at the vacuum tube stage of wetware. But the potential is there for specific applications. Think about the complexity. The DishBrain experiment used about eight hundred thousand to one million neurons. To put that in perspective, that is roughly the number of neurons in the brain of a honeybee or a cockroach. Now, a cockroach is not going to write a symphony or solve complex calculus, but it can navigate a complex three dimensional environment, find food, avoid predators, and reproduce, all while consuming almost zero power. If we can harness that level of processing power for specific tasks like pattern recognition, sensory processing, or real time navigation, it would be a game changer. We are not trying to build a human brain in a box yet; we are trying to build the computational equivalent of an insect, which is still vastly more efficient than our best robots.
Corn
It also makes me think about the black box problem we discussed in episode nine hundred seventy four. With digital AI, as the models get bigger, we have a harder and harder time understanding why they make certain decisions. We call it emergent logic. We can see the weights and the biases, but we don't really know the why. With biological wetware, it is the ultimate black box. We are literally using a brain, and we still do not fully understand how the human brain works. Are we just replacing one mystery with another? Are we giving up on explainability in exchange for efficiency?
Herman
In some ways, yes. But the difference is that biology has a different kind of transparency. We can observe the physical connections forming. We can see the chemical signals. It is a different language of computation. Digital AI is based on binary logic, zeros and ones. Biology is based on spikes, timing, and chemical gradients. It is much more fluid. And that fluidity is what allows for what we call biological intelligence, the ability to generalize from very few examples. If you show a child a picture of a cat once, they can usually identify every other cat they see for the rest of their lives, regardless of the angle or the lighting. A silicon AI might need ten thousand pictures of cats to reach the same level of accuracy because it is looking for statistical correlations in pixels, not the essence of a cat. Wetware might give us a way to build systems that understand the world more like we do.
Corn
That brings up a really interesting point about the future of hardware. If we are moving toward these hybrid systems, what does that mean for the people building the chips? Are we going to see a merger of biotech companies and semiconductor companies? Because it seems like the next major bottleneck isn't going to be how many transistors you can fit on a chip, but how well you can interface biology with silicon. Are we going to see Intel buying up stem cell labs?
Herman
You are hitting on the exact direction the industry is heading. We are seeing the emergence of biocompatible interfaces. It is not just about gold electrodes anymore. Researchers are looking at conductive polymers and carbon nanotubes that can wrap around individual neurons without damaging them. The goal is to create a seamless high bandwidth connection between the digital world and the biological world. If you can do that, you open up possibilities that go far beyond just playing Pong. You could use these neural arrays for drug discovery, for example. Instead of testing a new neurological drug on a lab rat or in a computer simulation that might not be accurate, you could test it directly on a dish of human neurons and see exactly how it affects their firing patterns in real time. It would revolutionize how we treat diseases like Alzheimer's or epilepsy.
Corn
That is a much more practical application than I was imagining. It is a way to bridge the gap between a petri dish and a clinical trial. But let's talk about the elephant in the room here. The ethics. When I saw that video Daniel sent, my first thought was, does that cluster of cells feel anything? If it can learn, if it can respond to its environment, if it can feel surprise or frustration when it misses the ball, at what point does it become a sentient entity? We are using human cells sometimes. Does that change the moral equation?
Herman
That is the million dollar question, Corn, and it is one that the scientific community is actively debating right now. Cortical Labs is very careful to use the term synthetic biological intelligence rather than consciousness or sentience. They argue that these are just stimulus response systems. They are clusters of cells following basic biological imperatives. They do not have a body, they do not have a limbic system to process emotions, and they do not have a prefrontal cortex for high level thought. They are essentially a brain without a person. But, as we scale these systems up, the line gets blurrier. If we get to ten million neurons, or a hundred million, we are moving into the territory of small mammals.
Corn
Right, because if one million neurons is a cockroach, what happens when we get to one hundred million? That is a small mammal. What happens when we get to eighty six billion, which is the number of neurons in a human brain? At what point do we owe these chips moral consideration? If you turn off the power to a silicon chip, you are just stopping a flow of electrons. You can turn it back on later. If you stop the nutrient flow to a wetware chip, you are killing a living thing. It is gone forever. Does a chip have a right to life? It sounds like a joke, but it feels like we are stumbling into a moral minefield.
Herman
It is a serious legal and ethical frontier. There is a concept called the sentience threshold. Some philosophers argue that any system capable of experiencing valenced states, meaning it can feel that something is good or bad for its own survival, has at least a baseline level of moral status. If these neurons are actively avoiding random noise because they find it unpleasant, are they experiencing a very primitive form of suffering? And if they are, is it ethical to trap them in a dish and force them to play video games or process financial data for our benefit? We are essentially creating life to be a tool.
Corn
It is a bizarre thought. We have been worried about AI becoming sentient and taking over the world, but maybe the real concern is that we are going to create a new class of biological slaves that we do not even recognize as being alive. And from a conservative perspective, there is a real question about the sanctity of life here. If we are using human neurons derived from stem cells, are we devaluing what it means to be human by turning our very essence into a piece of industrial hardware? Are we treating the building blocks of humanity as just another commodity like copper or silicon?
Herman
That is a very valid concern. There is a risk of reducing human biology to a mere component. On the other hand, from a pro innovation standpoint, the potential to cure diseases is immense. If we can understand how neurons learn and how they break down, we could save millions of lives. It is that classic tension between the incredible promise of technology and the need to maintain our moral and ethical boundaries. We have to ask ourselves where we draw the line between a tool and a being. Is it the number of neurons? Is it the presence of certain structures? We don't have an answer yet.
Corn
And there is also the geopolitical angle. If this technology really does provide a massive leap in energy efficiency and learning speed, it becomes a strategic asset. You can imagine a world where the next arms race isn't just about who has the fastest supercomputer or the most GPUs, but who has the most advanced wetware labs. If you are a country like the United States or Israel, you want to be at the forefront of this, because the implications for national security, for processing massive amounts of sensor data in real time on a drone with a tiny battery, are huge.
Herman
Imagine a drone that doesn't need a massive power hungry processor to navigate through a forest, but instead has a small biological core that can handle spatial awareness with the efficiency of a bird. That is a massive tactical advantage. But it also creates a logistical nightmare. How do you maintain a biological fleet? How do you keep your hardware from catching a virus, a literal biological virus? The security protocols would have to be entirely different. You are not just worried about hackers; you are worried about contamination and genetic drift. You might need a veterinarian on your IT team.
Corn
It is almost like we are going back to a more organic way of living, but in the most high tech way possible. It is a strange synthesis. We spent the twentieth century trying to conquer nature with steel and silicon, and now in the twenty first century, we are realizing that nature is actually the better engineer. We are trying to fold biology back into our machines because our machines are too hungry and too rigid.
Herman
It is a humbling realization. We are essentially admitting that after decades of Moore's Law and billions of dollars in research, we still cannot match the efficiency of a single gram of brain tissue. But that is also what makes it so exciting. We are moving away from the rigid, deterministic world of traditional computing and into something much more dynamic. We are learning to collaborate with biology rather than just trying to simulate it.
Corn
I wonder about the scalability of this. You mentioned that keeping these cells alive is the hard part. Do you think we will ever see a point where we have standardized bio boxes that you can just plug into a server rack? Like, a standardized life support system that handles the feeding and the temperature control, so the user only sees the computational output? A literal black box that you feed sugar water into?
Herman
That is exactly what companies like Cortical Labs are working on. They are trying to create a plug and play interface for wetware. They want to turn the biological component into a module. You would have your nutrient reservoir, your waste management, and your thermal control all integrated into a single unit. From the outside, it might look like a regular server blade, but inside, it is a living ecosystem. The challenge is the reliability. Biology is messy. Neurons die, they grow in unexpected ways, they can get exhausted. In their experiments, the neurons actually performed better when they were given breaks, just like humans. You cannot run them at one hundred percent capacity twenty four hours a day without them degrading.
Corn
Wait, so the computer needs to take a nap? That is hilarious. I can see the IT tickets now. My computer is sluggish, did you try giving it a twenty minute power nap and some sugar water? It adds a whole new layer to the concept of downtime.
Herman
It sounds funny, but it is true. There is a metabolic cost to computation in biology. When neurons fire, they use up ATP, which is the cellular energy currency. They also produce metabolic waste products that need to be cleared away. If the waste builds up, the neurons slow down and their performance drops. This is why we sleep, to let our brains clear out those toxins. So, a wetware computer would literally need a sleep cycle to function at peak performance. It is a completely different paradigm from silicon, which doesn't care if it has been running for a year straight as long as it doesn't overheat.
Corn
This really changes the concept of uptime. We are used to five nines of reliability, ninety nine point nine nine nine percent. You are not going to get that with a biological system. You are going to have to design your software to be resilient to the hardware being occasionally offline or acting a bit moody. It is a much more organic, forgiving style of engineering. It is less like building a bridge and more like tending a garden.
Herman
It actually reminds me of what we discussed in episode six hundred thirty three about the memory wars and the future of local agentic AI. As we move toward these agents that are constantly running in the background, we need hardware that can handle long term, low power tasks. Wetware might be the perfect fit for that. It doesn't need to be the fastest at math, it just needs to be the best at staying aware and adapting to small changes in the environment over long periods without draining the battery.
Corn
So, if I am a listener and I am hearing all this, what is the takeaway? Is this something that is going to affect my life in the next five years, or is this twenty years away? Should I be looking for bio chips in my next laptop?
Herman
In the next five years, you will probably see this primarily in the pharmaceutical and medical research space. The ability to create mini brains or organoids on a chip to test drugs is already happening and it is going to accelerate. It is going to make drug development faster and cheaper. As for having a biological processor in your laptop? We are probably looking at a ten to twenty year horizon for that, if it happens at all. The logistics of keeping cells alive in a consumer device are just too difficult right now. But keep an eye on the high end data centers and niche industrial applications. That is where the early adoption will happen.
Corn
It is also worth watching the companies that are building the interface layers. The gold rush isn't just in the neurons themselves, but in the sensors and the software that can translate between biological spikes and digital code. That is where the real intellectual property is being built right now. The bridge is just as important as the destination.
Herman
And for the tech enthusiasts out there, I would say start thinking about biological intelligence as its own category. We have had human intelligence, then we had artificial intelligence, and now we are seeing the rise of this third category, synthetic biological intelligence. It has its own strengths and its own very weird weaknesses. It is not going to replace silicon, but it is going to complement it in ways that we are only just beginning to understand. It is a hybrid future.
Corn
It is a bit of a reality check for our ego as creators, isn't it? We thought we were building something entirely new with AI, but it turns out we are just trying to find a different way to package what nature already perfected. It is like we are coming full circle, from using our brains to build computers, to using computers to grow brains.
Herman
It really is. And I think that is a good place to start wrapping this up. This field is moving incredibly fast. If you had told me five years ago that we would have a dish of cells playing a video game and learning the rules in minutes, I would have been skeptical. But here we are in March of twenty twenty six, and it is a reality. The line between the born and the made is getting thinner every day.
Corn
Well, if my computer ever starts asking for a snack or a nap, I will know who to blame. This has been a fascinating deep dive. I want to thank Daniel for sending this in. It is one of those topics that really makes you look at the world, and your own brain, a little bit differently. It makes you realize how much work your own twenty watt processor is doing right now just to listen to this.
Herman
It really does. If you guys are enjoying these deep dives into the weird frontiers of technology, we would really appreciate it if you could leave us a review on your podcast app or on Spotify. It helps other curious minds find the show and helps us keep doing this. We love exploring these rabbit holes with you.
Corn
Yeah, it genuinely makes a big difference. And if you want to find more episodes or see the full archive, head over to myweirdprompts dot com. You can find the RSS feed there and all the different ways to subscribe. We have got over a thousand episodes in the vault now, covering everything from chip manufacturing to the ethics of AI logic.
Herman
And if you are on Telegram, you can search for My Weird Prompts to join our channel. We post every time a new episode drops, so you will never miss a prompt.
Corn
Alright, Herman, I think I need to go give my own biological processor a break. All this talk of neurons and free energy is making me realize I might need a nap myself to clear out some metabolic waste.
Herman
Fair enough. I will go check on the nutrient levels in the kitchen and see if we have any glucose left for our own biological systems.
Corn
Thanks for listening to My Weird Prompts. We will be back soon with another one.
Herman
Until next time, stay curious.
Corn
So, Herman, I was thinking, if we ever did get a wetware computer for the house, who would be responsible for feeding it? I feel like that is definitely a Daniel task since he found the video.
Herman
Oh, absolutely. He is already the one who remembers to water the plants. But imagine if the computer gets picky. What if it only performs well if you give it the premium, organic glucose? What if it develops a taste for a specific brand of nutrient broth?
Corn
Or if it starts developing a personality based on what it is processing. If it spends all day analyzing financial data and market crashes, does it become stressed out and aggressive? If it is just watching nature documentaries all day, does it become a very chill, relaxed processor?
Herman
That is the second order effect right there. We are talking about hardware that can literally be shaped by its environment. It is not just about the code you run on it, but the life it lives. It is a total shift in how we think about tools. We are moving from tools that we use to tools that we raise.
Corn
It is a bit unsettling, but also strangely beautiful. We are building machines that are a little more like us, for better or worse. We are no longer just building objects; we are building subjects.
Herman
For better or worse. Hopefully mostly for better. We will have to see how the ethics catch up with the engineering.
Corn
Let's hope so. Alright, let's get out of here before I start feeling too bad for my laptop.
Herman
See you later, Corn.
Corn
Take care, Herman.
Herman
Herman Poppleberry, signing off.
Corn
And Corn Poppleberry, signing off as well. We will see you all in the next episode.
Herman
Goodbye everyone.
Corn
Bye.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.