Herman, I saw a guy at the airport yesterday doing the digital sandwich. You know the look, holding the phone like a slice of pizza, shouting at a cursor that refused to move while everyone else in the terminal learned about his third quarter sales projections. It is a look that says, I have given up on the future.
It is a classic look, Corn, and honestly, it is a symptom of a deeper problem. We have been trying to force general purpose hardware to do a very specific, high latency job for too long. Today's prompt from Daniel is about finally killing that pizza slice posture with a dedicated voice keyboard hardware device. He is looking at the technical feasibility of a portable unit with an onboard neural processing unit for local speech to text.
I love that we are calling it a voice keyboard. It implies it is not just a gadget, but a fundamental input device. It is like Daniel realized that if our fingers get a dedicated set of mechanical switches, our vocal cords deserve some silicon love too.
The timing is actually perfect. If we had tried to build this even two years ago, we would have been stuck in the cloud dependency trap. But here in March of twenty twenty-six, the landscape has shifted entirely. We are seeing this intersection of the hardware is back trend and some massive breakthroughs in local speech recognition models.
You are talking about the Moonshine models, right? I have been seeing that name pop up everywhere lately. It sounds like something brewed in a basement, but apparently, it is lighting up the benchmarks.
It really is. Pete Warden and his team at Moonshine AI have done something incredible. They released the Moonshine suite back in late February, and it has basically turned the edge AI world upside down. To give you an idea of the scale, the Moonshine Tiny model has twenty-seven million parameters and sits at about twenty-six megabytes. On a standard Raspberry Pi, it can turn a short spoken clip into finished text in about two hundred thirty-seven milliseconds.
Wait, hold on. Two hundred thirty-seven milliseconds? I remember we were talking about OpenAI's Whisper Tiny on similar hardware not that long ago, and it was taking, what, six or seven seconds?
It was closer to six seconds, which, in the world of human conversation, is an eternity. If you have to wait six seconds for your words to appear on the screen, you are not dictating, you are sending a telegram to the past. Moonshine is roughly twenty-five times faster. It is the difference between a tool that feels like an extension of your brain and a tool that feels like a chore.
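To sanity check that speedup claim, the arithmetic is just the two quoted latencies divided against each other, roughly six seconds for Whisper Tiny versus two hundred thirty-seven milliseconds for Moonshine Tiny on comparable Raspberry Pi hardware:

```python
# The two figures quoted above, both in milliseconds.
whisper_tiny_ms = 6000   # ~6 s per utterance on a Raspberry Pi
moonshine_tiny_ms = 237  # ~237 ms on the same class of hardware

speedup = whisper_tiny_ms / moonshine_tiny_ms
print(f"{speedup:.1f}x")  # about 25.3x, i.e. "roughly twenty-five times faster"
```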
So, if the software is finally fast enough, let us talk about the bones of this thing. Daniel's concept is a dedicated device that plugs in via USB or connects over Bluetooth. He wants it to act as a human interface device, basically tricking the computer into thinking a very fast typist is at the controls. How do we actually build the proof of concept without it becoming a bulky science project?
That is where the hardware selection gets interesting. For the core controller, you would probably look at something like the ESP32-S3 or the RP2040. These are inexpensive, highly capable microcontrollers that have excellent support for USB human interface device emulation. They can handle the basic task of telling the host computer, hey, I am a keyboard, here comes some text.
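To make the "hey, I am a keyboard" trick concrete, here is a minimal sketch of the translation such a device performs: turning transcribed text into standard eight byte USB HID keyboard reports. The real firmware would be C on the ESP32-S3 (for example on top of TinyUSB), and the helper names here are illustrative rather than from any shipping project, but the usage IDs are the standard ones from the USB HID Usage Tables:

```python
# Sketch: turning transcribed text into USB HID keyboard reports.
# A boot-protocol keyboard report is 8 bytes: [modifiers, reserved, key1..key6].
# Usage IDs below come from the USB HID Usage Tables (keyboard page 0x07).

LEFT_SHIFT = 0x02  # modifier bit for Left Shift

def usage_id(char: str) -> tuple[int, int]:
    """Return (modifier byte, usage ID) for a printable character."""
    if "a" <= char <= "z":
        return 0x00, 0x04 + ord(char) - ord("a")        # a=0x04 .. z=0x1D
    if "A" <= char <= "Z":
        return LEFT_SHIFT, 0x04 + ord(char) - ord("A")  # shifted letters
    if "1" <= char <= "9":
        return 0x00, 0x1E + ord(char) - ord("1")        # 1=0x1E .. 9=0x26
    if char == "0":
        return 0x00, 0x27
    if char == " ":
        return 0x00, 0x2C                               # spacebar
    if char == "\n":
        return 0x00, 0x28                               # Enter
    raise ValueError(f"no mapping for {char!r}")

def reports_for(text: str) -> list[bytes]:
    """One key-down report plus one all-keys-up report per character."""
    out = []
    for ch in text:
        mod, key = usage_id(ch)
        out.append(bytes([mod, 0x00, key, 0, 0, 0, 0, 0]))  # key down
        out.append(bytes(8))                                 # release
    return out
```

On real hardware these reports would be written to the HID IN endpoint; the host never sees anything but keystrokes.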
But those chips alone cannot run a twenty-six megabyte neural network at two hundred milliseconds, can they? They are great for blinking lights, but maybe not for heavy lifting.
They definitely need a partner. You would pair that microcontroller with a dedicated neural processing unit. In the past, everyone reached for the Google Coral Edge TPU, but that is officially legacy hardware now. The Coral has a strict model size limit of six megabytes. Since even the smallest Moonshine model is twenty-six megabytes, the Coral is a non-starter for this project.
So the Coral is the floppy disk of the AI world now. What is the modern equivalent? What is the gold standard for twenty twenty-six?
Right now, it is the Hailo-eight. Hailo is an Israeli company, and their modules are hitting about twenty-six tera operations per second. For a voice keyboard, you could even use the Hailo-eight-L, which is their lower power version. It can handle inference in about ten milliseconds for complex models. When you combine that with the Moonshine architecture, you are getting near instantaneous transcription.
I can see the appeal for corporate types. You mentioned the Citrix bypass in our notes, and that feels like the killer feature. If you work for a big bank or a government agency, you cannot just install some random open source dictation software on your locked down workstation. But the IT department usually does not block a standard USB keyboard.
That is the genius of the hardware approach. It is driverless. The host machine does not even know it is talking to an AI. It just sees a stream of keystrokes. This solves the remote desktop problem, the virtual machine problem, and the paranoid IT manager problem all in one go. You are moving the compute from the restricted environment to a sovereign piece of hardware in your pocket.
Sovereign hardware. You make it sound like a revolutionary movement, Herman. But honestly, in an era where every software company wants a subscription and a look at your data, owning the silicon that does the thinking is a big deal. It is that local sovereignty we talked about back in episode twelve sixteen.
It really is. And the privacy aspect cannot be overstated. If you are a lawyer or a doctor, you cannot be sending patient data or legal strategy to a cloud server in another state just to get it transcribed. With a device like this, the audio never leaves the box. The only thing that exits the USB port is the finished text. It is a privacy fortress.
Speaking of fortresses, I noticed iFLYTEK is already playing in this space with their Smart Recorder Pro. They are marketing it as a privacy-first device for professionals. But that is a closed ecosystem. Daniel's idea is much more aligned with the open source movement.
iFLYTEK is the benchmark for the high end market, but they are expensive and proprietary. An open source version using off-the-shelf components like the Hailo-eight or even the new MediaTek Genio three hundred sixty would be a game changer. MediaTek just announced that chip at Embedded World earlier this month. It is a six nanometer process with eight point five tera operations per second specifically designed for these kinds of audio tasks. It is cheap, it is efficient, and it is built for exactly this kind of edge AI.
I am curious about the power budget, though. If I am carrying this around, I do not want another thing I have to charge every two hours. You mentioned those analog in-memory chips. Do they actually live up to the hype?
The EnCharge EN100 is the one to watch there. They are claiming two hundred tera operations per second at about eight watts. For a voice keyboard that is mostly idling until you speak, you could easily get twenty plus hours of continuous use out of a small battery. It is using analog compute to bypass the traditional von Neumann bottleneck, which is basically a fancy way of saying it does not waste energy moving data back and forth between memory and the processor.
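The battery math behind that "twenty plus hours" figure is easy to sanity check. In this sketch, only the eight watt active figure comes from the claim above; the idle draw, duty cycle, and battery capacity are illustrative assumptions for a device that listens most of the time and transcribes only occasionally:

```python
# Back-of-envelope battery life for a mostly-idle voice keyboard.
# Only ACTIVE_W is the claimed figure; everything else is assumed.

ACTIVE_W = 8.0     # NPU at full tilt (claimed EnCharge figure)
IDLE_W = 0.3       # assumed standby draw: MCU + mic + sleeping NPU
DUTY = 0.05        # assume actual speech 5% of the time
BATTERY_WH = 20.0  # assumed small, roughly phone-sized battery

avg_w = ACTIVE_W * DUTY + IDLE_W * (1 - DUTY)
hours = BATTERY_WH / avg_w
print(f"average draw: {avg_w:.3f} W, runtime: {hours:.1f} h")
```

Under these assumptions the average draw lands under a watt and the runtime comes out around twenty-nine hours, comfortably past the twenty hour mark.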
Okay, so the tech is there. The silicon is ready. The models are fast. But let us talk about the headache of actually bringing a product to market in twenty twenty-six. You mentioned the EU Cyber Resilience Act. That sounds like a lot of paperwork for a guy who just wants to build a cool microphone.
It is a massive hurdle for independent hardware. Starting in September of this year, any hardware sold in the European Union has to provide a full software bill of materials and have a plan for twenty-four hour vulnerability reporting. If you are a solo developer, that regulatory burden is a nightmare. It is basically the end of the era where you could just toss a cool gadget on Tindie and see what happens.
It is funny how regulation always seems to favor the big players who can afford a room full of compliance lawyers. It makes the open source, build it yourself path even more important. Maybe the move is not a finished product, but a really well-documented reference design that people can assemble themselves.
That might be the most viable path forward for a project like this. Provide the PCB files, the firmware, and the model weights, and let the community build it. It avoids the regulatory trap while still solving the problem for the power users.
Let us dig into the personal dictionary feature Daniel mentioned. That is something software dictation always struggles with. I have a weird last name, you have a weird middle name, and we both use a lot of technical jargon. How does a local hardware device handle a custom lexicon better than a cloud service?
It comes down to the decoding phase of the speech to text process. Most of these models, like Moonshine or Whisper, use a system where they predict the next token based on probability. If you have a local user lexicon stored on a MicroSD card inside the device, you can tell the engine to heavily weight specific terms. When it hears something that sounds like Poppleberry, it does not default to popular berry, it knows to pick the specific name because it is in your personal dictionary. Since it is local, you can update that list instantly without waiting for a cloud model to fine tune.
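As a rough illustration of that weighting step, here is a toy version of lexicon biasing, sometimes called shallow fusion or hotword boosting. The candidate probabilities and the boost value are invented for the example, and real engines apply the bias at the token level inside beam search rather than on whole words, but the principle is identical:

```python
import math

# Toy lexicon biasing: nudge the decoder toward words in the user's
# personal dictionary by adding a bonus to their log probabilities.

def rescore(candidates: dict[str, float], lexicon: set[str],
            boost: float = 2.0) -> str:
    """Pick the best candidate after applying a log-prob bonus to
    anything found in the user's personal dictionary."""
    def score(word: str) -> float:
        base = math.log(candidates[word])
        return base + (boost if word.lower() in lexicon else 0.0)
    return max(candidates, key=score)

# Acoustically, "popular berry" is the more likely guess...
candidates = {"popular berry": 0.60, "Poppleberry": 0.40}

# ...but the personal dictionary tips the balance.
lexicon = {"poppleberry"}
print(rescore(candidates, lexicon))   # with the boost: "Poppleberry"
print(rescore(candidates, set()))     # without it: "popular berry"
```

Because the dictionary is just a local list, updating it is an instant file edit rather than a cloud fine tune.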
So I could have a specific dictionary for when I am coding in Rust, and another one for when I am writing a grocery list. That is actually incredibly useful. It is like having a stenographer who actually knows what you are talking about.
And you can take that stenographer with you to any machine. You plug it into your laptop, it knows your words. You plug it into your desktop, it still knows your words. No syncing, no logging in, no accounts.
I do want to touch on the vibe coding trend you mentioned. There is a lot of noise in the open source world right now with these AI-generated repositories that look great on the surface but are basically hollow shells. How does a project like this avoid being labeled as disposable hardware?
Longevity is the key. A lot of these recent edge AI startups are shipping hardware that is basically a thin wrapper around a specific version of a model. If that model gets deprecated or the company goes bust, the hardware becomes a paperweight. To make this viable, the firmware has to be open and modular. You need to be able to swap out the Moonshine Tiny model for a Moonshine Medium or whatever comes next in twenty twenty-seven.
It is about building a platform, not just a product. If the hardware is basically just a high quality microphone, a neural processing unit, and a microcontroller, the community can keep the software side alive forever.
That is the goal. And we are seeing better tools for this now. Nordic Semiconductor just released the nRF9161 development kit. It integrates the neural processing unit, Wi-Fi, and Bluetooth into one package. It is designed for long term industrial use, so it is a much more stable foundation than some of the fly by night chips we saw a few years ago.
So, if I am Daniel and I want to build a proof of concept this weekend, where do I start? What is the actual assembly list?
Start with an ESP32-S3 for your brain. It is cheap, it has built in USB support, and the community support is legendary. For the AI muscle, get a Hailo-eight-L M-two module. You can interface those two using a relatively simple carrier board. For the microphone, do not skimp. Use a high quality MEMS microphone array to give the model the cleanest possible signal.
And for the software?
Use the Moonshine Tiny weights. Pete Warden's team has made them very easy to deploy on edge hardware. You can use a library like whisper dot c-p-p, which has been updated to support Moonshine architectures, to handle the heavy lifting. Within a few days, you could have a device that types what you say into a notepad on your computer with zero software installed on the PC.
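Whatever inference library ends up underneath, the firmware's main loop is conceptually simple: listen, transcribe locally, emit keystrokes. Here is a sketch with all three layers stubbed out; every function below is a placeholder standing in for the real microphone driver, the Moonshine inference call, and the USB HID stack:

```python
# Sketch of the device's main loop. All three layers are stubs that
# stand in for the real audio, inference, and USB code.

def capture_utterance(source) -> bytes:
    """Stand-in for the MEMS mic driver plus voice activity detection."""
    return next(source)

def transcribe(audio: bytes) -> str:
    """Stand-in for local Moonshine Tiny inference on the NPU."""
    return audio.decode().strip()

def send_keystrokes(text: str, typed: list[str]) -> None:
    """Stand-in for the USB HID layer; the host just sees typing."""
    typed.append(text + " ")

def run_once(source, typed: list[str]) -> None:
    audio = capture_utterance(source)
    text = transcribe(audio)
    if text:                      # skip silence
        send_keystrokes(text, typed)

# Simulated session: two utterances and one stretch of silence.
# Nothing leaves the device except finished text on the "USB" side.
mic = iter([b"hello world", b"  ", b"no cloud required"])
typed: list[str] = []
for _ in range(3):
    run_once(mic, typed)
print("".join(typed))  # "hello world no cloud required "
```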
It sounds like a fun project, but let us be honest, Herman. Is this a real product? Or is it just a very expensive way to avoid typing?
I think the Citrix bypass makes it a real product. There are millions of workers in finance, healthcare, and government who are currently trapped in the digital sandwich because their work computers are locked down. They would pay a premium for a hardware device that just works. It is the same reason people still buy high end mechanical keyboards. If you spend eight hours a day interacting with a machine, the interface matters.
I can see that. It is about reducing the friction between thought and text. If I can speak at a hundred and fifty words per minute and have it appear instantly and accurately, that is a massive productivity gain. Much better than my current strategy of typing with two fingers and hoping for the best.
And we are seeing the market move this way. The global edge AI market is projected to hit nearly a hundred and nineteen billion by twenty thirty-three. IDC called twenty twenty-six the inflection point where companies realized that moving reasoning to the edge is not just about privacy, it is about cost management. Cloud inference fees are skyrocketing. If you can do the transcription on a twenty dollar chip instead of a five cent per minute API, the hardware pays for itself in a few months.
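That breakeven claim is easy to check. In this sketch, only the five cent per minute rate is the quoted figure; the usage level is an assumption, and the device cost uses a round number for the whole unit, since the twenty dollar figure covers only the chip:

```python
# Back-of-envelope payback period for local transcription hardware.
# Only API_PER_MIN is the quoted figure; the rest are assumptions.

API_PER_MIN = 0.05        # cloud transcription price (quoted above)
MINUTES_PER_DAY = 60      # assumed: one hour of dictation per workday
WORKDAYS_PER_MONTH = 22   # assumed
DEVICE_COST = 150.0       # assumed full device cost, not just the chip

monthly_cloud = API_PER_MIN * MINUTES_PER_DAY * WORKDAYS_PER_MONTH
months_to_breakeven = DEVICE_COST / monthly_cloud
print(f"cloud cost: ${monthly_cloud:.2f}/month, "
      f"breakeven in {months_to_breakeven:.1f} months")
```

Under these assumptions the cloud bill runs about sixty-six dollars a month and the device pays for itself in a bit over two months, which squares with the "few months" figure.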
That is a very conservative, practical way of looking at it, Herman. I like it. Save money, stay private, and stop looking like you are eating your phone. It is a win for everyone.
There is one more thing to consider, though. The agentic future. We are moving toward a world where we do not just want transcription, we want action. We want to say, summarize the last ten minutes of this meeting and send the action items to Corn.
See, now you are making me do work. I was enjoying the conversation until you brought up action items.
But that is the tension! If the device is strictly local for privacy, how does it talk to your calendar or your email? You run into this wall where you have to decide: do I stay in my private fortress, or do I open a gate to the cloud to get those agentic features?
I think you keep the gate closed by default. Let the hardware handle the transcription, and let the user decide when to pipe that text into an agent. The voice keyboard should be a tool, not a tether.
That is a fair point. Keep the core functionality sovereign. If the user wants to copy-paste the text into a cloud LLM, that is their choice, but the device itself should not require it.
So, final verdict on Daniel's project? Is it a go or a no-go?
It is a strong go for a proof of concept. The pieces are all on the table for the first time. The models are small enough, the chips are fast enough, and the market frustration is high enough. If someone can navigate the regulatory mess in the EU or just focus on the North American market for now, there is a real opportunity here.
I agree. Even if it stays a niche hobbyist project, it is a vital one. It proves that we do not have to accept the subscription trap or the privacy trade-offs that big tech wants to force on us. Plus, I really want to see Herman Poppleberry walking around with a dedicated voice keyboard strapped to his belt like a high-tech walkie-talkie.
It would be a look, Corn. A very nerdy, very efficient look.
You would pull it off, buddy. You would pull it off. Well, I think we have given Daniel plenty to chew on. This feels like one of those projects that could actually make a dent in how we work.
I hope so. It is always exciting to see these technical threads start to weave together into something practical.
Well, that is probably a good place to wrap this one up. Thanks as always to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes.
And a big thanks to Modal for providing the GPU credits that power this show. They make the heavy lifting look easy.
This has been My Weird Prompts. If you are finding these deep dives useful, leave us a review on your favorite podcast app. It really does help other people find the show and helps us keep the lights on.
Find us at myweirdprompts dot com for the full archive and all the ways to subscribe.
See you in the next one.
Goodbye.