Welcome back to My Weird Prompts, everyone. We are hitting a pretty big milestone today. This is episode two hundred ninety-nine. Can you believe it, Herman? One more and we are at the big three hundred.
Herman Poppleberry here, and honestly, Corn, I am more surprised that we have managed to stay focused for two hundred ninety-nine episodes than anything else. But today is a perfect bridge to that milestone because our housemate Daniel sent us a prompt that feels very grounded in our daily lives here in Jerusalem, but also touches on some massive global shifts in technology.
It really does. So, for those who do not know, Daniel grew up in Ireland, and if you have ever talked to an Irish person for more than five minutes, you know the weather is not just a topic of conversation. It is a way of life. It is the primary antagonist in every story. And now that he has been living here in Israel for about a decade, he is noticing the contrast. He was looking at some rainfall forecast charts recently, probably because we have had quite a bit of rain here in Jerusalem this week, and it got him thinking about the state of meteorology in twenty twenty-six.
It is such a fascinating transition, going from the North Atlantic variability of Ireland to the Eastern Mediterranean. In Ireland, you have these massive systems rolling off the ocean constantly. Here, it is much more seasonal. You have the long, dry summer where the forecast is essentially just a copy-paste of hot and sunny for five months, and then you have this very concentrated, often intense winter rainy season. But Daniel’s question goes deeper. He wants to know how much of what we see on our weather apps today is actually being driven by artificial intelligence versus traditional physics models, and whether we are actually getting better at this.
Right, because there is this feeling, at least for me, that despite all the tech, we still get caught in the rain without an umbrella sometimes. But before we get into the accuracy, Herman, let us talk about the tools. When Daniel looks at those rainfall charts, he is seeing the output of these massive computational engines. How has that changed recently? I mean, we are sitting here in early twenty twenty-six. What does the engine room of weather forecasting look like today compared to even five years ago?
It is a real paradigm shift, Corn. For decades, weather forecasting was built on what we call Numerical Weather Prediction, or NWP. Basically, you take the laws of physics—fluid dynamics, thermodynamics—and you turn them into a giant set of differential equations. You divide the atmosphere into a three-dimensional grid, and you use supercomputers to calculate how air moves from one box to the next. It is incredibly computationally expensive. We are talking about thousands of processors running for hours just to give us a ten-day outlook.
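A toy illustration of what Herman is describing, with heavy hedging: real NWP solves the full three-dimensional primitive equations, but a one-dimensional advection step in plain NumPy shows the grid-and-timestep idea. The grid size, wind speed, and initial blob are all invented for illustration.

```python
# Toy sketch of the NWP idea: discretize a fluid equation on a grid and step it
# forward in time. Real models solve the full 3-D primitive equations; this is
# 1-D linear advection with a simple upwind scheme.
import numpy as np

nx, dx, dt, c = 100, 1.0, 0.5, 1.0             # grid points, spacing, time step, wind speed
u = np.exp(-0.05 * (np.arange(nx) - 20) ** 2)  # an initial "blob" of some quantity

def step(u, c, dx, dt):
    # Upwind finite difference: each grid box is updated from the neighbour on
    # the side the wind blows from (stable when c * dt / dx <= 1).
    return u - c * dt / dx * (u - np.roll(u, 1))

for _ in range(60):                            # integrate forward in time
    u = step(u, c, dx, dt)

print(f"peak has moved to grid index {int(u.argmax())}")  # the blob advects downstream
```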
And that is the traditional way. That is the physics-first approach. But Daniel asked about machine learning. Where does the AI come in?
This is where it gets wild. In the last few years, especially leading into twenty twenty-six, we have seen the rise of data-driven machine learning models like Google’s GraphCast, Nvidia’s FourCastNet, and Huawei’s Pangu-Weather. These models do not directly solve the full set of physics equations in real time. Instead, they learn from over forty years of historical weather data—specifically reanalysis datasets like ERA-five—and they pick up the patterns. It is like the difference between a chef who follows a strict chemical recipe for a cake and a grandmother who just knows by the feel of the dough how it is going to bake because she has done it ten thousand times.
That is a great analogy. So the AI model is basically saying, I have seen this atmospheric pressure and this temperature profile a thousand times before, and eighty percent of the time, it leads to rain in Jerusalem six hours later.
Exactly. And the mind-blowing part is the efficiency. A traditional physics model might take hours on a supercomputer the size of a small house. Modern AI models, like the ones ECMWF is developing under the AIFS banner or Google’s GraphCast, can generate global forecasts for many days ahead in seconds to minutes on a single powerful machine, using orders of magnitude less energy than classic runs.
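To make the chef-versus-grandmother distinction concrete, here is a minimal sketch of the data-driven approach, with the caveat that it is a toy: GraphCast and its peers use deep neural networks trained on decades of reanalysis, while this stand-in just fits a linear one-step map to synthetic data and rolls it forward.

```python
# Hypothetical toy, not GraphCast itself: fit a model that maps "atmospheric
# state now" to "state six hours later" from historical samples, then roll it
# forward autoregressively to make a forecast.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 5000, 8                      # stand-ins for reanalysis fields
X_now = rng.normal(size=(n_samples, n_features))     # state at time t
A_true = rng.normal(scale=0.3, size=(n_features, n_features))
X_next = X_now @ A_true + 0.01 * rng.normal(size=(n_samples, n_features))  # state at t+6h

# "Training": learn the one-step transition by least squares (real models use
# graph neural networks or transformers, but the principle is the same).
A_learned, *_ = np.linalg.lstsq(X_now, X_next, rcond=None)

# "Inference": a 10-day forecast is just the learned step applied 40 times
# (40 x 6 hours), which is why it runs in seconds rather than supercomputer-hours.
state = rng.normal(size=n_features)
for _ in range(40):
    state = state @ A_learned
print("forecast state:", np.round(state, 2))
```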
That is a huge claim, Herman. If they are more efficient and often very accurate, why do we still have human meteorologists? Daniel asked how much of this involves humans sifting through data. If the AI is that good, are the humans just there to read the teleprompter?
Not at all. And this is a point of friction that is really interesting. The AI models are great at predicting the most likely outcome based on past patterns, but they struggle with what we call out-of-distribution events. Basically, if the climate is changing and we start seeing weather patterns that have never happened in the last forty years—like an unprecedented heatwave or a freak Mediterranean hurricane—the AI might not know what to do with that because it has no historical reference. The physics models, however, are based on the fundamental laws of nature, which do not change. So, the humans are there to act as the ultimate quality control and to focus on impact-based forecasting.
Impact-based forecasting? What does that mean exactly?
It means instead of just saying it will rain ten millimeters, the meteorologist is looking at the data and saying, this specific ten millimeters of rain will cause flooding on the Ayalon Highway because the ground is already saturated. In twenty twenty-six, the job of a meteorologist at a place like the Israel Meteorological Service is increasingly about being a data translator rather than a number cruncher. They look at what we call ensemble forecasts. For example, centers like ECMWF run ensembles with dozens of different versions of the future. If most of those members say it will rain, forecasters have high confidence. If it is split fifty-fifty, they have to use their expertise to understand why the models are disagreeing.
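A quick sketch of the ensemble arithmetic Herman describes, with made-up member values: count how many members cross a rain threshold, and treat the spread between members as a measure of disagreement.

```python
# Turning an ensemble into confidence: the fraction of members that exceed a
# rain threshold at one location. The 16 member values here are invented.
import numpy as np

members_mm = np.array([12.0, 0.5, 8.2, 15.1, 0.0, 9.7, 11.3, 0.2,
                       7.8, 13.4, 10.1, 0.0, 6.5, 14.0, 9.1, 0.4])

threshold_mm = 1.0                               # "measurable rain" threshold
prob_rain = (members_mm >= threshold_mm).mean()  # fraction of members that say rain
spread = members_mm.std()                        # disagreement between members

print(f"probability of rain: {prob_rain:.0%}, ensemble spread: {spread:.1f} mm")
# High probability with low spread means a confident forecast; a 50/50 split
# with a large spread is where the human forecaster earns their salary.
```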
I like that. It reminds me a bit of what we discussed in episode two hundred ninety-seven when we were talking about voice cloning. It is that last mile of human touch that makes the difference between something that is technically correct and something that is actually useful. But let us get specific about Israel, since that is where Daniel is looking at his charts. He asked about the different models and which ones we use here. I know I see names like the Euro model and the GFS mentioned all the time.
Right, those are the big two global players. The GFS is the Global Forecast System from the United States, and then you have the ECMWF, or the Euro model. For a long time, the Euro has been the king because it uses advanced data assimilation techniques like four-D-Var and related hybrid methods, which blend huge numbers of observations collected over a window of time with the model's own physics to pin down the best possible starting state for the forecast.
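For flavour, here is the variational idea behind four-D-Var boiled down to a single variable, leaning on SciPy for the minimisation. The real thing works on millions of variables over a time window, and the numbers below are made up, but the shape of the cost function is the same: stay close to both the model's first guess and the observation, weighted by how much you trust each.

```python
# A one-variable sketch of variational data assimilation: blend a model
# "background" guess with an observation, weighting each by its error variance.
import numpy as np
from scipy.optimize import minimize_scalar

x_background = 18.0       # model's first guess of, say, temperature (deg C)
y_observation = 19.2      # what a station or satellite actually measured
var_b, var_o = 1.0, 0.25  # assumed error variances of background and observation

def cost(x):
    # J(x) = (x - xb)^2 / var_b + (x - y)^2 / var_o:
    # distance to both sources, each normalised by its uncertainty.
    return (x - x_background) ** 2 / var_b + (x - y_observation) ** 2 / var_o

x_analysis = minimize_scalar(cost).x
print(f"analysis: {x_analysis:.2f} C")  # lands nearer the observation, the more trusted source
```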
And here in Israel, which one are we leaning on? Because we are in a bit of a unique geographical spot. We have the Mediterranean to the west, the desert to the south and east, and these mountain ranges running down the middle. That has to be a nightmare for a global model to get right.
You hit the nail on the head. Global models have a resolution on the order of roughly ten kilometers. In a tiny country like Israel, a nine- or ten-kilometer square might cover the coast, the foothills, and half a mountain. So, the Israel Meteorological Service, or the IMS, has used a regional model known as COSMO—Consortium for Small-scale Modeling—and in recent years has been transitioning toward newer high-resolution limited-area models. They take the big-picture data from global systems like the Euro model and then run much higher-resolution simulations specifically for our area. At these resolutions, down to a few kilometers, they can tell you that it will rain in the north of Jerusalem but stay dry in the south.
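A hedged sketch of the hand-off Herman describes: coarse global output interpolated onto a much finer local grid, which a regional model would then use as its starting point and boundary forcing before re-running the physics itself. The grid spacings and rainfall values here are invented, and the real workflow involves far more than interpolation.

```python
# Downscaling step in miniature: put a coarse global field onto a fine local grid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Coarse global output over a window covering Israel (toy values, ~10 km spacing)
lat_coarse = np.linspace(29.0, 34.0, 6)
lon_coarse = np.linspace(34.0, 36.0, 3)
rain_coarse = np.random.default_rng(2).gamma(2.0, 2.0, size=(6, 3))  # mm of rain

interp = RegularGridInterpolator((lat_coarse, lon_coarse), rain_coarse)

# Fine regional grid over the same window (~2.5 km spacing)
lat_fine = np.linspace(29.0, 34.0, 24)
lon_fine = np.linspace(34.0, 36.0, 12)
grid = np.array(np.meshgrid(lat_fine, lon_fine, indexing="ij"))
rain_fine = interp(grid.reshape(2, -1).T).reshape(24, 12)

print(rain_fine.shape)  # the regional model then re-runs its own physics on this grid
```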
That is impressive. But it brings us to Daniel’s final question, which is the big one. Is weather forecasting actually getting more accurate over time? Or are we just getting better at making fancy charts that are still wrong half the time?
It is actually a measurable fact that we are getting better. Forecasters often talk about a rule of thumb that we gain about one extra reliable day of forecast skill per decade, at least in the mid-latitudes. So, a five-day forecast today in twenty twenty-six is about as accurate as a four-day forecast was in twenty sixteen. For the big-picture variables, like large-scale pressure patterns, a one-day forecast now verifies above ninety percent on standard skill scores such as the anomaly correlation. We are pushing back the curtain of uncertainty bit by bit.
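As a concrete example of the skill scores Herman mentions, here is the anomaly correlation coefficient computed on synthetic data. The convention that scores above roughly zero point six count as a useful forecast is real; the forecast and observation values below are not.

```python
# Anomaly correlation coefficient (ACC): correlate forecast and observed
# departures from climatology. Values near 1 indicate a skilful forecast.
import numpy as np

rng = np.random.default_rng(1)
climatology = 15.0                                    # long-term mean (toy scalar field)
observed = climatology + rng.normal(0, 3, size=500)   # what actually happened
forecast = observed + rng.normal(0, 1, size=500)      # a forecast with some error

f_anom = forecast - climatology
o_anom = observed - climatology
acc = np.sum(f_anom * o_anom) / np.sqrt(np.sum(f_anom ** 2) * np.sum(o_anom ** 2))
print(f"anomaly correlation: {acc:.2f}")              # close to 1 for a good forecast
```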
But wait, if we are getting so much better, why does it still feel like they miss the big storms sometimes? Like that storm back in December, when they predicted five centimeters of snow in Jerusalem and we barely got a light dusting?
That is the frustration, isn't it? But you have to remember the scale of the problem. We are trying to predict the behavior of a fluid—the atmosphere—that is wrapped around a spinning sphere with varying topography, being heated unevenly by a giant ball of fire in space. It is a chaotic system. In Israel, we have specific challenges like the Red Sea Trough. It is a weather pattern that comes up from the south and is notoriously difficult to predict because it can trigger very localized, very intense thunderstorms that the models sometimes miss.
I remember we touched on chaos theory back in the earlier days of the show. It is the idea that tiny changes in the beginning lead to massive differences later on.
Exactly. And that is the limit we are fighting. No matter how many sensors we have—and we have more than ever now, between satellites and ground stations and even commercial aircraft—we can never know the exact state of every molecule. So, while the average accuracy is way up, the specific, high-stakes events are still prone to that chaotic variability. If the temperature in the upper atmosphere is just half a degree warmer than the model thought, your snow forecast turns into a rain forecast instantly.
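The textbook demonstration of that sensitivity is the Lorenz sixty-three system, sketched below with a crude Euler integration: two runs that start one part in a million apart end up in completely different states. It is a toy, but it is the same behaviour that caps how far ahead any weather model can usefully see.

```python
# Two nearly identical starting states diverge completely in the Lorenz-63 system.
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude forward-Euler step of the Lorenz-63 equations.
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])   # a microscopically different starting point

for _ in range(8000):                # integrate both copies forward in time
    a, b = lorenz_step(a), lorenz_step(b)

# The two runs are now macroscopically different despite the near-identical start.
print(f"separation after 8000 steps: {np.linalg.norm(a - b):.2f}")
```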
It is interesting that you mentioned sensors. In episode two hundred ninety-four, we were talking about global missile detection and how those satellites are essentially just looking for heat signatures from space. A lot of that same hardware is what feeds the weather models, right?
Absolutely. The overlap between defense tech and meteorology is huge. We have geostationary satellites now that are taking high-resolution infrared images every few minutes. In twenty twenty-six, we are also seeing the rise of private satellite constellations that use radio occultation. They measure how GNSS signals like GPS are bent by the atmosphere to calculate temperature and humidity at different altitudes. It is a massive influx of data that we simply didn't have ten years ago.
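For the curious, the quantity those occultation measurements ultimately pin down is atmospheric refractivity, and the standard Smith-Weintraub relation below is what ties it to pressure, temperature and humidity. The example inputs are made up; the retrieval on the satellites effectively runs this relationship in reverse from the measured signal bending.

```python
# Refractivity from pressure, temperature and water vapour (Smith-Weintraub relation).
def refractivity(p_hpa: float, temp_k: float, e_hpa: float) -> float:
    """N-units of refractivity from pressure (hPa), temperature (K) and vapour pressure (hPa)."""
    dry_term = 77.6 * p_hpa / temp_k            # dominated by air density
    wet_term = 3.73e5 * e_hpa / temp_k ** 2     # very sensitive to humidity
    return dry_term + wet_term

# Toy numbers: humid Mediterranean air versus a hot, dry desert air mass.
print(refractivity(1010.0, 293.0, 20.0))  # moist coastal air
print(refractivity(1010.0, 303.0, 5.0))   # dry air over the Negev
```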
So we have more data, faster AI models, and higher resolution regional models. It sounds like a win-win. But I want to go back to Daniel’s experience in Ireland versus Israel. Are Irish meteorologists playing the game on hard mode?
Oh, absolutely. Ireland is on the edge of the Atlantic, right in the path of the jet stream. You have these rapidly deepening low-pressure systems, the so-called weather bombs, that can intensify in a matter of hours. The margin for error is tiny. In Israel, especially in the summer, the systems are much larger and slower. The Persian Trough, which dominates our summer weather, is very predictable. So yes, forecasting in Israel is generally easier than in Ireland, but that makes the winter storms here even more stressful. Because people are used to the weather being predictable, when a complex Mediterranean cyclone hits, the public expectation for a perfect forecast is much higher.
That makes sense. It is like being a kicker in football. If you are kicking in a blizzard, people forgive a miss. If you are kicking in a dome with no wind, you better hit it every time.
That is a perfect way to put it. And here is the other thing Daniel might find interesting. Because the climate is shifting, the Mediterranean is actually getting warmer. That adds more energy to the system. So, while our weather might be more stable on average, the extreme events—the floods, the intense heatwaves—are becoming more volatile. We are in a race between our technological capabilities and the increasing energy of the climate system.
So, we are running faster just to stay in the same place.
In some ways, yes. But I would argue we are still winning the race. The lead time we have for major weather warnings now is life-saving. Think about hurricane landfalls. Thirty years ago, we were lucky to have two days of warning. Now, we are tracking these things with decent accuracy a week out. That is thousands of lives saved.
That is a huge point. It is not just about knowing whether to bring an umbrella to the shuk. It is about infrastructure, agriculture, and safety. Speaking of agriculture, I was reading that a lot of farmers here in the Galilee are now using hyper-local AI weather stations that are connected to their irrigation systems. The fields themselves are responding to the forecast in real-time.
That is the future of the last mile. We are moving away from the idea of a weather forecast being something a person tells you on TV, and toward it being a stream of data that machines use to make decisions. Your smart home in twenty twenty-six might see a sixty percent chance of rain in two hours and automatically close your windows. The human is being moved out of the loop for the mundane decisions, which lets us focus on the big-picture implications.
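A purely hypothetical sketch of that machine-in-the-loop idea. None of these function names correspond to a real product or API; they just show how trivially a probabilistic forecast becomes an automated decision once the data stream exists.

```python
# Hypothetical home-automation rule: act when the rain probability crosses a threshold.
RAIN_PROBABILITY_THRESHOLD = 0.6  # close the windows at a 60 percent chance

def fetch_rain_probability_next_2h() -> float:
    """Stand-in for a call to whatever forecast service the home hub uses."""
    return 0.72  # invented value for illustration

def close_windows() -> None:
    print("closing windows")  # stand-in for a smart-home actuator call

if fetch_rain_probability_next_2h() >= RAIN_PROBABILITY_THRESHOLD:
    close_windows()
```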
I think that is a really important takeaway for Daniel and for all of us. We are living through this transition where the weather is becoming a data problem that we are finally starting to solve, even if nature still has a few surprises up its sleeve.
It really is a golden age for this stuff. And honestly, Corn, looking at those charts Daniel sent over, you can see the influence of these new AI layers. The way they visualize uncertainty now is so much better. Instead of just a rain icon, you see these probabilistic heat maps. It is teaching the public to think like meteorologists—to think in terms of risk and probability rather than certainty.
Which is a hard shift for people. We want to know, will it rain at four p.m. when I am walking home? We do not want to hear there is a sixty-two percent chance of localized showers.
True, but being honest about that uncertainty is actually a sign of how far the science has come. We know more about what we do and do not know, which is a huge step up from just guessing.
Well, Herman, I think we have covered a lot of ground here. From the physics of the Euro model to the lightning-fast predictions of these new AI systems, and the unique challenges of our little corner of the Mediterranean. It is a lot more complex than just looking at the clouds.
It really is. And it makes me appreciate those charts Daniel was looking at even more. There is so much invisible work—human and machine—behind every pixel on that screen.
Absolutely. Well, since we are wrapping up episode two hundred ninety-nine, I think it is the perfect time to remind everyone that we would love to hear from you. If you have been following our deep dives into everything from home leaks in episode two hundred ninety-eight to the secret economy of air cargo in episode two hundred ninety-six, please consider leaving us a review.
Yeah, it really does help. Whether you are on Spotify or whatever podcast app you use, a quick rating or a few words helps other curious people find the show. We are almost at three hundred episodes, and we would love to bring even more people along for the next three hundred.
Definitely. And you can always find the full archive and get in touch with us at myweirdprompts.com. We have the RSS feed there, and that contact form is where some of our best ideas come from.
Thanks to Daniel for the prompt this week. It definitely made me look at the Jerusalem sky a little differently today.
Me too. Even if it is still probably going to rain on me later. Alright everyone, thanks for listening to My Weird Prompts. We will see you next time for the big three hundred!
Can't wait. See you then!