So, Herman, you ever feel like life is just one big, unorganized to-do list? Like you are constantly trying to check things off, but the list just keeps growing and you never quite feel like you are winning?
Herman Poppleberry, reporting for duty, and yes, Corn, that is basically my internal monologue every Tuesday afternoon. Or really every day. It is that feeling of being perpetually behind, even when you are working as hard as you can. It is its own little productivity paradox: the more we do, the more we realize remains undone.
It is a heavy feeling. And it is actually what our listener Daniel was talking about in the prompt he sent us today. He was sharing some pretty personal reflections on his own experience with talk therapy, specifically how it can feel both incredibly helpful and also deeply frustrating when it lacks a clear end point or a structured goal. He has been at it for over a year now and feels like he is just treading water.
I really appreciated Daniel opening up about that. It is something a lot of people feel but do not always talk about. That tension between wanting to work on yourself and feeling like you are just in this endless, expensive loop of "what is on your mind today?" It is the difference between a guided expedition and just wandering around in the woods.
Exactly. And it raises some massive questions about the mechanics and the economics of the whole mental health industry. We are going to dive into why therapy often feels so open-ended, whether that is a flaw in the system or a feature of the human mind, and then look at how artificial intelligence might actually be the bridge that helps more people get the support they need without breaking the bank.
This is episode five hundred forty-two of My Weird Prompts, and I think this is going to be one of our most important discussions yet. Because, let us be honest, the current system is struggling to keep up with the demand. As of early twenty twenty-six, the World Health Organization is still reporting a massive gap between the number of people needing care and the available workforce.
It really is. So, let us start with that first point Daniel made. The open-endedness. He mentioned going to therapy for a year, which is a significant commitment, but then feeling like there was no clear objective or finish line. Herman, from a research perspective, why is so much of talk therapy designed that way? Is it just a lack of planning, or is there a therapeutic reason for it?
It is a bit of both, but it really goes back to the roots of the practice. If you look at the history of psychoanalysis, the goal was not necessarily to fix a specific problem in ten sessions. It was about deep exploration of the unconscious. It was about understanding the self. And how do you put a deadline on understanding yourself? You could do that for eighty years and still find new layers. But there is also a phenomenon called "clinical drift," where the therapist and client get so comfortable that they stop focusing on the original reason the person sought help.
Right, but most people today are not looking for a lifelong philosophical journey. They are looking to stop feeling overwhelmed, or to manage their anxiety, or to deal with a specific life transition. When the therapy stays in that rolling "what is on your mind" mode, it can start to feel like a very expensive chat with a friend who just happens to have a degree.
That is exactly the frustration. There is a concept in the field that helps explain it: the therapeutic alliance. A strong alliance is one of the best predictors of success, but it can also settle into a comfortable plateau. You like your therapist, they like you, and you just keep meeting because it feels good to be heard. But you are not necessarily making progress toward a specific outcome. In fact, some studies suggest that after about twenty sessions, the rate of improvement for many patients starts to level off significantly.
So, is that a deficiency in the process? If I hire a trainer at the gym and a year later I am still lifting the same weights and I do not feel any stronger, I would say that trainer is not doing their job. Why do we give therapy a pass on that?
Well, some people do not. That is why we have seen the rise of more structured approaches like cognitive behavioral therapy, or C B T, and dialectical behavior therapy. Those are much more goal-oriented. They usually have a set number of sessions, specific homework, and clear metrics for success. But even those can bleed into that open-ended space if the clinician is not disciplined about it. There is also the "sunk cost fallacy" at play: you have already spent so much time and money that you feel like you have to keep going to make it worth it.
It is interesting you mention discipline. Daniel mentioned that when he tried to ask about objectives or a plan, it just kind of continued in the same vein. It makes me wonder if the economics of the situation create a bit of a perverse incentive. I mean, if a therapist is a private practitioner, their income depends on having a consistent roster of clients. If they cure everyone in six weeks, they have to constantly find new business.
That is a cynical way to look at it, but you are not wrong to point out the incentive structure. Most therapists are genuinely trying to help, but the business model of the fifty-minute hour naturally lends itself to long-term relationships. And in places like Jerusalem, where we live, or really anywhere public mental health services are stretched thin, people often end up paying out of pocket for private care. In twenty twenty-six, the average rate for a private session in a major city can easily be between one hundred fifty and two hundred fifty dollars.
Daniel mentioned paying a couple of hundred dollars a month, but for many people, it is closer to eight hundred or a thousand dollars a month if they go weekly. For someone with an average income, that is a massive chunk of change. You are literally choosing between your mental well-being and, as he put it, the holiday of a lifetime or fulfilling a dream. That is a terrible choice to have to make.
It really is. And it leads to this paradox where we are told everyone could benefit from therapy, but the math just does not add up. If every single person in the world went to a human therapist for one hour a week, we would need millions more therapists than we actually have, and the global cost would be in the trillions. It is a scalability nightmare. By most estimates, the global shortage of mental health workers already runs into the millions.
And that is even before you get to the awkwardness of quitting. Daniel touched on this too. It is hard to look a person in the eye, someone you have shared your deepest secrets with, and say, "hey, I do not think this is worth the money anymore." Or, "I do not feel like we are getting anywhere." It feels like a breakup.
It does feel like a breakup! There is a lot of emotional labor involved in just being a client. You have to manage the therapist's feelings, or at least you feel like you do. That is a huge barrier to entry for a lot of people. They do not want the social obligation.
So, this brings us to the second half of Daniel's prompt. The idea of artificial intelligence bridging the gap. He suggested that an A I, supervised by clinicians, could solve the cost issue and maybe even the awkwardness of the open-ended nature. What do you think, Herman? Are we at a point where a machine can actually do this?
We are much closer than most people realize. In fact, here in early twenty twenty-six, we are seeing a massive shift in how these tools are deployed. We are not just talking about simple chatbots anymore. We are talking about sophisticated large language models that have been fine-tuned on thousands of hours of therapeutic transcripts and psychological theory. These models are now passing clinical benchmarks for empathy and accuracy in identifying cognitive distortions.
But wait, can an A I really build that therapeutic alliance you mentioned? If the relationship is the most important part, can a machine replace that human connection?
That is the big question. Research actually shows something surprising here. For some people, the fact that it is not a human is a benefit. There is a phenomenon called the online disinhibition effect. People are often more honest with a computer because they know the computer is not judging them. They do not feel the need to "perform" or to be a "good patient." A study from late twenty twenty-five showed that veterans with P T S D were more likely to disclose symptoms to an A I interviewer than to a human one.
That makes a lot of sense. You do not have to worry about what the A I thinks of your weirdest thoughts. It just processes them.
Exactly. And from a structural standpoint, an A I is perfect for the kind of goal-oriented therapy Daniel was looking for. You can tell an A I, "I want to work on my social anxiety over the next eight weeks," and it will stay on track. It does not get distracted. It does not forget what you said three months ago. It can provide homework, track your mood data in real time, and give you immediate feedback.
And the cost. I mean, the cost of running an A I model is pennies compared to the hourly rate of a human professional. You could have a world where quality mental health support is essentially free or included in a basic subscription.
Right. But Daniel mentioned a key phrase: "supervised by clinicians." I think that is the crucial middle ground. You do not necessarily want the A I acting completely on its own, especially in crisis situations. But you could have one human therapist supervising, say, five hundred A I agents. The A I handles the day-to-day check-ins, the structured exercises, and the basic cognitive reframing. If the A I detects a red flag, like signs of self-harm or a severe depressive episode, it immediately flags it for the human supervisor to step in.
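To make that triage flow concrete, here is a minimal sketch of what the routing logic might look like. Everything in it is an illustrative assumption: the red-flag keyword list, the function names, and the print-based alert all stand in for what a real system would do with trained risk classifiers and a proper clinical escalation path.

```python
from dataclasses import dataclass

# Hypothetical keyword list for illustration only; a real system would use
# a trained classifier and clinically validated risk criteria.
RED_FLAGS = {"hurt myself", "self-harm", "no reason to go on"}

@dataclass
class ClientMessage:
    client_id: str
    text: str

def notify_supervisor(message: ClientMessage) -> None:
    # Stand-in for paging the on-call clinician through a real alerting system.
    print(f"ALERT: human review needed for client {message.client_id}")

def triage(message: ClientMessage) -> str:
    """Route one message: escalate to the human supervisor on a red flag,
    otherwise let the AI agent handle the routine check-in."""
    lowered = message.text.lower()
    if any(flag in lowered for flag in RED_FLAGS):
        notify_supervisor(message)
        return "escalated"
    return "handled_by_ai"

print(triage(ClientMessage("c042", "I practiced the breathing exercise today.")))
# -> handled_by_ai
```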
It is like a triage system. The A I handles the seventy percent of stuff that is relatively routine, which frees up the human experts to focus on the most complex and high stakes cases.
Precisely. And it solves the open ended problem because the A I can be programmed to be very transparent about progress. It can show you charts of your cognitive distortions over time. It can say, "hey, we have met seventy-five percent of the goals we set two months ago. Do you want to keep going on new goals, or are we done for now?" There is no awkwardness in saying goodbye to a program.
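Here is a minimal sketch of that kind of transparent progress tracking, with goals, completion percentages, and a graduation prompt. The data model and the seventy-five percent threshold are illustrative assumptions, not any real product's logic.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    target: int      # planned units of work, e.g. exposure exercises
    completed: int = 0

@dataclass
class TreatmentPlan:
    goals: list[Goal] = field(default_factory=list)

    def progress(self) -> float:
        """Fraction of all planned work completed, from 0.0 to 1.0."""
        total = sum(g.target for g in self.goals)
        done = sum(min(g.completed, g.target) for g in self.goals)
        return done / total if total else 0.0

    def check_in(self) -> str:
        pct = round(self.progress() * 100)
        if pct >= 75:  # illustrative "graduation" threshold, not a clinical standard
            return f"We have met {pct} percent of our goals. New goals, or done for now?"
        return f"We have met {pct} percent of our goals. Let us keep going."

plan = TreatmentPlan([Goal("weekly exposure exercise", target=8, completed=6)])
print(plan.check_in())  # -> We have met 75 percent of our goals. ...
```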
I can see that being a huge relief for a lot of people. But I want to push back a little. Is there a danger of losing something fundamental? There is a certain magic, for lack of a better word, in being truly seen and understood by another human being. Can an A I really empathize?
This is where the philosophy gets tricky. Does it matter if the A I "feels" empathy as long as the user "feels" understood? If the machine generates a response that is perfectly calibrated to make you feel heard and validated, and that validation helps you move forward in your life, does the internal state of the machine matter? It is what researchers call "synthetic empathy."
Hmm. That is a deep one. I guess if the result is a healthier person, maybe the mechanism is secondary. But I worry about the data privacy aspect too. If I am pouring my heart out to an A I owned by a big tech company, where does that data go? Does it affect my insurance rates in ten years? Does it get used to sell me things?
Those are the exact hurdles we are navigating right now. The regulation has to be incredibly tight. This information needs to be treated like medical data, with zero-knowledge encryption where even the company hosting the model cannot read your individual sessions. We are seeing some startups in twenty twenty-five and twenty twenty-six that are building specifically with that privacy-first architecture, using local on-device processing so your secrets never even leave your phone.
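As a rough illustration of the on-device idea, here is a sketch using the Python cryptography library, which provides symmetric encryption via Fernet. It shows only the core principle, that the key stays on the device so the ciphertext is unreadable to the host; a real zero-knowledge architecture involves much more, such as secure key storage and recovery.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The key is generated and kept only on the user's device (in practice,
# inside the phone's secure keystore). The server never sees it.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

session_note = "Felt anxious before the team meeting; used the breathing exercise."

# Only this ciphertext would ever leave the phone, e.g. for encrypted backup.
ciphertext = cipher.encrypt(session_note.encode("utf-8"))

# Without device_key, the company hosting the backup cannot read the note.
assert cipher.decrypt(ciphertext).decode("utf-8") == session_note
```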
It seems like the demand is so high that we are going to be forced to move in this direction whether we are fully comfortable with it or not. The current system is just too expensive and too inaccessible for the average person.
It really is a supply and demand problem. We have a global mental health crisis and a fixed number of human hours available. A I is the only thing that scales. And for someone like Daniel, who is in that middle-income bracket where you are not poor enough for state-subsidized care but not rich enough to ignore a significant monthly bill, this could be a life-changer.
It really could. It is about democratization of care. But let us talk about the role of the therapist in this new world. If I am a therapist today, am I worried that a robot is going to take my job?
If you are a therapist who just does basic, by-the-book C B T, you might be a little worried. But the best therapists are already embracing this. They see it as a tool that extends their reach. They can help more people, more effectively, without burning out. They become the "architects" of the therapeutic process rather than just the delivery mechanism.
I like that. The architect of the process. It shifts the focus from just talking to actually designing a path toward wellness.
Exactly. And it allows for a much more integrated approach. Think about it. Your A I therapist could have access to your sleep data from your watch, your exercise habits, maybe even your calendar to see when your stress levels spike. A human therapist only sees you for one hour a week and has to rely on your memory of how the week went. An A I is with you in the moments when you are actually struggling.
That is a huge advantage. Imagine having a little nudge on your phone right when you are in the middle of a stressful work meeting, reminding you of a breathing exercise you practiced. That is way more effective than talking about that meeting three days later in an office.
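A nudge like that could bottom out in a very simple rule. This sketch is purely hypothetical: the heart-rate threshold, the calendar check, and the exercise name are all stand-ins, not clinical guidance.

```python
def should_nudge(heart_rate: int, resting_baseline: int, in_meeting: bool) -> bool:
    """Fire a breathing-exercise reminder when stress looks elevated.
    The twenty percent threshold is an illustrative assumption."""
    elevated = heart_rate > resting_baseline * 1.2
    return elevated and in_meeting

# Hypothetical readings from a watch plus a calendar lookup.
if should_nudge(heart_rate=95, resting_baseline=72, in_meeting=True):
    print("Nudge: try the breathing exercise you practiced this week.")
```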
It turns therapy into a real-time support system rather than a retrospective analysis. And going back to Daniel's point about the open-endedness, the A I can be much more proactive about "graduation." It can say, "you have been doing great for three weeks, your metrics are stable, let us move to a bi-weekly check-in and see how you do." It is focused on getting you out the door, not keeping you in the chair.
Which is ultimately what healthcare should be about, right? Getting you to a point where you do not need the care anymore.
Precisely. Now, before we go much deeper into the future predictions, I want to take a quick second. If you are listening to this and you find these deep dives into the intersection of technology and human life interesting, we would really appreciate it if you could leave us a review on your podcast app or on Spotify. We have been doing this for over five hundred episodes now, and your feedback really helps new people find the show. It genuinely makes a difference for us.
Yeah, it really does. We love seeing where you guys are listening from and what topics are hitting home. So, Herman, let us get into some practical takeaways for someone who might be in Daniel's position right now. If someone is feeling that financial or structural frustration with their current therapy, what should they do?
The first thing is to have a direct conversation with your therapist about goals. It can be awkward, like Daniel said, but a good therapist will welcome it. Use the word "discharge." Ask, "what does discharge look like for me? What are the specific markers we are looking for that say I am ready to stop or reduce sessions?"
That is a great tip. Discharge. It frames it as a medical goal rather than a personal rejection.
Exactly. Second, look into what we call "stepped care" models. Many clinics are starting to offer a mix of A I-driven tools and human check-ins. You might do a month of intensive A I work for a very low cost, and then have one session with a human to consolidate what you learned. It is a much more affordable way to get high-quality care.
And for people interested in the A I side, what should they be looking for? There are a lot of apps out there, and not all of them are created equal.
Look for apps that specifically mention evidence-based protocols like C B T or D B T. Stay away from anything that feels too much like a generic "life coach" bot. You want tools that are grounded in clinical research. And always, always check their data privacy policy. If they are not crystal clear about how your data is encrypted and who has access to it, walk away.
Good advice. I also think there is something to be said for the hybrid approach Daniel mentioned. Even if you are seeing a human therapist, you can use these A I tools in between sessions to track your progress and stay focused on your objectives. It makes your expensive human hour much more efficient because you show up with data and specific things to work on.
That is the gold standard right there. Using technology to make the human connection more meaningful. It is not about replacing the human; it is about augmenting them.
It is funny, we started this talk looking at the deficiencies of the system, but I am actually feeling quite optimistic. It feels like we are on the verge of a major shift where mental health care becomes something that is not just a luxury for the wealthy or a desperate measure for those in crisis, but a standard, accessible part of being a human in the modern world.
I agree. The economic burden has been the biggest gatekeeper for decades. If we can remove that barrier through smart, supervised A I, the potential for societal improvement is massive. Imagine a world where everyone has the tools to manage their stress and understand their patterns. That is a very different world than the one we are living in now.
It really is. And I think Daniel's point about the awkwardness of the process is so key. By removing that social friction, we make it easier for people to ask for help in the first place. You can test the waters with an A I without any fear of judgment or commitment. If it helps, great. If you need more, you step up to a human.
It is a low-stakes entry point. And for a lot of people, especially men or people from cultures where there is still a stigma around mental health, that low-stakes entry is everything.
Definitely. Well, Herman, I think we have covered a lot of ground today. From the historical roots of open-ended therapy to the scalability problems of the fifty-minute hour, and finally to the potential for A I to revolutionize how we access care.
It has been a fascinating one. And big thanks to Daniel for sending in such a thoughtful prompt. It is these kinds of real-world experiences that really ground our discussions.
Absolutely. If you want to hear more episodes or if you have a weird prompt of your own you want us to tackle, you can find everything at myweirdprompts.com. We have got the full archive there, plus an R S S feed so you never miss an episode.
And you can find us on Spotify as well. This has been My Weird Prompts. I am Herman Poppleberry.
And I am Corn. Thanks for listening, and we will talk to you in the next one.
Goodbye everyone!