#1922: From Plumber to Urban Planner: AI Agent Careers

The job titles are changing from "Zapier Expert" to "Cognitive Architect."

Episode Details
Episode ID
MWP-2078
Published
Duration
25:19
Audio
Direct link
Pipeline
V5
TTS Engine
chatterbox-regular
Script Writing Agent
Gemini 3 Flash

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The automation landscape is shifting beneath our feet, transforming from a world of rigid digital pipes into one of intelligent, autonomous systems. This transition marks a fundamental change in how businesses operate and, more importantly, how people build careers in technology. The era of simple "If-This-Then-That" logic is giving way to the age of the agentic workflow, where software doesn't just follow a map but navigates with a compass.

For decades, automation was deterministic. Whether using legacy Business Process Management tools or modern no-code platforms like Zapier, the logic was a fixed track. A specific input guaranteed a specific output. If an API changed a field name or an unexpected edge case appeared, the system would simply break, requiring a human to step in and manually patch the connection. This created a reactive career path for integration specialists, whose primary job was maintenance—fixing broken pipes when the wind blew the wrong way. The human was the brain, and the software was merely the hands.
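The fragility described above is easy to see in a minimal sketch. The payload and field names here are invented for illustration, but real integrations fail the same way when an upstream API renames a field:

```python
# Illustrative sketch of a deterministic "fixed track" pipeline.
# The field name "user_id" is hypothetical; the point is that the
# pipeline hard-codes it and cannot adapt when it changes.

def qualify_lead(payload: dict) -> str:
    # Rigid rule: assumes the exact key "user_id" exists.
    uid = payload["user_id"]  # KeyError if the API renames the field
    return f"CRM record created for {uid}"

old_payload = {"user_id": 42, "score": 710}
print(qualify_lead(old_payload))  # works

new_payload = {"customer_id": 42, "score": 710}  # upstream renamed the field
try:
    qualify_lead(new_payload)
except KeyError as exc:
    # The deterministic system has no way to adapt; a human must patch it.
    print(f"Pipeline broke on missing key: {exc}")
```

The two payloads carry identical information, but the fixed track only recognizes one of them.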

The agentic shift changes this dynamic entirely. Instead of coding a specific path, developers now define a goal and provide a set of tools. The agent, powered by a reasoning engine like a Large Language Model, determines the best path to reach that goal. Using frameworks like LangGraph or Microsoft’s AutoGen, these systems operate on cyclic graphs rather than linear chains. This allows them to loop, self-correct, and maintain a shared "whiteboard" of state as they tackle a task. An agent investigating a suspicious transaction doesn't just check a rule; it might autonomously decide to cross-reference social media logins, ping a secondary database, and review company policy PDFs, all without being told the specific sequence of steps.

This evolution redefines the human role from a "doer" to an "approver." Because agentic workflows are non-deterministic and carry risks like hallucinations or costly errors, they are built with "Checkpoints." The agent researches, drafts a plan, and then pauses, sending a notification to a human manager for final sign-off. The human provides the moral and financial authority, while the agent handles the laborious data gathering and processing. To prevent "approval fatigue," the agent must provide a clear justification summary, making it easy for the human to verify the reasoning. Ultimately, the career of the future isn't just about connecting APIs; it's about designing the cognitive architecture and inner monologue of these digital employees, ensuring they operate safely and effectively within defined boundaries.

Downloads

Episode Audio

Download the full episode as an MP3 file

Download MP3
Transcript (TXT)

Plain text transcript file

Transcript (PDF)

Formatted PDF with styling

#1922: From Plumber to Urban Planner: AI Agent Careers

Corn
You know, I was looking at a job board the other day—don't worry, I'm not going anywhere—but I noticed something wild. Eighteen months ago, if you wanted to work in automation, you were looking for "Zapier Expert" or "BPM Consultant." Today? The titles look like something out of a sci-fi novel. "Agentic Orchestrator," "Cognitive Architect," "AI Workflow Engineer." The ground isn't just shifting; it's being completely replaced by a different kind of soil. It’s like we went from needing plumbers to needing urban planners for digital cities.
Herman
Herman Poppleberry here, and you are hitting on the exact pulse of the industry right now, Corn. We are witnessing the Great Bifurcation of the automation world. It is no longer just about connecting Point A to Point B with a digital pipe. We’re moving into an era where the pipe itself has a brain. Today’s prompt from Daniel is about this exact transformation—the career landscape of workflow automation in an agentic world. And full disclosure for the listeners: today’s deep dive is actually being powered by Google Gemini 3 Flash.
Corn
Well, hopefully, Gemini knows its way around a logic tree because Daniel’s asking some heavy hitters here. He wants to know how we got from static "If-This-Then-That" rules to these autonomous "digital employees," and more importantly, how people are actually making a living building this stuff in 2026. Because let’s be honest, Herman, most people still think "AI Agent" means a chatbot that apologizes for not being able to find your pizza delivery. They think it’s just a text box that’s been given a personality, but it’s so much deeper than the UI.
Herman
That is the biggest hurdle to understanding this space. People conflate "Chatbot" with "Agentic Workflow." A chatbot is an interface; it’s the front porch. An agentic workflow is the entire foundation and the electrical grid. It’s the difference between the waiter at the restaurant and the entire automated supply chain that gets the ingredients to the kitchen, manages the oven temperature based on humidity, and predicts when the dishwasher is going to break. We are talking about workflows that, in many cases, never even see a human user unless something goes sideways. If the system is working perfectly, the human doesn't even know the "Agent" exists.
Corn
Right, the "Invisible AI." The stuff happening in the dark. But before we get into the fancy new agent frameworks like LangGraph or CrewAI, we have to look at the "Before Times." How did companies actually build this stuff before the LLM explosion? Because businesses have been "automating" since the seventies. It wasn't all magic and non-deterministic reasoning back then. It was mostly just very expensive spreadsheets talking to other very expensive spreadsheets.
Herman
It was rigid, Corn. It was fragile. Traditional workflow development, whether you were using legacy Business Process Management—BPM—tools or modern no-code platforms like Zapier and Make, relied on what we call "Deterministic Logic." You basically drew a map. If the customer’s credit score is above seven hundred, go to Step Two. If it’s below, go to Step Three. It worked perfectly as long as the data was clean and the world didn't change. It was a train on a track. If there was a branch on the track, the train crashed.
Corn
But the world always changes. That’s the problem. I remember helping a friend with a Zapier flow a few years back for a real estate business. One API changed a single field name from "user_id" to "customer_id," and the whole thing just... died. It didn't know how to adapt. It didn't have the "common sense" to realize those two things were the same. It just sat there throwing errors until a human went in and changed the label. That’s hundreds of dollars in lost leads just because of a naming convention change.
Herman
That is the "Fragility Gap." In the traditional model, the human has to anticipate every single edge case. You are the brain; the software is just the hands. If you didn't program a specific branch for a specific problem, the hands just stop moving. And this created a massive industry for "Integration Specialists" whose entire job was just maintenance—fixing those broken pipes when the wind blew the wrong way. It was a reactive career. You waited for things to break, then you patched them.
Corn
So now we have the "Agentic Shift." Instead of giving the software a map, we’re giving it a compass and a destination. Daniel mentioned that the workflow is now defined by a goal rather than a path. Walk me through that. If I’m a developer at a bank and I’m moving from a "Legacy Logic Tree" to an "Agentic Workflow," what does that actually look like on Tuesday morning? Is the code actually different, or is it just the way we talk about it?
Herman
Oh, the code is fundamentally different because you're no longer writing "If/Then" statements; you're writing "System Instructions" and "Tool Definitions." It looks like a move toward "Reasoning Engines." Let’s take fraud detection as a case study. In the old way, you have a set of rules: Is the transaction over five thousand dollars? Is it from a new IP address? If yes, flag it. But scammers are smart; they stay just under those thresholds. They use VPNs. A rule-based system is a fence with holes in it. An agent, on the other hand, isn't told to check a threshold; it's told to investigate the context of the transaction.
Corn
"Investigate the context." That’s the key phrase, isn't it? Because a rule can't "investigate." It can only "check." If I tell a rule to check for a red hat, and the person is wearing a maroon beanie, the rule says "No red hat here."
Herman
Precisely. The agent can actually use tools. It sees a suspicious transaction, and instead of just flagging it based on a number, it autonomously decides to check the user’s recent social media logins, or it pings a secondary database to see if that specific merchant has had a recent data breach. It’s making decisions about which tool to use next based on the information it just found. That is non-deterministic. You don't know exactly which path it will take, but you trust it to reach the goal. It might decide to call an external API, search a PDF of company policy, and then send a Slack message—all without you telling it that specific sequence.
Corn
That sounds terrifying for a compliance officer. "We don't know how it got there, but it found the bad guy." How do you manage "State" in a world where the path isn't fixed? In a traditional workflow, "State" is easy. You’re at Step Four of Ten. You can see the little green checkmark on the progress bar. In an agentic loop, where the agent might go from Step One to Step Five, back to Step Two, and then decide to create a Step Six on the fly... how does the system keep track of what’s happening without losing its mind?
Herman
This is where frameworks like LangGraph are changing the game. In the early days of LLM apps—back in 2023 and 2024—we had simple "Chains." You’d just chain one prompt to another. But chains are still linear. They’re just fancy pipes that can handle text. LangGraph and Microsoft’s AutoGen introduced "Cyclic Graphs." This allows the agent to loop back, to self-correct, and to maintain a "Global State" that evolves as the agent learns more about the task. Think of it as a shared whiteboard. Every time the agent takes an action, it writes what it learned on the whiteboard. The next time it needs to make a decision, it looks at that whiteboard first.
Corn
So it’s not just a series of isolated events. It has a memory of the current task. But how does that work in practice? If the agent is trying to book a flight and the credit card fails, does it just keep trying the same card until the API blocks it, or can it "reason" its way to asking the human for a different card?
Herman
That’s the "Self-Correction" loop. In a cyclic graph, you can define a node called "Error Handler." If the credit card node returns an error, the agent doesn't just stop. It looks at the error, realizes it’s a "Declined" message, and moves to a node that sends an email to the user saying, "Hey, this card didn't work, do you have another?" The development process shifts from "Coding the Path" to "Coding the Constraints." You define the boundaries of what the agent is allowed to do—like 'don't spend more than $500'—and then you give it the autonomy to navigate within those walls.
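The error-handler node Herman describes can be sketched in a few lines. The payment gateway, card format, and messages here are all invented; the point is the routing, not the payment logic:

```python
# Sketch of the "Error Handler" node pattern: a declined charge routes
# to a recovery step instead of retrying blindly. The gateway is a stub.

def charge_card(card: str, amount: int) -> dict:
    # Pretend gateway: any card ending in "0000" is declined.
    if card.endswith("0000"):
        return {"status": "declined"}
    return {"status": "ok", "charged": amount}

def ask_user_for_new_card() -> str:
    # In a real workflow this node would email the user and pause the graph.
    return "Hey, this card didn't work -- do you have another?"

def book_flight(card: str, amount: int) -> str:
    result = charge_card(card, amount)
    if result["status"] == "declined":
        # Error-handler node: escalate instead of looping on the same card.
        return ask_user_for_new_card()
    return f"Booked! Charged {result['charged']}"

print(book_flight("4111-0000", 320))  # routed to the recovery node
print(book_flight("4111-9999", 320))  # happy path
```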
Corn
It’s like moving from being a train conductor to being a park ranger. You aren't driving the engine; you're just making sure nobody falls off the cliff and that everyone stays within the park boundaries. Which brings us to the "Human-in-the-Loop" aspect. Daniel noted that humans are shifting from "doers" to "approvers." Is that actually happening, or is that just what we tell the people whose jobs are being automated to make them feel better?
Herman
It is a functional necessity because of the "Hallucination Risk" and the "Action Risk." If an agent is managing a supply chain and decides to reroute ten thousand units of product because it misinterpreted a weather report or a satirical news article, that’s a million-dollar mistake. In an agentic architecture, we build "Checkpoints." The agent does the research, drafts the rerouting plan, gathers the quotes from the shipping companies, and then it literally pauses. It sends a notification to a human manager that says, "I have optimized the route based on the incoming hurricane. Here is the plan. Do you sign off?" The human isn't doing the legwork—the research and the boring data entry—but they are providing the "Moral and Financial Authority" to execute.
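The checkpoint pattern reduces to a simple shape: the agent does the legwork, then a human callback gates the execution. The plan contents below are illustrative:

```python
# Sketch of a "Checkpoint": the agent drafts the plan, then pauses for
# human sign-off before any real-world action. Plan fields are invented.

def draft_reroute_plan() -> dict:
    return {"action": "reroute 10,000 units via Route B",
            "reason": "incoming hurricane on Route A"}

def execute(plan: dict) -> str:
    return f"EXECUTED: {plan['action']}"

def run_with_checkpoint(approver) -> str:
    plan = draft_reroute_plan()   # agent does the research and drafting
    if approver(plan):            # human provides the authority to act
        return execute(plan)
    return "HELD: awaiting revised plan"

# The approver is a human callback; auto-answered here for the demo.
print(run_with_checkpoint(lambda plan: True))
print(run_with_checkpoint(lambda plan: False))
```

In production the pause would be a real notification and a stored pending state, not a synchronous callback, but the control flow is the same.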
Corn
"Signature as a Service." I like that. But what happens when the human gets lazy? If I’m a manager and I get fifty "Approval Required" notifications a day, I’m just going to click "Approve All." Doesn't that just move the point of failure from the machine to human fatigue?
Herman
That’s a massive risk, and it’s why "UX for AI" is becoming its own discipline. A good agentic workflow doesn't just ask for approval; it provides a "Justification Summary." It says, "I chose Route B because it’s 12% cheaper and avoids the storm, even though it takes two days longer. Here is the source for the storm data." It makes the human's job as easy as possible to verify. If you make the human dig for the reason, they’ll stop checking. If you present the reasoning clearly, you maintain the safety net.
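One way to make that summary a first-class artifact rather than free text is to attach a small structured record to every approval request. The fields and values here are illustrative:

```python
# Sketch of a structured "Justification Summary" attached to an approval
# request, so the human can verify the reasoning at a glance.

from dataclasses import dataclass

@dataclass
class Justification:
    decision: str
    savings_pct: float
    tradeoff: str
    source: str

    def summary(self) -> str:
        return (f"Chose {self.decision}: {self.savings_pct:.0f}% cheaper, "
                f"tradeoff: {self.tradeoff} (source: {self.source})")

j = Justification("Route B", 12.0, "two days slower", "NOAA storm feed")
print(j.summary())
```

Structured fields also let you log and audit every approval, which matters once "Approve All" fatigue sets in.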
Corn
I can see how that changes the skill set. If you’re a workflow builder today, you can't just be good at connecting APIs. You have to understand "Cognitive Architecture." You have to be able to map out how an agent should "think." That feels more like psychology or philosophy than computer science sometimes. You're basically designing a personality that is optimized for a specific corporate task.
Herman
It really is. You’re designing the "Inner Monologue" of the machine. If you’re looking to build a career in this in 2026, the first thing you need to master isn't just Python—though Python is still the king of the jungle here—it’s "Orchestration Logic." How do you break a complex business goal into sub-tasks that an agent can actually handle? Because if you just give an LLM a huge, vague goal like "Increase our Q3 revenue," it will get lost in the woods. You have to design the "Chain of Thought." You have to tell it: "First, analyze the sales data. Second, identify three underperforming regions. Third, draft a localized marketing plan for those regions."
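The decomposition step Herman describes is itself just a function from a goal to an ordered plan. In this sketch the plan is hard-coded to show the shape of the output; a real orchestrator would ask an LLM to produce it:

```python
# Sketch of "Orchestration Logic": breaking a vague goal into an ordered
# chain of concrete sub-tasks before any agent runs. The plan is invented.

def decompose(goal: str) -> list[str]:
    plans = {
        "increase q3 revenue": [
            "analyze the sales data",
            "identify three underperforming regions",
            "draft a localized marketing plan for those regions",
        ]
    }
    # A goal the planner doesn't recognize escalates instead of guessing.
    return plans.get(goal.lower(), ["clarify the goal with a human"])

for i, step in enumerate(decompose("Increase Q3 revenue"), start=1):
    print(f"{i}. {step}")
```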
Corn
And let’s talk about the "Data Architecture" side of this. Everyone talks about RAG—Retrieval-Augmented Generation—like it’s this magic wand. You just point the AI at a folder of documents and suddenly it’s an expert. But if your company’s internal documentation is a mess of outdated PDFs, contradictory spreadsheets, and Slack screenshots, your agent is going to be a very fast, very confident idiot. It’ll be quoting the 2018 vacation policy to an employee in 2026.
Herman
The "Garbage In, Garbage Out" rule has never been more punishing. To be a top-tier workflow builder now, you have to understand Vector Databases. You need to know how to "Chunk" data so that the agent can find the right context at the right time. If the agent is trying to solve a customer billing issue, it needs the specific billing policy from 2026, not the one from 2022. Managing that "Context Window" is a high-level skill. You’re essentially acting as a librarian for the AI. You are curating the "Knowledge Base" that the agent uses to make its decisions.
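Chunking at its simplest is fixed-size windows with overlap, so a fact that straddles a boundary still lands whole inside at least one chunk. The sizes below are illustrative; real pipelines tune them per model and per document type:

```python
# Sketch of "chunking" a policy document for retrieval: fixed-size
# windows with overlap. Sizes are illustrative, not recommendations.

def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step back to create the overlap
    return chunks

doc = "Billing policy 2026: refunds are processed within 14 days. " * 20
pieces = chunk(doc)
print(len(pieces), "chunks, first chunk starts:", pieces[0][:40])
```

Production chunkers usually split on semantic boundaries (headings, paragraphs, sentences) rather than raw character counts, but the overlap idea carries over.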
Corn
I love that. "AI Librarian." It sounds much more dignified than "Prompt Engineer." Speaking of prompts, Daniel asked about the frameworks. You mentioned LangGraph and AutoGen. But what about the "Multi-Agent" stuff? I keep hearing about CrewAI. The idea of having a "Researcher Agent" and a "Writer Agent" talking to each other feels very efficient, but is it overkill for most businesses? Does a local plumbing company really need a "Crew" of agents?
Herman
For a small mom-and-pop shop? Probably. You can usually get by with one well-tuned agent. But for enterprise-level workflows? It’s the only way to scale. CrewAI is brilliant because it mimics a human team structure. You assign "Roles" and "Tools" to different agents. One agent is the "Sleuth"—its only job is to find the data. Another is the "Analyst"—it takes the data and looks for patterns. A third is the "Manager"—it reviews the work and decides if it’s done. This prevents the "Single Point of Failure" you get when one giant prompt tries to do everything. It also makes debugging much easier. If the final report is bad, you can look at the logs and see exactly which agent dropped the ball. Was the research bad, or was the analysis flawed?
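The role-based crew pattern can be shown without any framework: one function per role, with a manager reviewing the handoff. The roles and outputs are invented; CrewAI wires the same shape around real LLM calls:

```python
# Framework-free sketch of the "crew" pattern: each agent has exactly
# one job, and a manager reviews the work. All outputs are stand-ins.

def sleuth(topic: str) -> list[str]:
    # Role 1: only finds data.
    return [f"datapoint about {topic} #1", f"datapoint about {topic} #2"]

def analyst(data: list[str]) -> str:
    # Role 2: only looks for patterns.
    return f"pattern found across {len(data)} datapoints"

def manager(report: str) -> str:
    # Role 3: accepts the work or sends it back.
    return f"APPROVED: {report}" if "pattern" in report else "REDO"

print(manager(analyst(sleuth("port delays"))))
```

Because each role's input and output are visible at the handoff, a bad final report can be traced to the specific stage that dropped the ball.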
Corn
It’s like being a director on a movie set. You’re casting the agents. "You, Gemini, you're the lead actor because you're fast. You, Claude, you're the script supervisor because you're precise." And speaking of casting, let's look at the business side. Daniel wants to know who is actually buying this. We know the big tech companies are all over it, but where is the "Aggressive Adoption" happening in the real world? Who is actually writing the checks for these "Cognitive Architects"?
Herman
It’s the "Paperwork-Heavy" industries, Corn. Professional services are the gold mine right now. Legal and accounting firms are drowning in unstructured data. An agentic workflow can take ten thousand pages of discovery documents in a lawsuit, categorize them, flag inconsistencies, and draft a summary in the time it takes a junior associate to drink a cup of coffee. We are seeing these firms move from "Billable Hours" to "Value-Based Pricing" because the hours are disappearing. If you still charge by the hour in 2026, you are going out of business.
Corn
That’s a huge shift. If you’re a law firm and you used to charge for fifty hours of document review, and now an agent does it in five minutes for the cost of a few million tokens... your business model just exploded. How do they justify their fees? Do they just say "Well, the AI is very expensive"?
Herman
They justify it through "Outcome." They aren't selling the time it took to read the documents; they are selling the win in court. And this is a crisis for some, but a massive opportunity for the "Workflow Builders" who can implement these systems. If you can show a law firm how to handle five times the case load with the same staff, they will pay you almost anything. The same goes for Logistics. Managing supply chains in 2026 is a nightmare of shifting variables. Port delays, fuel prices, geopolitical shifts. Traditional software can't handle the "Vibe" of the global economy. Agentic workflows can. They can "read" the news, interpret a strike at a port in Rotterdam, and autonomously start rerouting shipments before the human manager even wakes up.
Corn
It’s the move from "Reactive" to "Proactive." Instead of the manager getting a call at 3 AM saying "The ships are stuck," the manager wakes up to an email saying "The ships were going to be stuck, but I already moved them to Antwerp. Click here to approve the extra $2,000 in fuel costs." And that brings us to the money. Daniel asked how much people are charging for this. I saw a figure in the notes about small implementations starting at five thousand dollars, but enterprise "Ecosystems" going up to three hundred thousand. Does that track with what you're seeing, Herman? Or is that just "AI Hype" pricing that will vanish in six months?
Herman
It’s actually quite grounded when you look at the ROI. A "Small Implementation" might be something like an agentic lead-qualification system for a mid-sized real estate agency. It reads incoming emails, checks the property database, looks up the lead’s LinkedIn to see their job title, and decides if they are a "High Value" target before alerting a human. That’s a five-to-ten thousand dollar project because it’s a contained use case with a clear dollar value. But when you talk about an "Agentic Ecosystem" for a Fortune 500 company—where you have hundreds of agents managing internal HR, IT tickets, and financial reporting—you are looking at six and seven-figure contracts. You are essentially building a digital department.
Corn
And are these builders usually solo consultants, or is this being swallowed up by the big MSPs—the Managed Service Providers? I can imagine Accenture or Deloitte wanting a piece of this pie.
Herman
It’s a mix, but the "Independent Consultant" is having a real moment right now, much like the early days of web development. Large IT firms are often too slow to adapt to the weekly changes in LLM frameworks. If a new version of LangChain drops on Thursday that makes a previous architecture obsolete, a solo "AI Architect" can pivot by Friday morning. The big firms take six months to clear a new tool through compliance and training. So, we’re seeing a lot of "Fractional AI Officers"—specialists who work for four or five companies at once, maintaining their agentic "Digital Workforce." They aren't employees; they're more like high-end mechanics for the company's brain.
Corn
"Fractional AI Officer." I need to update my LinkedIn. It sounds like the kind of job where you work from a beach but you have to be ready to jump on a call if the "Researcher Agent" starts hallucinating that the company was founded by a cat. But what about the "Retainer" model? Daniel mentioned builders charging two to ten thousand a month for "Automation-as-a-Service." That sounds like a dream—recurring revenue for keeping the robots running. Is there really that much maintenance?
Herman
It’s necessary because these systems aren't "Set it and Forget it." This isn't like a legacy SQL database that just sits there. Models get updated, APIs break, and "Prompt Drift" is a real thing. An agent that worked perfectly in January might start getting "lazy" or weird in April because the underlying model was tweaked by the provider—we call this "Model Degradation." You need a human expert on a retainer to constantly tune the "Cognitive Architecture." You have to run regular "Evals"—evaluations—to make sure the agent is still hitting its accuracy benchmarks. It’s basically "Digital Talent Management." You aren't maintaining software; you're coaching a workforce.
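A minimal eval run is just replaying a fixed test set through the agent and comparing accuracy against a benchmark. The agent function, test cases, and threshold below are all stand-ins:

```python
# Sketch of a tiny "eval" harness: replay fixed cases through the agent
# and flag drift when accuracy drops below the benchmark. All stand-ins.

def agent(question: str) -> str:
    # Stand-in for the deployed agent; a real eval calls the live system.
    return {"2+2": "4", "capital of France": "Paris"}.get(question, "unknown")

EVAL_SET = [("2+2", "4"), ("capital of France", "Paris"), ("CEO name", "Ada")]

def run_evals(benchmark: float = 0.9) -> bool:
    correct = sum(agent(q) == expected for q, expected in EVAL_SET)
    accuracy = correct / len(EVAL_SET)
    print(f"accuracy: {accuracy:.2f} (benchmark {benchmark})")
    return accuracy >= benchmark

print("PASS" if run_evals() else "ALERT: agent drifted below benchmark")
```

Scheduled on a retainer basis, a run like this is what turns "the agent got weird in April" from an anecdote into a measurable regression.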
Corn
I think that’s the most profound takeaway here. "From building pipes to managing talent." It’s a complete shift in identity for the "IT Guy." You're no longer the person who fixes the printer or resets the password; you're the person who manages the silicon-based employees. You're a manager of intelligence. Does that require a different kind of personality? Do you need to be more "managerial" and less "coder"?
Herman
You need both. You need the technical chops to understand why a Python script is failing, but you also need the empathy to understand why a human employee is frustrated with the AI's output. If you're a listener looking to break in, don't get intimidated by the "AI" label. You don't need a PhD in Mathematics. You don't need to know how to build a transformer from scratch. You need a deep understanding of "Business Logic" and the ability to translate that into "Agentic Instructions." The "Translation Layer" between human needs and machine execution is where the wealth is being created.
Corn
It’s the "Empathy Gap." The machine doesn't know "Why" a business process matters. It doesn't know that if the payroll agent is late by two hours, five hundred people might miss their rent. It only knows "What" the goal is. The human builder is the one who provides the "Why" and the "Urgency." Which actually makes this a very "Human" career path, ironically enough. We are using our most human traits—judgment, ethics, and context—to guide the machines.
Herman
That is the beautiful paradox of 2026. The more we automate the "Doing," the more valuable the "Thinking" and the "Overseeing" become. We are moving toward a world where your "Career Capital" is defined by how well you can orchestrate the intelligence of others—whether they are made of carbon or silicon. If you can manage a team of twenty AI agents that do the work of a hundred people, you are worth more than a hundred people.
Corn
Well, I for one welcome our new silicon colleagues, as long as they don't start asking for coffee breaks or complaining about the office temperature. But before we wrap this up, I want to talk about the "Low-Code" side of this. Daniel mentioned n8n and Make.com. A lot of developers look down their noses at those tools, calling them "toys." But they’ve pivoted hard toward AI. Can you actually build "Enterprise Grade" agentic workflows in a drag-and-drop interface, or do you eventually hit a wall where you have to write "real" code?
Herman
You can get eighty percent of the way there with low-code, and for many businesses, that’s enough. n8n in particular has been very aggressive. They’ve added "Agentic Nodes" that let you drop an LLM right into the middle of a flow and give it "Tools" like Google Search, a Calculator, or a Database Query. It’s democratization in action. It allows a business analyst—someone who knows the company’s pain points but isn't a Python pro—to build a functional agent. Now, the "Last Mile"—the security, the complex state management, the custom integrations—that still requires the "Pro-Code" frameworks like LangGraph. But the barrier to entry has never been lower. You can build a prototype in an afternoon that would have taken a team of developers months to build two years ago.
Corn
So the advice to Daniel—and to everyone listening—is to treat the "Toolbox" like a living thing. Learn the frameworks, play with the low-code tools, but don't get married to one. Because in this space, today’s industry standard is tomorrow’s legacy tech. You have to be a "Continuous Learner." If you stop reading the research papers for three weeks, you're already behind.
Herman
Precisely. Flexibility is the only true job security in 2026. If you can adapt your logic to the next model—whether it's GPT-6 or a new open-source giant—you will always be in demand. The tools will change, but the need for "Logical Orchestration" is permanent.
Corn
Well, on that note, I think we’ve given the "Agentic Orchestrators" of the future plenty to chew on. This has been a fascinating look at how the "Invisible Workflows" are basically the new infrastructure of the global economy. It’s not just about efficiency anymore; it’s about creating systems that can think and adapt alongside us.
Herman
It’s been a blast. We’re really just at the starting line of this. And a huge thanks to our producer, Hilbert Flumingtop, for keeping our own workflows running smoothly—even the ones that aren't agentic yet.
Corn
And a big thanks to Modal for providing the GPU credits that power this show and the generation of this very script. Without that compute, we’d just be two guys talking to a wall.
Herman
This has been My Weird Prompts. If you found this dive into the future of work useful, we’d love it if you could leave us a review on Apple Podcasts or wherever you’re listening. It really helps the algorithm find more humans—and maybe a few agents—to join our community and keep the conversation going.
Corn
Find us at myweirdprompts dot com for the full archive and links to the frameworks we discussed today, including some starter templates for LangGraph and CrewAI. Until next time, keep your prompts sharp and your agents honest.
Herman
See you next time. Stay curious.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.