Here's what Daniel sent us this week. He's asking us to trace the full arc of labor history through three eras: the Industrial Era, the Knowledge Economy Era, and what he's calling the AI-Accelerated Workforce Era. The central question he wants us to dig into is: when did the knowledge economy truly begin? Because it turns out there's no clean answer — there are at least four defensible birth dates depending on how you look at it. And the reason this matters isn't just historical trivia. It's because understanding how the last great labor transformation unfolded tells us something important about the one we're living through right now.
I'm Herman Poppleberry, and I have been looking forward to this one. The history of labor transitions is genuinely one of those topics where the closer you look, the more surprising it gets. And the "when did the knowledge economy begin" question is a perfect example — most people assume there's an obvious answer, and there really isn't.
Before we dive in — quick note that today's script is brought to life by Claude Sonnet 4.6, our AI collaborator of the moment. Alright, let's go back. Because to understand why 1956 or 1959 or whenever matters, you have to understand what the Industrial Era actually was. And I think most people have a sanitized version of it.
The sanitized version being: factories appeared, productivity went up, everyone got jobs. The reality was considerably more brutal. We're talking about children as young as four years old working in factories. By the mid-1800s, child laborers in British textile mills were clocking eleven to sixteen hours a day, five or six days a week, earning as little as fifty cents a day — roughly one-eighth of what an adult doing the same work made. Friedrich Engels documented this in exhaustive, horrifying detail in 1844. The 1833 Factory Act was one of the first attempts to impose any limits at all, and even then enforcement was minimal.
And what's striking is that before this, work wasn't centralized at all. Pre-industrial work was fundamentally domestic — farms, cottages, workshops. The factory wasn't just a new location for work. It was a new conception of what work meant. You were no longer working to meet your own household's needs or your community's needs. You were a unit in someone else's production process.
Which is exactly where Frederick Winslow Taylor comes in. Taylor is one of those figures who is either the villain or the hero of this story depending on your perspective, and he was genuinely both. His 1911 book, The Principles of Scientific Management, laid out what he called the "one best way" to do any given task. You break every job into its smallest components, you time each component, you eliminate wasted motion, and you optimize. His four principles were essentially: find that one best method, match workers to tasks by capability, supervise closely, and divide planning from execution, so that managers think and workers do.
Which sounds reasonable until you realize the implication: workers aren't supposed to think. They're elements of the production process. Taylor himself increasingly talked about workers in those terms. And the labor movement hated him for it, understandably. There were congressional hearings about Taylorism. Socialists attacked it for turning craftsmen into automatons.
But here's the number that makes it impossible to simply dismiss: Peter Drucker later calculated that Taylorism produced a fifty-fold increase in the productivity of the manual worker over the twentieth century. Fifty times. That is an almost incomprehensible gain. Whatever the human cost of the system, it generated material wealth on a scale that had no historical precedent. Drucker called it the defining management achievement of the entire era.
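Worth pausing on what fifty-fold over a century actually implies. Here's a quick back-of-the-envelope, assuming (and this is my simplification, not Drucker's) that the gain compounded evenly across the hundred years:

```python
# Back-of-the-envelope: what steady annual productivity growth,
# compounded over 100 years, produces Drucker's 50x figure?
years = 100
total_gain = 50.0

annual_rate = total_gain ** (1 / years) - 1
print(f"Implied annual growth: {annual_rate:.2%}")             # ~3.99% per year

# Sanity check: compound that rate for a century, recover ~50x
print(f"Compounded check: {(1 + annual_rate) ** years:.1f}x")  # 50.0x
```

Roughly four percent a year, every single year, for a hundred years. That's what "no historical precedent" looks like when you write it down.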
So the Industrial Era at its peak is this paradox — enormous productivity gains built on a system that explicitly treated human beings as optimizable components. And then something starts to shift. The question is when.
This is where the "when did the knowledge economy begin" problem gets genuinely interesting, because there are four credible answers and they're not in competition — they're measuring different things. The first and most concrete is 1956. That's the year white-collar workers in the United States first outnumbered blue-collar factory and manual laborers. Not a theory, not a book — a census fact. The economic center of gravity had already shifted before anyone had a name for what was happening.
Which is a fascinating inversion of how we usually think about intellectual history. We tend to assume ideas precede reality — someone has a concept, and then it shapes the world. But here the demographic transformation happened first, and the frameworks came after.
Three years later, to be precise. In 1959, Peter Drucker coined the term "knowledge worker" in his book Landmarks of Tomorrow. His definition was specific: high-level workers who apply theoretical and analytical knowledge, acquired through formal training, to develop products and services. Not just anyone with a desk job, but someone whose primary tool is what they know rather than what they can physically do.
And Drucker had this insight that I think is underappreciated even now: knowledge workers own their means of production. In the Marxist framework, the great power imbalance of industrial capitalism was that workers didn't own the machines — the capitalists did. But a knowledge worker's means of production is the knowledge in their head. It's portable, it's theirs, it travels with them when they quit. That fundamentally changes the power dynamic between employer and employee.
Which is why knowledge work was so hard to manage by industrial methods. You can't apply Taylorist optimization to a lawyer preparing a brief or a researcher designing an experiment, because the work is inherently non-decomposable in the same way. Or at least, it seemed that way. Drucker in 1999 — forty years after coining the term — wrote what I think is his most important paper, Knowledge-Worker Productivity: The Biggest Challenge. And his central point was that management had solved the productivity problem for manual work, thanks to Taylor, but had barely scratched the surface for knowledge work.
And the question he said you had to ask was different. Taylor's question was "how does the worker best do the job?" Drucker said the knowledge worker question is prior to that: "what is the task?" Because knowledge workers often define their own tasks. A factory worker doesn't decide what to manufacture. A consultant decides what problem to solve.
Meanwhile, in 1962, Fritz Machlup at Princeton put the first hard economic numbers on any of this. His book The Production and Distribution of Knowledge in the United States is one of those works that almost nobody reads anymore but that was genuinely shocking at the time. He introduced the concept of the "knowledge industry" (education, research and development, media, information technology) and found that twenty-nine percent of U.S. gross national product was already being generated by knowledge industries as of 1958. Twenty-nine percent. At a time when most people still thought of America as primarily a manufacturing economy.
That number is remarkable. Because it means the knowledge economy was already nearly a third of the entire American economy before anyone had coined the phrase "knowledge economy." It was hiding in plain sight.
Then in 1973, Daniel Bell synthesized all of this into a comprehensive sociological theory with The Coming of Post-Industrial Society. Bell's five dimensions are worth knowing: the shift from manufacturing to services, the pre-eminence of the professional and technical class, the centrality of theoretical knowledge as the organizing principle of innovation, the planning and assessment of technology, and the rise of what he called intellectual technology — decision theory, systems analysis, that kind of thing. Bell also noted that by the time he was writing, fifty percent of the U.S. labor force and fifty percent of gross national product already involved service work.
So you've got four birth dates: 1956 when it happened demographically, 1959 when it was named, 1962 when it was measured, and 1973 when it was theorized. My instinct is that 1956 is the most honest answer, because it's the one that doesn't require a human observer to notice it. The economy changed whether or not anyone wrote a book about it.
There's a strong case for that. Though I'd argue Drucker's contribution in 1959 wasn't just naming something — it was identifying a management problem that would take another sixty years to solve. Or perhaps be solved in a way he never anticipated.
We'll get to that. But first — the arc from 1956 to the present. Because the numbers tell a story that's almost hard to believe. In 1900, less than twenty percent of the American workforce was white-collar. More than thirty percent derived primary income from farming. By 1989, only three million of a hundred and seventeen million workers were in farming. Thirty million were in managerial and professional positions. Fifty-one million in technical support and service jobs. And by 2005, over eighty-one million Americans were in the service sector, with roughly eighty percent of GDP coming from services and intangibles — entertainment, education, healthcare, financial services.
That transformation happened in roughly a century, which sounds slow until you compare it to what's happening now. And the mechanism matters: the knowledge economy expanded partly because of the personal computer revolution in the early 1980s, which suddenly allowed knowledge work to be done at a distance, to be scaled, to be distributed. The baby boom generation — the most educated and affluent cohort in American history up to that point — created enormous demand for knowledge-based services. The service sector's share of GDP rose from around thirty-five percent to fifty-five percent between the 1970s and 1980s alone.
And this is where I want to introduce what I think is the most interesting irony in this whole story. For decades, the conventional wisdom was that knowledge work was the safe harbor. The Industrial Revolution displaced farmers and factory workers. The knowledge economy was supposed to be immune. You got a college degree, you became an analyst or a consultant or a lawyer, and you were protected from automation. That was the deal. And it was a deal that held — until it didn't.
The deal held for about sixty years, which is long enough that it became received wisdom. And then ChatGPT launched in November 2022, and within about eighteen months it became clear that the received wisdom was wrong. The disruption this time isn't aimed at physical labor — it's aimed at exactly the class of workers who thought they were untouchable.
Dario Amodei said last May that AI could wipe out half of all entry-level white-collar jobs and drive unemployment to somewhere between ten and twenty percent within one to five years. Jim Farley at Ford said AI would eliminate literally half of all white-collar workers in a decade. These aren't fringe predictions — these are the people building and deploying the technology.
And the hard data backs the direction if not the precise magnitude. MIT research found that current AI systems could already take over tasks tied to eleven-point-seven percent of the U.S. labor market of roughly a hundred and fifty-one million workers, representing about one-point-two trillion dollars in pay. Goldman Sachs put the global figure at three hundred million full-time job equivalents potentially automatable by generative AI. McKinsey found that seventy-eight percent of organizations are already using AI in at least one business function. These numbers are not projections about some distant future. They're snapshots of where things stand right now.
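And those MIT figures hang together arithmetically, which is always worth checking with headline numbers. A quick sanity check using only the values just quoted (and assuming, as a simplification, that the exposure applies uniformly by headcount):

```python
# Sanity-checking the MIT figures against each other.
workforce = 151e6        # ~151 million U.S. workers (as quoted)
exposed_share = 0.117    # 11.7% of the labor market (as quoted)
exposed_pay = 1.2e12     # ~$1.2 trillion in wages (as quoted)

exposed_workers = workforce * exposed_share
implied_avg_pay = exposed_pay / exposed_workers

print(f"Workers exposed: ~{exposed_workers / 1e6:.1f} million")  # ~17.7 million
print(f"Implied average pay: ~${implied_avg_pay:,.0f}/year")     # ~$68,000/year
```

An implied average salary in the high sixty-thousands is close to the actual U.S. mean wage, so the three numbers are at least internally consistent.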
Here's the thing that I keep coming back to, though. The comparison to the Industrial Revolution is everywhere in this discourse, and I think it's partially right and partially misleading. The Industrial Revolution took roughly a century to fully transform the labor market. The knowledge economy transition took about sixty years. If the pattern holds, the AI transition should be faster — but how much faster?
That's the right question, and the honest answer is we don't know. What we do know is that the pace of AI capability improvement is unlike anything in previous technological transitions. The railroad didn't improve by an order of magnitude every two years. Steam engines didn't. But AI models have been improving at a rate that consistently surprises even the people building them. Marco Argenti, the Goldman Sachs CIO, said in January that 2025 saw the biggest changes in technology he'd witnessed in forty years of working in the field — and that 2026 would be even bigger.
Which brings us to agentic AI, because I think this is where the current moment gets genuinely strange. The first wave of generative AI was essentially a very capable autocomplete — useful, impressive, but fundamentally a tool you had to direct. Agentic AI is something different. These are systems capable of independent planning and execution across complex, multi-step tasks. You don't just ask them a question. You give them a goal.
The Goldman Sachs prediction here is striking: companies will shift from deploying human-centric staff to deploying what Argenti called "human-orchestrated fleets of specialized multi-agent teams." And here's the economic implication that I think most people haven't fully absorbed — instead of billing clients by hours worked, these hybrid human-AI teams will charge by tokens consumed. The units of data processed by AI models. This is as fundamental a restructuring of how labor is valued as the shift from piece-rate to hourly wages during the Industrial Revolution.
The hour economy to the token economy. That's a genuinely profound reframing. For centuries, time has been the unit of labor. You sold your hours. The knowledge economy shifted somewhat toward outcomes — you billed for results — but the underlying unit was still often time. Billing by tokens consumed is something else entirely. It's billing by computational resource, which is almost completely decoupled from human effort.
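To make the contrast concrete, here's a deliberately toy comparison. Every number in it is invented for illustration (the rates, the hours, the token counts), because no one's actual pricing is public in this form:

```python
# Illustrative only: all rates and quantities below are hypothetical.

def hourly_bill(hours: float, rate_per_hour: float) -> float:
    """Hour economy: cost scales with human time spent."""
    return hours * rate_per_hour

def token_bill(tokens: int, rate_per_million_tokens: float,
               human_review_fee: float) -> float:
    """Token economy: cost scales with compute consumed,
    plus a flat fee for human judgment and oversight."""
    return (tokens / 1_000_000) * rate_per_million_tokens + human_review_fee

# The same deliverable, billed two ways.
# An analysis that takes a human analyst 40 hours at $250/hour...
print(f"Hour economy:  ${hourly_bill(40, 250):,.0f}")              # $10,000

# ...versus an agent fleet burning 30M tokens at a made-up $15/M,
# with a senior human reviewing the output for a flat $1,500.
print(f"Token economy: ${token_bill(30_000_000, 15, 1500):,.0f}")  # $1,950
```

The invented numbers don't matter. The shape does: one cost line scales with hours, the other with compute, and the human shows up only as a judgment fee at the end.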
And this connects back to Drucker in a way that I find almost poignant. Drucker said knowledge workers own their means of production — their knowledge. That was the source of their leverage. AI is now replicating that knowledge at scale. If you can train a model on the accumulated output of ten thousand lawyers or ten thousand analysts, and then deploy that model at near-zero marginal cost, the "means of production" argument collapses. The knowledge isn't portable and unique anymore. It's been externalized.
Which is essentially Taylorism applied to knowledge work. And I mean that quite literally. Taylor's insight was that you could decompose manual work into discrete, measurable, optimizable tasks and strip the craft knowledge out of individual workers by encoding it into systems and processes. For a century, knowledge work was exempt from that because it was too complex and contextual. AI is now doing exactly that decomposition — not by writing procedures and time-motion studies, but by learning from enormous datasets. The knowledge worker is being Taylorized.
The question Taylor's critics asked about factory workers is now the question being asked about knowledge workers: is this liberation or subjugation? Taylor's defenders said scientific management freed workers from backbreaking inefficiency and raised wages through productivity gains. His critics said it turned craftsmen into automatons and stripped dignity from work. Both things were true simultaneously. I suspect the same will be true of AI and knowledge work.
What I find underappreciated in most of this coverage is the entry-level problem. Because there's a specific structural consequence of AI targeting knowledge work that has no real historical precedent. The entry-level analyst, the junior associate, the research assistant, the paralegal — these roles weren't just cheap labor. They were how young people learned their professions. The bottom rungs of the career ladder. If AI is now performing those tasks, you remove the mechanism by which the next generation of senior professionals develops. You can't fast-track someone to senior partner if they've never done the analytical grunt work that teaches you how to think in that domain.
This is one of the most genuinely difficult structural problems, and I don't think anyone has a good answer yet. The WEF's Future of Jobs Report from last year projected a net increase of seventy-eight million jobs by 2030, but sixty-three percent of employers cited skill gaps as the biggest barrier. By 2030, nearly forty percent of workers' core skills will change dramatically or become obsolete. And PwC found that forty-four percent of core worker skills have been disrupted in just the last twenty-four months.
The skills half-life has collapsed. In the Industrial Era, a skill learned in your twenties could carry you through an entire career. In the knowledge economy, you needed to update every decade or so. Now we're talking about skills becoming obsolete in months.
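You can make the half-life metaphor literal with a toy decay model. The half-lives below are assumptions chosen to illustrate the eras we've been describing, not measured values:

```python
def skill_value_remaining(years_elapsed: float, half_life_years: float) -> float:
    """Toy model: a skill retains 2^(-t/h) of its market value after
    t years, given a half-life of h years. A metaphor made literal."""
    return 2 ** (-years_elapsed / half_life_years)

# Value left after two years, under three assumed half-lives:
for era, half_life in [("Industrial era    (h = 40 yrs)", 40),
                       ("Knowledge economy (h = 10 yrs)", 10),
                       ("AI era            (h = 1 yr)", 1)]:
    pct = skill_value_remaining(2, half_life)
    print(f"{era}: {pct:.0%} of value left after 2 years")
```

Under those assumptions, a skill holds ninety-seven percent of its value after two years in the industrial era, eighty-seven percent in the knowledge economy, and twenty-five percent in the AI era. Same two years, radically different depreciation.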
Which creates a fundamentally different relationship between workers and learning. It's not "upskill once and you're set" — it's continuous, urgent, and existential. The WEF identifies the top three fastest-growing skills as AI and big data, network and cybersecurity competency, and technological literacy broadly. But what's interesting is the second-order skill they identify: what Goldman Sachs calls strategic prompting and orchestration — the ability to manage multiple AI agents simultaneously. That's a skill that didn't exist three years ago and is now among the most valuable in the market.
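For what "orchestration" means in practice, here's a minimal sketch of the pattern. Everything in it is a stub: the agent names, the specialties, and the goal are invented, and the run method stands in for whatever model API a real system would call:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    specialty: str

    def run(self, subtask: str) -> str:
        # Stub standing in for a real model call scoped to this specialty.
        return f"[{self.name}] completed: {subtask}"

def orchestrate(goal: str, agents: list[Agent]) -> list[str]:
    """The orchestrator's job is Drucker's question, not Taylor's:
    decide what the tasks are, route them to the right agent, and
    judge the combined output."""
    return [agent.run(f"{agent.specialty} analysis for: {goal}")
            for agent in agents]

fleet = [Agent("researcher", "market"),
         Agent("analyst", "financial"),
         Agent("reviewer", "risk")]

for line in orchestrate("entering a new logistics market", fleet):
    print(line)
```

The skill being priced isn't writing that code. It's knowing how to decompose the goal, which specialties to put in the fleet, and how to evaluate what comes back.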
PwC also found a fifty-six percent wage premium for AI-fluent professionals, and that's a number worth sitting with. It's not a marginal advantage. It's the difference between a comfortable career and a struggling one. And it mirrors what happened in the early knowledge economy, where college-educated workers pulled dramatically ahead of those without degrees. The new credential isn't a degree. It's demonstrated AI fluency.
And there's a geopolitical dimension here that doesn't get enough attention. The OECD has flagged a growing digital sovereignty gap — roughly seventy-five percent of organizations now prioritize data control and security over speed of innovation. The knowledge economy was genuinely globalized. You could hire a brilliant analyst in Bangalore for a fraction of the cost of one in New York, and the work was equally good. The AI economy may actually re-centralize around whoever controls the foundation models. If the model is in San Francisco and the compute is in Texas, the economic value increasingly flows to those locations, not to wherever the human workers happen to be.
That's a reversal of one of the defining trends of the last thirty years of globalization. The knowledge economy distributed economic opportunity in ways the Industrial Era never did. The AI era might concentrate it again.
Which brings me to what I think is the most important practical question: what does this mean for how individuals should think about their own careers? Because the historical arc we've traced — from factory floor to knowledge worker to AI-augmented professional — isn't just intellectual history. It's a map of where we are and where we might be going.
The Drucker insight I keep returning to is the question shift. Taylor asked "how does the worker best do the job?" Drucker said the knowledge worker question is "what is the task?" I think the AI-era question is different again: "what is the judgment?" Because if AI can do the task, what remains distinctively human is the judgment about which tasks matter, what the goal should be, how to evaluate the output, what ethical constraints apply.
The Goldman Sachs CIO put it well: the workers who thrive will be those with expertise who are also the most willing to adapt. And specifically — and this is the part I find genuinely insightful — the differentiator will be the ability to reimagine what your job is in an age where AI is doing significant portions of it. Not just learning new tools, but reconceiving the role itself.
Which is actually a harder cognitive task than learning a new software package. It requires a kind of meta-awareness about your own work — what the actual value-generating components are, as opposed to the procedural components that AI can now handle. Most people, if you ask them what they do, describe the procedures. The people who will navigate this well are the ones who can answer in terms of the judgment, the relationships, the synthesis.
There's also a real conversation to be had about what institutions need to do. The WEF's net positive job projection of seventy-eight million by 2030 is contingent on massive investment in retraining and education. The current trajectory, where fifty-five thousand jobs were cut last year by companies directly citing AI as a driver, is not obviously leading to that outcome without significant policy intervention. The question of who bears the cost of continuous retraining — individuals, employers, or governments — is one of the central labor policy questions of the next decade.
And historically, that question has been answered badly during major transitions. The workers displaced by the first Industrial Revolution didn't have elegant retraining programs. They had decades of grinding poverty while the economy restructured. The knowledge economy transition was smoother partly because it happened slowly enough that the education system could adapt. The AI transition may be happening too fast for institutions to keep up.
The OECD figure of twenty-seven to twenty-eight percent of jobs at high risk of automation is worth contextualizing. In the first Industrial Revolution, estimates suggest roughly a third of the agricultural workforce was displaced over a century. We might be looking at a comparable scale of displacement in a fraction of the time. Whether the new jobs materialize fast enough — and whether they're accessible to the people who need them — is genuinely uncertain.
What I take from all of this, tracing the full arc from the factory floor to where we are now, is that every major labor transition has involved a period of genuine disruption and dislocation before the new equilibrium emerged. The Industrial Revolution was brutal for a generation of workers before the gains became broadly shared. The knowledge economy created enormous prosperity but also enormous inequality. There's no reason to assume the AI transition will be smoother than either of its predecessors.
The one thing I find genuinely hopeful — and I want to be clear that I'm not dismissing the real disruption that's happening — is that productivity gains of the kind we're talking about are historically associated with eventual rises in living standards. Taylor's fifty-fold productivity increase in manufacturing, as brutal as the implementation was, is part of why the twentieth century saw the largest sustained improvement in material living standards in human history. If AI delivers even a fraction of the productivity gains being projected, the question is whether we build institutions capable of distributing those gains broadly.
Which is a political and social question more than a technological one. The technology is going to do what it's going to do. The distribution question is up to us.
Drucker said in 1999 that increasing knowledge worker productivity was the most important contribution management needed to make in the twenty-first century. He was right about the challenge. He couldn't have imagined that the answer would come in the form of systems that don't just assist knowledge workers but in some domains replace them. The question he didn't live to ask is: what do you optimize for when the worker is no longer the bottleneck?
That might be the defining question of the next twenty years. Alright — if you've been thinking about this stuff and want to go deeper, this is a topic worth following closely. The pace of change means that what's true today may look different in six months.
Big thanks to Modal for the GPU credits that keep this whole operation running. And thanks as always to our producer Hilbert Flumingtop.
This has been My Weird Prompts. Find us at myweirdprompts dot com for RSS and all the ways to subscribe. We'll see you next time.