So Daniel sent us this one. He's asking about AI creative frontiers — specifically, could it invent a language from scratch, write an entirely original movie script, or author a book that's actually worth reading. He wants us to assess each one: is it technically possible now, has anyone pulled it off, and what's the gap between AI producing an output and that output being good. He's setting a high bar — Tolkien-level constructed languages versus remixing grammar, feature-length screenplays versus curiosities, novels people finish reading versus press release stunts. He wants us to be honest about where the hype outruns reality and where something surprising has happened.
That is a fantastic prompt. And by the way, today's episode is being written by DeepSeek V three point two.
Is that right. Well, let's see if it gives us anything good to work with. This feels like three separate but connected questions about the nature of creativity and whether current systems can actually generate novelty, or just remix training data in clever ways.
It's the core tension in all of this. Are we looking at a stochastic parrot or something that can produce emergent, structured artifacts that stand on their own? I think we have to start with the language question, because in a way, it's the most fundamental. If an AI can invent a coherent, usable, aesthetically pleasing language, that's a profound statement about its grasp of symbolic systems.
And Daniel specified Tolkien-level. So we're not talking about a simple cipher or a relex of English with some funny words. We mean a full conlang with its own phonology, morphology, syntax, maybe even a historical evolution and writing system. The kind of thing that has depth, where the structure of the language reflects the culture of the people who speak it.
Right. So, is it technically possible with current systems? I'd say yes, in a very narrow, mechanical sense. There have been experiments, mostly academic, where researchers train a model on a corpus of linguistic rules or on many natural languages, and then prompt it to generate a new one. The outputs often have surface-level coherence — a list of words, some grammatical rules. But the pitfall, the common way this goes wrong, is that they lack internal consistency and depth. They're often just a pastiche of features from the training languages, without the underlying logic that makes a real conlang feel alive.
So it can produce grammar textbook entries for a language that doesn't exist.
Exactly that. It can describe a language. But could it use that language to express a novel thought? Could it translate a complex paragraph from English into this new language and back again without losing meaning? That's where it falls apart. There was a paper last year from a group at Stanford where they tried this. They had a model generate a conlang called, I think, "Neuralese." It could produce simple sentences, but any attempt to scale complexity revealed that the grammatical rules weren't actually generative; they were descriptive patterns the model had hallucinated. It couldn't handle recursion or nested clauses.
That’s a great example. It reminds me of a simpler test: could you write a poem in this AI-generated language? Not just translate an English poem word-for-word, but compose something new where the sound and the structure of the language itself contributes to the meaning? That requires a deep, intuitive feel for the language’s possibilities.
Which the AI categorically lacks. So the gap is between description and embodiment. A human conlanger, like Tolkien, builds the language from the inside out. They feel its constraints, they know why a certain word order feels right, they understand the historical sound shifts that led to this vocabulary. The AI is painting a picture of a language from the outside. It’s like the difference between an architect who understands load-bearing physics and a painter who draws a beautiful but structurally impossible building.
That’s a perfect analogy. But what about the process itself? How does a human conlanger actually work, versus how the AI does it? I think breaking that down shows the gap even more clearly.
A great point. A human often starts with a seed—a feeling, a sound, a cultural concept. "I want a language for sea-faring nomads, so maybe it has lots of words for different wave patterns, and its grammar is fluid and adaptable." Every decision flows from that seed. The AI starts with a statistical distribution. Its "seed" is a prompt like "Generate a new language." It then samples the most probable linguistic features from its training data. There's no central, guiding aesthetic principle. It's assembling a Frankenstein's monster of parts that look like they belong together, without a life force to animate them.
So the AI is doing a kind of linguistic averaging. And to Daniel's second question — has anyone pulled it off? I'd say no. Not in a way that has produced a usable, complete, aesthetically motivated language that a community has adopted or even seriously studied. There are AI-assisted conlang tools, sure. But the driving creative intelligence, the why behind the language's structure, is still human.
Let’s linger on that “why” for a second, because it’s crucial. Tolkien didn’t just make up Elvish words at random. The language emerged from a desire to create a specific aesthetic and historical texture for his world. The soft, flowing sounds of Quenya were meant to feel ancient and noble. That intent guided every decision. An AI has no such desire, no aesthetic goal. It can only optimize for “looks like a language.”
There's a fun fact here that illustrates this. Tolkien famously wrote that the phrase "cellar door" was one of the most beautiful-sounding in the English language. That kind of subjective, almost synesthetic judgment—linking sound to aesthetic emotion—is completely outside an AI's capability. It can learn that humans associate certain phonemes with certain feelings, but it doesn't feel them. So an AI-generated "beautiful" word is just a statistically likely sequence of phonemes that humans have labeled as beautiful in the past. It's a copy of a reaction, not a source of one.
That's beautifully put. And it leads us to the book question. If inventing a language is about creating a new system of meaning, writing a novel is about deploying an existing one to maximum effect. This one feels more immediately tangible. We've all seen the Amazon listings, the press releases about "the first AI-authored novel."
And we have to be brutally honest here. The vast majority are unreadable. Not just bad, but structurally incoherent past a certain page count. The model loses the plot, forgets character details, introduces contradictions. The technical possibility is there — you can generate two hundred thousand words that are syntactically correct English. But a book people actually want to finish? That's a different benchmark.
But there have been some interesting experiments at the edges. I remember reading about a project, I think it was in late twenty twenty-four, where a writer used Claude to co-write a romance novel. They didn't just prompt and paste; they used it for ideation, for drafting specific scenes, for dialogue polish. The human handled the overall plot architecture, character consistency, and most importantly, the emotional pacing. The book got published under a pen name and apparently sold a few thousand copies on Amazon KDP. The reviews were mixed, but some readers finished it and didn't seem to know it was AI-assisted.
That's the key distinction. As a pure text-generation engine, current AI cannot hold a novel-length narrative together on its own. The coherence window, even for the best models, isn't that long. But as a collaborative tool, in the hands of a skilled human who acts as director, editor, and quality control? It can produce commercially viable genre fiction. There was a Reuters piece a couple months ago that estimated there are already tens of thousands of AI-assisted books on Amazon, mostly in romance, sci-fi, and fantasy. The quality spectrum is enormous.
So for the book question: technically possible to generate the raw text? Yes. Has anyone pulled off a good book with AI as the sole author? No. The gap is narrative coherence, emotional truth, and authorial voice over the long haul. The AI can mimic voice in a paragraph, but it can't sustain a unique, compelling voice for eighty thousand words. It can't build thematic resonance that pays off in the final chapter because it doesn't understand the themes it's deploying; it's just pattern-matching.
Let's dig into that coherence problem. What actually happens when you ask a model to write a long narrative? How does it fail in practice?
It's a cascade of small failures that become a tidal wave. Let's say you prompt it to write a mystery novel. It might start strong, introducing a detective, a victim, a few suspects. By chapter three, it might forget the eye color of a key character. By chapter seven, it might introduce a new suspect who is logically impossible to have committed the crime based on timeline details from chapter two that the AI has now forgotten. By the climax, it might resolve the mystery with a clue it never actually planted. It has no internal model of the world it's creating; it's just predicting the next token, the next sentence. It's like trying to build a house by only ever looking at the last brick you laid.
A human author holds the entire house—the blueprint, the foundation, the load-bearing walls—in their mind as they work. They're writing sentence twenty thousand with conscious reference to sentence two hundred. The AI is, at best, working within a context window of a few hundred thousand tokens, and even when a whole manuscript technically fits, its recall of details buried deep in that window gets unreliable. In practice, it can't hold the whole thing in its "mind" at once.
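To make that failure mode concrete, here's a toy sketch. It isn't how any real model is implemented; it just shows what happens when a writer can only condition on the most recent stretch of its own story. The window size, the scenes, and the crude whitespace tokenization are all invented for illustration.

```python
# Toy illustration (not a real model): a "writer" that can only see the
# most recent N tokens of its own story when producing the next scene.
# Facts established early simply fall out of view.

WINDOW = 50  # tokens the toy writer can "see"; real models use far more

story_so_far = []

def add_scene(tokens):
    story_so_far.extend(tokens)

def visible_context():
    # The writer only conditions on the tail of the story.
    return story_so_far[-WINDOW:]

add_scene("chapter one : the detective notes the suspect s green eyes".split())
add_scene(("chapter two : " + "subplot " * 60).split())  # one long digression

ctx = visible_context()
print("green" in ctx)  # False -- the eye-color fact is outside the window,
                       # so nothing stops the next scene from contradicting it
```

One long digression is all it takes for the suspect's eye color to vanish from the visible window, which is exactly the chapter-three failure described above.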
Precisely. And that brings us to the movie script. This is fascinating because a screenplay is a blueprint, not a finished product. It's a highly structured, rule-bound format. You'd think that would play to AI's strengths.
And we have the famous, or infamous, example. "Sunspring." The twenty sixteen short film script written by a recurrent neural network, directed by an actual human. It's a glorious, hilarious mess. The dialogue is surreal, the stage directions are bizarre. "He is a scientist. He is a scientist. He is a scientist." It's a curiosity, a proof of concept that AI could output something in the correct format. But as a blueprint for a film anyone would make without the gimmick? No.
Since then, the technology has advanced light-years. You can now feed a model the entire corpus of produced screenplays and ask for a spec script in the style of, say, a Christopher Nolan thriller. And it will give you something that looks, on a scene-by-scene basis, remarkably professional. It'll have proper slug lines, character dialogue, parentheticals. But when you read it, the same problems emerge. The plot is derivative, a recombination of tropes without understanding why those tropes work. The character motivations are shallow or inconsistent. The third-act twist might be technically surprising, but it doesn't feel earned.
So it can produce a script-shaped object. Has anyone pulled off a good, original feature-length screenplay? I haven't seen it. There are competitions now for AI-written scripts, and the winners are usually those with the heaviest human editing. The gap, I think, is in understanding subtext and audience expectation. A great script isn't just what happens; it's what happens between the lines, what the characters aren't saying. AI is terrible at subtext because subtext isn't explicitly written in its training data.
There's also the collaborative nature of filmmaking. A script is the starting point for a conversation with a director, actors, a cinematographer. An AI has no ability to engage in that conversation, to adapt its writing based on a director's vision or an actor's input. It's a static document. So even if you could get a decent first draft, it lacks the plasticity a human writer brings to the development process.
Let's get concrete. Can you think of a specific moment in a film where subtext is everything, and imagine an AI trying to write it?
Oh, easily. Take the famous "I know" scene in The Empire Strikes Back. Leia says "I love you," and Han, about to be frozen, says "I know." An AI trained on romance and adventure tropes would almost certainly have Han say "I love you too." That's the statistically probable, on-the-nose response. The genius of "I know" is that it's character-defining. It tells us everything about Han's bravado, his vulnerability masked by humor, his unique relationship with Leia. An AI couldn't have written that line, because an AI doesn't understand Han Solo as a coherent persona with a history and a personality. It only understands sequences of words that follow "I love you" in action-romance contexts.
That's a perfect case study. The AI can replicate the structure of a scene, but it can't invent that kind of character-revealing, genre-defying twist because it works by reinforcing patterns, not intentionally breaking them for effect.
Let's dig a bit deeper into each of these, because I think the common thread is this idea of "originality." What does Daniel mean by "entirely original"? In a way, nothing is entirely original. Tolkien's languages were heavily influenced by Finnish, Welsh, Old English. Every screenplay exists in conversation with every other screenplay. So is the AI's remixing fundamentally different from a human's synthesis of influence?
That's the million-dollar question. I think the difference is intentionality and constraint. A human creator chooses their influences intentionally. Tolkien wanted Quenya to have a certain aesthetic, so he borrowed phonetic elements from languages that evoked that for him. He constrained his creation to serve a larger artistic goal — building the mythos of Middle-earth. An AI has no intent. It has a statistical mandate to produce probable sequences of tokens. Its "constraints" are the parameters of its prompt and its training distribution. So its remixing is accidental, not purposeful.
So the output might accidentally be novel, but it's not designed. It lacks design intent. Which is why, even when an AI produces something that seems cool or unexpected, it often feels hollow. There's no one there, making choices for a reason.
That hollowness is what readers and viewers detect, even if they can't articulate it. They sense the lack of a guiding intelligence. With a book, they might call it "soulless." With a script, they might say it's "formulaic in a bad way." With a language, they'd just find it unusable for deep expression.
Let's talk about the state of the art, though. What are the most advanced attempts in each category right now? Not the press release stunts, but the serious projects pushing the boundaries.
For languages, like I said, it's mostly academic. But there's an interesting project from a group called the "Computational Creativity Lab" at MIT. They're not trying to generate a full language outright. They're building a system that can take a set of design principles — "this language should feel harsh," "it should have a verb-final word order," "it should have a small phoneme inventory" — and then generate a consistent phonological and grammatical system that meets those specs. It's a tool for conlangers, not a replacement. And even then, the human has to provide the aesthetic principles.
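For flavor, here's a minimal sketch of what that kind of constraint-driven tool might look like. It's invented for illustration and reflects nothing about the MIT project's actual system; the spec fields, the "harsh" versus "soft" consonant pools, and the syllable template are all made up. The human supplies the aesthetic specs, and the code just enforces internal consistency.

```python
import random

# Hypothetical, heavily simplified constraint-driven conlang helper.
# The human provides design principles; the tool guarantees that every
# generated word obeys them.

SPEC = {
    "feel": "harsh",       # biases consonant choice toward stops and fricatives
    "inventory_size": 8,   # "small phoneme inventory"
    "syllable": "CVC",     # one fixed syllable template, for consistency
}

HARSH_CONSONANTS = list("kgtdqxzr")
SOFT_CONSONANTS = list("lmnwsyfv")
VOWELS = list("aeiou")

def build_inventory(spec):
    pool = HARSH_CONSONANTS if spec["feel"] == "harsh" else SOFT_CONSONANTS
    consonants = random.sample(pool, spec["inventory_size"] - 3)
    vowels = random.sample(VOWELS, 3)
    return consonants, vowels

def make_word(consonants, vowels, syllables=2, template="CVC"):
    # Every word obeys the same template -- the consistency checking a
    # human conlanger would otherwise do by hand.
    word = ""
    for _ in range(syllables):
        for slot in template:
            word += random.choice(consonants if slot == "C" else vowels)
    return word

consonants, vowels = build_inventory(SPEC)
print("inventory:", consonants, vowels)
print("sample lexicon:", [make_word(consonants, vowels) for _ in range(5)])
```

Notice where the creativity lives: the choice of "harsh," the size of the inventory, the syllable shape. All of it still comes from the human.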
So it's automating the grunt work of consistency-checking, not the creative spark. It’s like a spell-checker for a language that doesn’t exist yet.
Precisely. For novels, the cutting edge is in what's being called "serialized AI fiction." Platforms in the Kindle Vella mold are seeing authors use AI to help maintain a brutal publishing schedule — generating chapter drafts based on detailed outlines. The human author then rewrites heavily for voice and continuity. It's less about creating a masterpiece and more about maintaining output velocity for a serial audience that might be more forgiving of certain rough edges. There's also fascinating work in interactive fiction, where the AI generates branching narrative paths in real time, but again, within a framework and constraints set by a human designer.
And for screenplays?
The real action is in TV, surprisingly. There are startups pitching AI tools to writers' rooms for generating "beat sheet" variations, brainstorming episode ideas within an established show bible, or even writing first passes of "B stories" or secondary character dialogue. It's being framed as a productivity booster for a notoriously grueling process. But again, the showrunner and human writers are the arbiters of quality and consistency. The AI is a very fast, sometimes inspired, sometimes terrible junior writer. There was a case study from a mid-tier animation studio that used an AI to generate first-draft scripts for a slice-of-life web series. The head writer said it saved them a week of work per episode, but they had to rewrite about seventy percent of it. The AI was great at generating mundane dialogue, but fell apart whenever emotional nuance or plot progression was required.
This all points to a near-future where AI is a powerful collaborator, but not a sole creator, for these complex artifacts. The hype outruns reality when people claim an AI "wrote" something amazing, and you discover a team of humans spent months editing, rewriting, and curating the output. The reality is more mundane, but still potentially transformative for creative industries.
And we should address the elephant in the room. What about the next generation of models? We're talking about current systems, mostly large language models. What if we had a model trained not just on text, but on a multimodal understanding of the world — on film, music, human interactions, emotional cadences? Could that bridge the gap?
It might narrow it. It could get better at mimicking emotional truth because it's seen it performed. It might generate more coherent long narratives because it's internalized story arcs from thousands of films. But I still come back to intent. Unless we're talking about a fundamentally different kind of AI, one with desires, tastes, and a point of view it wants to express, I think it will always be a tool. A brilliant tool, perhaps. But a tool wielded by a human with something to say. Even a multimodal model is just correlating more types of data. It doesn't want to tell a story about loss because it experienced loss; it just knows that certain musical cues, facial expressions, and narrative beats correlate with human concepts of loss.
I tend to agree. The technical trajectory is toward better coherence, better mimicry. We'll see AI-generated scripts that are harder to distinguish from mediocre human-written ones. We'll see AI-assisted novels that become bestsellers because the human collaborator is a savvy marketer and a decent editor. We'll see conlangers use AI to explore design spaces they couldn't manually. But the surprising achievements so far haven't been the standalone outputs; they've been the novel ways humans have integrated these tools into their creative process, sometimes producing work they couldn't have alone.
That's a key point. The surprise isn't "AI wrote a sonnet." The surprise is "this poet used AI to break through writer's block and produced a sonnet cycle they're proud of." The value is in the human-AI interaction loop.
Let's get concrete with examples of that loop. For the book category, I read about an author, a historian, who used Claude to help write a historical fiction novel. She had the research, the plot, the characters. But she struggled with writing vivid, period-appropriate dialogue. She'd feed the model her scene descriptions and character profiles, and ask for ten variations of a key conversation. She wouldn't use any of them verbatim, but seeing the different phrasings, the different emotional angles the model proposed, would unlock her own writing. She said it was like having a brainstorming partner who never got tired.
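Mechanically, that loop is simple. Here's a minimal sketch using the Anthropic Python SDK; the model name is a placeholder, and the scene materials are stand-ins for the author's own notes.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stand-ins for the author's real scene description and character profiles.
prompt = (
    "Scene: the heroine confronts her brother about the forged will.\n"
    "Character profiles: <paste profiles here>\n"
    "Period: 1890s Vienna.\n"
    "Write ten short variations of this conversation, each taking a "
    "different emotional angle. Keep the dialogue period-appropriate."
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use whatever model you have
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)

# Ten variations to react against -- raw material, not finished prose.
print(response.content[0].text)
```

The point is the shape of the workflow: the human supplies the scene, the stakes, and the standards, and the model supplies volume to react against.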
That's a fantastic use case. The AI isn't writing the book; it's acting as a catalyst for the human's creativity. The output isn't the AI's text; it's the human's text, inspired by the AI's suggestions. That's where the real potential lies, at least for now.
And for screenplays, there's a similar story. A screenwriter friend of mine — not a Luddite, but not an early adopter either — was stuck on a second-act turning point. He described the problem to me: he needed his protagonist to make a difficult choice, but every option he came up with felt cliché. I suggested he throw the problem at an AI. He was skeptical, but he tried it. He gave the model the character backstory, the situation, and asked for ten different choice scenarios, including morally ambiguous ones. The AI's suggestions were, he said, mostly ridiculous. But one had a sliver of an interesting idea — a choice that wasn't about good versus evil, but about loyalty versus truth. That sliver was enough for him to build a completely original, compelling scene. The AI didn't write the scene; it provided the grain of sand that irritated the oyster into making a pearl.
I love that. It’s a great illustration of the “grain of sand” principle. The AI’s value is often in its wrongness, its randomness, which can jolt a human out of a creative rut. But that requires a human with the taste to recognize the one useful idea in nine terrible ones.
Which is a skill in itself. So the assessment for each of Daniel's questions, in summary: Technically possible to generate the form? Yes, across the board. Has anyone pulled off a good, standalone artifact? No, not without massive, defining human intervention. The gap is one of coherent, intentional design, sustained narrative or linguistic architecture, and that ineffable quality of having something to express. The surprising thing isn't the AI's solo work; it's the new creative workflows it enables.
And we should note that this is a snapshot in time, April twenty twenty-six. These systems are improving at a staggering rate. The coherence windows are lengthening. The ability to follow complex, multi-part instructions is getting better. What seems impossible today — an AI holding a novel's plot together for three hundred pages — might be technically feasible in a few years. But "feasible" and "good" are still different. The judgment of "good," of "worth reading," is a human cultural judgment. An AI can learn to predict what humans call good, but that's not the same as creating from a place of authentic expression.
Which leads to a deeper, almost philosophical question Daniel is hinting at. If an AI eventually produces a novel that wins a Pulitzer, or a screenplay that wins an Oscar, and it did so with minimal human editing, would we have to redefine what we mean by creativity? Or would we just decide that the programmers and the prompt engineers are the real artists, and the AI is their brush?
That's the debate that's coming. My personal take, for what it's worth, is that art requires consciousness. It requires an experience of the world that is then filtered through a subjective lens and expressed. An AI has no subjective experience. It can simulate one with incredible fidelity, but simulation isn't the real thing. So even the "perfect" AI novel would be a masterful forgery of art, not art itself. It would be a technical marvel, worthy of study and admiration for its engineering, but it would be hollow at the center.
A compelling forgery can still move people, though. People cry at movies knowing full well they're watching actors pretend. The emotional response is real, even if the stimulus is manufactured.
That's true. But the manufacture in that case is by other humans — the writer, the director, the actors — who are drawing on their own lived experiences to create that simulated reality. The simulation is rooted in truth. An AI's simulation is rooted in other simulations. It's a copy of a copy. At some point, the degradation might be imperceptible to us, but I think on a fundamental level, it's there. There's a fun, slightly eerie fact here: researchers have found that AI-generated emotional arcs in stories tend to track the most common, statistically average arcs in their training data remarkably closely. They can't create a truly idiosyncratic, personal emotional journey; they can only replicate the aggregate of millions of existing ones. So an AI might write a competent "hero's journey," but it could never write The Catcher in the Rye, which is a deeply personal, meandering, and rule-breaking emotional journey.
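You can see the arithmetic of that averaging in a toy example. The arcs below are invented shapes, not data from the research mentioned; the sketch just shows how a lone idiosyncratic arc gets washed out of a mean.

```python
import numpy as np

# Toy illustration of why averaging erases outliers. All shapes invented.
t = np.linspace(0.0, 1.0, 9)

# Four conventional "rise" arcs (rags-to-riches variants) ...
conventional = [t, np.sqrt(t), t**2, np.sin(np.pi * t / 2)]
# ... and one idiosyncratic, meandering arc that fades out.
idiosyncratic = np.cos(3 * np.pi * t) * (1 - t)

mean_arc = np.mean(conventional + [idiosyncratic], axis=0)

# The outlier carries only a fifth of the weight, so its swings are
# damped: what comes out is essentially yet another smooth rise.
print(np.round(idiosyncratic, 2))
print(np.round(mean_arc, 2))
```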
That’s a brilliant point. It’s the difference between an average and an outlier. Human genius often lies in being an outlier, in breaking the common pattern in a way that somehow feels right. AI is constitutionally biased toward the average. So our conservative assessment, if you will, is one of profound skepticism about AI as a creator, but optimism about AI as a collaborator. The hype is in the headlines claiming AI has achieved solo creativity. The reality is in the quiet, productive partnerships forming between artists and these tools.
That's a perfect summary. And to bring it back to Daniel's specific prompts: For a Tolkien-level language, the hype is nowhere near the reality. That's a decades-long project for a dedicated human or community; AI can assist with dictionaries and grammar checks, but it can't provide the foundational mythopoetic vision. For a feature-length screenplay, the hype is rampant in Silicon Valley, but the reality is that every studio and production company that's seriously tested these tools uses them for brainstorming and first drafts, not final products. For a novel worth reading, the hype is all over Amazon with thousands of low-quality, AI-generated books, but the reality is that the few readable ones are the result of a skilled human author using AI as a powerful, if sometimes erratic, writing assistant.
So what can a listener take from this? If you're a creator, don't fear being replaced just yet. Instead, experiment. Learn how to use these tools to augment your own process. Understand their strengths — generating options, breaking blocks, mimicking styles — and their profound weaknesses — lack of intent, poor long-range coherence, no authentic voice. The tool is what you make of it.
And if you're a consumer, be discerning. The market is about to be flooded with AI-generated content. Look for the human hand, the guiding intelligence. Support artists who are transparent about their use of AI as a tool, and be wary of those who try to pass off pure AI output as a human creative triumph. The value is in the partnership, not the automation.
It makes me think about the future of criticism, actually. We might need a new critical vocabulary to discuss AI-assisted or AI-generated art. Terms like "coherence," "derivativeness," and "authenticity" will take on new shades of meaning. Is a perfectly coherent but deeply derivative AI novel better or worse than a flawed but strikingly original human one? Critics will have to grapple with that.
And that's a conversation for another episode. For now, I think we've given Daniel a pretty thorough assessment. Technically possible? Yes, in form. Pulled off? Not without the human being the real engine. The gap is the one between statistical prediction and purposeful creation. And that gap, for all our advances, remains vast.
It's a chasm filled with everything that makes us human — our memories, our desires, our irrational loves and hates, our need to make sense of the world and share that sense with others. You can't train that on a corpus. At least, not yet. And maybe that’s okay. Maybe the goal isn’t to build a machine that doesn’t need us, but to build tools that help us express what’s uniquely ours more effectively.
I’ll drink to that. And on that slightly less ominous, more hopeful note, I think we've covered it. Thanks as always to our producer, Hilbert Flumingtop, for keeping the microphones working and the leaves stocked. And thanks to Modal, our sponsor, whose serverless GPU platform handles all the heavy lifting for the pipeline that makes this show possible. If you're building something that needs serious compute without the hassle, check them out.
This has been My Weird Prompts. If you enjoyed this, the best thing you can do is leave a review wherever you listen. It helps other weird people find the show. We'll see you next time.
See you then.