#1308: The AI Attribution Paradox: Normalizing the Ghostwriter

Why do 70% of developers hide their AI use? Explore the "competence stigma" and the emerging rules for radical transparency in an AI-driven world.

Episode Details
Duration: 22:33
Pipeline: V5
TTS Engine: chatterbox-regular
AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The rapid integration of artificial intelligence into software development has created a "glitch in the matrix" for open-source contributions. While developers are now capable of producing thousands of lines of high-quality code in a single afternoon, a significant gap remains between the use of these tools and the willingness to admit to them. This tension is known as the AI Attribution Paradox: the more we rely on AI to do the heavy lifting, the less we seem to want our names next to it.

The Competence Stigma

Recent research highlights a staggering disconnect in the professional world. While nearly 97% of creative and technical professionals report massive time savings using AI, roughly 70% feel a social stigma if their colleagues find out. This "competence stigma" stems from the fear that using AI suggests a lack of personal skill or effort. In the open-source world, it manifests as a transparency gap: while AI is detectable in roughly 95% of code commits, only about 30% of those commits actually credit the AI as a co-author.

This creates a double bind for the modern worker. We want to be seen as high-performers who deliver results at lightning speed, but we don’t want the results to be attributed to a "button press." This shift is redefining what it means to be competent, moving the goalposts from being a solo creator to becoming an effective orchestrator of complex, AI-augmented workflows.

Silent vs. Vocal Tools

The tools themselves play a major role in how this etiquette develops. Some platforms, like GitHub Copilot, are designed to be "silent" partners. They provide the code, the developer hits tab, and the human’s name goes on the commit with no automated trail. In contrast, tools like Claude Code are beginning to insist on transparency by automatically adding themselves as co-authors in the metadata.
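For illustration, the kind of self-attribution Claude Code adds is a standard Git co-author trailer in the commit message. The exact wording varies by tool and version, so treat this as a sketch rather than the canonical format:

```text
Refactor payment validation into a separate module

Co-Authored-By: Claude <noreply@anthropic.com>
```

Because the trailer lives in the commit message itself, it travels with the repository history and can later be found and counted by ordinary tooling.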

Repositories using tools that insist on attribution show significantly higher transparency rates; in Claude-assisted repositories, roughly 60 to 77 percent of commits explicitly credit the AI. This suggests that when the burden of disclosure is shifted from the individual to the tool, developers are much more likely to accept AI as a visible part of the process.
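Transparency rates like these can be measured mechanically, because co-author trailers follow a predictable shape. The sketch below estimates an attribution rate from a list of commit messages; the tool names it matches are illustrative assumptions, not an official list, and in practice the messages would come from something like `git log --format="%B"`.

```python
import re

# Match a "Co-Authored-By" trailer line naming a well-known AI tool.
# The set of names here is an illustrative assumption, not a standard.
AI_TRAILER = re.compile(
    r"^co-authored-by:.*\b(claude|copilot|chatgpt|gemini)\b",
    re.IGNORECASE | re.MULTILINE,
)

def attribution_rate(commit_messages: list[str]) -> float:
    """Fraction of commit messages carrying an AI co-author trailer."""
    if not commit_messages:
        return 0.0
    credited = sum(1 for msg in commit_messages if AI_TRAILER.search(msg))
    return credited / len(commit_messages)
```

The function takes plain strings rather than calling `git` directly, so it can be tested in isolation and pointed at any commit history export.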

Emerging Standards and Legal Mandates

As the "wild west" era of AI coding ends, formal standards are emerging to track the provenance of code. The AIMark project is one such initiative, proposing a markdown-based convention to tag specific blocks of code as AI-generated, human-edited, or debugged by a model. This creates a machine-readable audit trail that allows for a granular understanding of who—or what—did what.
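To make the idea concrete, block-level provenance tagging could look something like the following. This is a hypothetical illustration only; the actual AIMark draft may use different markers and field names:

```markdown
<!-- ai-attribution: model=claude-sonnet role=generated task=drafting -->
def parse_config(path):
    ...
<!-- /ai-attribution -->

<!-- ai-attribution: role=human-edited task=review -->
Error handling in this block was rewritten by the maintainer.
<!-- /ai-attribution -->
```

The point of such a convention is that the tags are machine-readable: an auditor or build tool can extract them without parsing the surrounding content.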

This isn't just a matter of etiquette; it is becoming a legal necessity. The European Union AI Act and new state-level laws in the U.S. are beginning to mandate transparency for AI-generated content. For developers in regulated industries, providing metadata about AI involvement will soon be a compliance requirement rather than a personal choice.

Redefining Authorship

The ultimate goal of these shifting norms is to move from "AI shaming" to "augmentation." Rather than viewing AI as a replacement for human effort, the industry is beginning to see it as an expansion of human intent. By adopting terms like "augmented by AI" and maintaining dedicated attribution sections in project files, creators can maintain accountability and trust without devaluing their own expertise. Transparency, it turns out, is a long-term social strategy that builds peer trust and ensures the reproducibility of work in an increasingly automated world.
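As a purely illustrative sketch, a dedicated attribution section in a project README might look like this (the section name and fields are one plausible layout, not a formal standard):

```markdown
## AI Attribution

- Claude: code generation, test scaffolding
- Gemini: documentation drafts
- Human role: architecture, review, and verification of all output
- Last updated: 2026-03
```

A section like this tells readers exactly which models did which tasks, which is the "thought-out stack" signal the episode describes.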


Episode #1308: The AI Attribution Paradox: Normalizing the Ghostwriter

Daniel's Prompt
Daniel
Custom topic: The emerging protocol and etiquette of AI attribution. Daniel is personally a huge believer in and fan of radical transparency, at least in his open source work, and he commonly includes 'cowritten by AI' attributions.

Context (as of 2026-03-16): a February 2026 arXiv paper, "The AI Attribution Paradox: Transparency as Social Strategy in Open-Source Software Development"
Corn
I was looking at a commit history the other day for a fairly popular open source project, and I noticed something that felt like a glitch in the matrix. One developer had written five thousand lines of high quality code in a single afternoon. Now, we all know that is technically impossible for a human being unless they are some kind of caffeinated cyborg, yet the commit message just said, quote, minor refactoring, end quote. It is the classic ghostwriter dilemma. Everyone is using the tech, but almost nobody wants to put their name next to the machine that actually did the heavy lifting. Today's prompt from Daniel is about this exact tension, specifically the emerging protocol and etiquette of AI attribution. Daniel is a huge believer in radical transparency, especially in his open source work, and he often labels his repositories as co written by AI. But he is finding that the rules are shifting under our feet. How do we normalize this without making it look like we are just being lazy or incompetent?
Herman
It is such a timely question because we are seeing a massive divergence between what people are doing and what they are willing to admit to. I am Herman Poppleberry, and I have been diving into some of the research that came out just last month on this. There is a February twenty twenty-six arXiv paper titled The AI Attribution Paradox, and it paints a fascinating picture of the open source world. In early twenty twenty-four, explicit AI attribution in commits was basically non existent. It was near zero. But by late twenty twenty-five, that number jumped to about forty percent. That is a huge shift in social strategy, but it highlights a massive gap. Even with that jump, while AI tools are clearly detectable in about ninety-five percent of commits, only about thirty percent of those commits actually credit the AI.
Corn
That gap is wild. Ninety-five percent detection versus thirty percent credit? That means sixty-five percent of people are effectively hiding the ghost in the machine. Why is there such a hang up? If the code works and the productivity is through the roof, why are we still treating it like a dirty little secret? We have talked before in episode eleven twenty-one about the contributor paradox and how we value open source contributions, but this feels like it is adding a whole new layer of anxiety to the mix.
Herman
It really is. There is a deep seated fear called the competence stigma. Anthropic actually did some extensive research on this in twenty twenty-five that really quantifies the problem. They found that sixty-nine percent of workers report feeling a social stigma around using AI tools at work. But it gets even more specific for the creative and technical fields. Seventy percent of creative professionals feel stigmatized if their colleagues find out they use AI, even though an incredible ninety-seven percent of those same people report massive time savings and sixty-eight percent report better quality output. It is this double bind where you want to be seen as the person who can deliver results fast, but you do not want people to think the results came from a button press rather than your own brain.
Corn
It is like the early days of spell check or even calculators. There was a time when using a calculator for math was considered cheating because you were not doing the mental arithmetic. Now, if you do not use a tool to verify your work, you are considered irresponsible. But AI feels different because it is not just checking our work; it is generating the substance. When Daniel says he wants to be radically transparent, he is fighting against that instinct to take all the credit. But I wonder if the tools themselves are making that harder or easier. You look at something like GitHub Copilot, and it is completely silent. It does the work, you hit tab, and your name goes on the commit. There is no automated trail.
Herman
And that is a deliberate design choice by some companies. Contrast that with Claude Code, which actually has a very different philosophy. Claude Code automatically adds itself as a co-author in the commits it generates. It is a built in attribution mechanism. This creates two very different sets of norms in the same developer community. If you use one tool, you are encouraged to be a ghostwriter. If you use the other, the tool basically insists on a co-author credit. It forces the developer to decide whether they are going to delete that attribution or let it stand as a mark of transparency. According to that February twenty twenty-six paper, Claude-assisted repositories actually show the highest transparency rates, with sixty to seventy-seven percent of commits explicitly crediting AI.
Corn
I like the idea of the tool insisting on it. It takes the pressure off the human to make a moral stand every single time. But then we hit the granularity problem. If I use Claude to write a single unit test, does that deserve a co-author credit? If it refactors a hundred lines of boilerplate but I designed the entire architecture, is co-written by AI an accurate description? Daniel mentioned he is often uncertain about the wording. Should it be assisted by AI, generated by AI, or co-authored? We touched on the evolving definition of open source back in episode six seventy-eight, but the threshold for what constitutes authorship is still so fuzzy.
Herman
That is where the AIMark project comes in. If you look at the lost-books slash AIMark-AI-attribution repository on GitHub, they are trying to solve that exact problem with a technical standard. It is a draft proposal for inline attribution. Instead of just a vague comment at the top of a file, it uses a markdown based convention to tag specific blocks of text or code. You could have a block tagged as AI-generated and another block tagged as human-edited. It is an attempt to create a machine readable audit trail of who did what. It allows you to specify the model, the percentage of content, and the specific tasks like drafting versus debugging.
Corn
That sounds like a dream for auditors but a bit of a nightmare for a developer who just wants to get things done. I mean, are we really going to start tagging every line of code with its provenance? We already have git blame for that, but even git blame is becoming less useful when a single human user is essentially a proxy for three different large language models. But maybe the nightmare is necessary. If we do not have standards, we just have chaos and hidden use.
Herman
Well, the push for this is not just coming from a place of developer etiquette. It is becoming a legal requirement. The European Union AI Act has transparency provisions that are entering broad enforcement this year, in twenty twenty-six. It requires labeling AI-generated content of public interest in machine readable formats. So, if you are working on software that has a public impact, or if you are in a regulated industry, you might not have a choice. You are going to have to provide that metadata. We are also seeing state-level movement here in the US. California passed a law requiring AI providers to publish summaries of training data sources, and New York now mandates disclosure in commercial ads using AI-generated synthetic performers.
Corn
This reminds me of the conversation we had about the shift in how we value contributions. If we start mandating AI attribution, does that further devalue the human contribution? If I am a junior dev trying to build a portfolio and forty percent of my commits say assisted by AI, does a hiring manager look at that and think I am less skilled than the person who just lied about it? A study by KPMG and the University of Melbourne found that fifty-seven percent of employees globally admit they have hidden their AI tool use at work. That is more than half the workforce lying to their bosses.
Herman
That is the risk, and it is why the February twenty twenty-six paper called it a paradox. Transparency is a social strategy. The researchers found that developers who were more transparent actually saw an increase in trust from their peers in the long run. It signals that you are not just a script kiddie pasting code you do not understand, but a manager of a complex workflow who is confident enough to show their work. It shifts the definition of competence from being a solo coder to being an effective orchestrator. However, a twenty twenty-five paper titled Which Contributions Deserve Credit found that knowledge workers consistently assigned AI less credit than an equivalent human contributor for the same quality of contribution. So the stigma is real and it is measurable.
Corn
I think that is the key psychological shift we need to make. We have to stop seeing AI as a replacement for effort and start seeing it as an expansion of intent. If I spend three hours prompting and refining a piece of code, I have put in the work. The work just looks different than it did five years ago. But academia seems to be the place where this is getting really messy. I saw a paper from twenty twenty-four that coined the term AI shaming. Apparently, researchers are being ostracized for even using AI for grammar correction.
Herman
Academia is definitely the most restrictive frontier right now. You have this massive divide between institutions. The International Committee of Medical Journal Editors, or the I-C-M-J-E, updated their framework in twenty twenty-five to require explicit disclosure of the AI's role. They do not allow AI to be an author, but they want you to describe exactly what it did. On the flip side, journals like Science, published by the A-A-A-S, have been incredibly restrictive, basically treating AI generated text as scientific misconduct and banning it entirely. It is a total ban versus radical transparency.
Corn
A total ban seems like a losing battle. It is like banning the use of the internet for research. Eventually, the productivity gap between those who use it and those who do not becomes so wide that the ban just becomes a tax on honesty. If you are a scientist and your peers are using AI to synthesize data and write drafts while you are doing it all by hand, you are going to get left behind. So you use the tool, and then you just do not tell anyone. That is a recipe for a reproducibility crisis. If we cannot see the prompts or the models used, how can we ever hope to replicate the results?
Herman
And that is why Daniel's instinct toward openness is actually the more conservative and responsible path. If we do not have a standard for attribution, we lose the ability to audit the work. If a model has a specific bias or makes a specific type of error, and we do not know that model was used to write a piece of medical software or a legal brief, we cannot fix it. Attribution is about accountability as much as it is about credit. We are seeing grassroots tooling emerging to help with this, like the git-ai project. It is a Git extension specifically designed for tracking AI-generated code in repositories. It automates the audit trail so the developer doesn't have to manually tag everything.
Corn
Let's talk about the mechanics then. If someone listening wants to follow Daniel's lead and be transparent, what does the etiquette actually look like in twenty twenty-six? Is there a middle ground between a vague co-written by AI and a line-by-line metadata audit? Daniel is looking for the right wording.
Herman
There is an emerging norm around the word augmented. Instead of saying this was written by AI, which implies the human was passive, the suggestion is to say this was augmented by AI. It is a subtle shift in language, but it places the human in the driver's seat. You are the author; the AI is the augment. Another practical takeaway is to use a dedicated AI attribution section in your readme files. This is becoming a standard in open source. Just a simple section that lists the models used, the specific tasks they performed, and the date. It tells the user, look, I used Claude for the logic and Gemini for the documentation. It shows you have a thought out stack rather than just a random collection of prompts.
Corn
I like that. Augmented output. It sounds more professional and less like you just outsourced your brain. It also covers the spectrum of use cases. If I use a model to help me brainstorm an architecture, that is augmentation. If it writes the boilerplate, that is also augmentation. It acknowledges the partnership without surrendering the ownership. But what about the feeling of being lazy? That is what Daniel mentioned, that nagging feeling that admitting AI help is admitting you are not doing the work.
Herman
That is the competence stigma we talked about. But think about it this way: is it lazy to use a crane to lift a steel beam? No, it would be stupid to try and lift it with your back. If the beam needs to go up, you use the crane. The skill is in knowing where the beam goes and how to operate the crane safely. The intellectual labor has shifted from the manual construction to the architectural design. The laziness argument only works if you assume that the value of work is tied to the amount of suffering or time spent. But in a professional setting, the value is in the outcome and the reliability.
Corn
But we have to admit, there are people who are just using it to be lazy. They are doing the one-shot prompt, not checking the output, and shipping it. That is where the stigma comes from. It is the low quality AI slop that gives the high quality AI augmentation a bad name. So maybe the protocol should include a statement of verification. Something like, AI-assisted, human-verified.
Herman
That is actually part of the twenty twenty-five I-C-M-J-E guidelines. They emphasize that humans are ultimately responsible for the content, regardless of how it was generated. So a transparent attribution should always imply, or explicitly state, that the human has audited the work. If you say, I used an AI to write this and I stand by every word, that is a very different statement than saying, the AI wrote this and I haven't really looked at it. It is about the level of oversight.
Corn
It is funny, we have had ghostwriters for centuries in politics and celebrity memoirs, and we never had this level of hand wringing. We just accepted that the person on the cover didn't type every word. But with AI, we feel like we are losing something fundamental about human creativity. Maybe it is the speed. A ghostwriter still takes weeks or months. An AI takes seconds. That speed makes it feel like magic, and magic feels like cheating.
Herman
There is also the issue of the training data. When you use a human ghostwriter, you are paying for their expertise, which they gained through their own life. When you use an AI, you are using a model that was trained on the collective output of millions of other humans, often without their consent. That adds an ethical layer to the attribution question. By being transparent about the AI use, you are also acknowledging the broad human heritage that the model is drawing from. It is a way of being honest about the supply chain of your ideas.
Corn
That is a deep way to look at it. It is not just about you and the machine; it is about you, the machine, and the entire corpus of human knowledge. Which brings us to the future of policy. You mentioned the EU AI Act, but what is happening on this side of the pond? I know the F-C-C has been making some noise about this lately.
Herman
They have. The F-C-C is opening a proceeding this year to consider a federal standard for AI disclosure. They are worried about a patchwork of state laws like the ones in California and New York. If the F-C-C creates a federal standard, it could preempt all of that. We might end up with a universal icon or a standard bit of metadata that has to be attached to any AI-influenced content. It would be a standardized way to signal the provenance of the work across all platforms.
Corn
A universal icon would be interesting. Like a nutrition label for your code or your articles. High AI content, low human oversight. Or a hundred percent organic human thoughts. I can see the marketing already. But honestly, I think the grassroots stuff is more important. The norms are set by people like Daniel who are willing to be the first ones to put the co-written by AI tag on their work. They are the ones who make it safe for everyone else to be honest.
Herman
I agree. The technology is moving so much faster than the policy. If we wait for the F-C-C or the EU to tell us how to be honest, we will be waiting forever. The best thing we can do right now is to adopt a transparency first policy. If you used it, say you used it. Define the role it played. Did it help you research? Did it help you draft? Did it help you debug? Being specific actually makes you look more like an expert because it shows you know exactly how to use the tool for specific tasks. It moves you from being a passenger to being the driver.
Corn
It also helps other people learn. If I see a project I admire and the author says they used a specific model with a specific prompting technique to handle the data visualization, that is a huge learning opportunity for me. It is the ultimate form of open source. You are not just opening the source code; you are opening the source process. Radical transparency is the logical conclusion of the open source movement. It is about sharing the how, not just the what.
Herman
That is a brilliant point. If we hide the AI, we are hiding the most important part of the modern development process. We are keeping the best tools for ourselves while pretending we are still doing it the old way. That is not just dishonest; it is counterproductive to the community.
Corn
So, for the listeners who are sitting there with a half finished project that they've been using Claude or Gemini to help with, what is the first step? How do they start this conversation with their team or their community?
Herman
I would say, start by adding a small acknowledgment in your next commit or your next project readme. Use the word augmented if you are worried about the stigma. Something like, this project was developed with the assistance of AI tools for testing and documentation. It is a low stakes way to start normalizing the behavior. And if anyone asks, be ready to talk about the benefits. Talk about the time you saved and the bugs you caught. Turn it into a conversation about quality and efficiency rather than a confession of laziness.
Corn
And maybe check out those tools we mentioned. The git-ai extension and the AIMark project. Even if you do not use them, looking at how they are structured can give you a good vocabulary for how to describe your own work. It is about building that muscle of disclosure until it feels as natural as adding a license file to your repo.
Herman
What I find really interesting is that we are essentially witnessing the birth of a new kind of literacy. In the past, being literate meant knowing how to write. In the future, being literate might mean knowing how to collaborate with a machine and how to clearly communicate the nature of that collaboration. The people who are transparent about it now are the ones who are going to be ahead of the curve when these norms finally solidify.
Corn
It is like the difference between someone who can drive a car and someone who can explain how the internal combustion engine works. One is a user, the other is an expert. By being transparent, you are signaling that you are the expert who understands the engine. You are not just a passenger.
Herman
I think that is the perfect way to frame it. You are not a passenger in your own career. You are the driver, and the AI is just a very advanced navigation system and a powerful engine. You still have to decide where to go and how to handle the corners. Transparency isn't about admitting laziness; it's about documenting the provenance of the work and taking responsibility for the final output.
Corn
This has been a really enlightening look at something that I think a lot of us are feeling but haven't quite found the words for. It is that tension between wanting to be the best and wanting to be the most honest version of ourselves. It turns out, in twenty twenty-six, those two things might finally be the same thing.
Herman
It is a brave new world for attribution. And honestly, I think it is going to make the work better. When we stop hiding how we work, we can start improving how we work together.
Corn
Well said. I think we have covered a lot of ground here, from the social stigma to the technical standards to the legal landscape. The takeaway is clear: transparency is not a sign of weakness; it is a mark of professional maturity.
Herman
It really is. And it is something we should all be striving for, whether we are working on a small open source project or a major commercial application.
Corn
Alright, I think that is a wrap on this one. It is a lot to think about next time you are staring at a blinking cursor and a chat window.
Herman
I will definitely be thinking about how I label my next batch of research notes.
Corn
As always, thanks to our producer Hilbert Flumingtop for keeping the gears turning behind the scenes. And a big thanks to Modal for providing the GPU credits that power this show. We literally couldn't do this without them.
Herman
This has been My Weird Prompts. We really appreciate you guys tuning in and exploring these weird corners of the tech world with us.
Corn
If you found this episode helpful, or if you have your own thoughts on how to handle AI attribution, we would love to hear from you. You can find us at myweirdprompts dot com for the RSS feed and all the different ways to subscribe.
Herman
Or search for My Weird Prompts on Telegram to get notified the second a new episode drops. We are always looking for new perspectives on these topics.
Corn
Until next time, I'm Corn.
Herman
And I'm Herman Poppleberry. Thanks for listening.
Corn
See ya.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.