Herman, I was watching you work earlier and it looked like you were trying to perform surgery on two different laptops at the same time. You had one screen for the public version of your code and another for the private deployment scripts, and you looked about three seconds away from a very expensive mistake.
It is a high-wire act, Corn. My name is Herman Poppleberry, and I am currently a victim of what we call the dual-repo tax. It is that mental and technical overhead you pay when you try to be a good open-source citizen while also keeping your actual production infrastructure a secret. It is exhausting, it is error-prone, and frankly, it is killing my velocity.
Well, today's prompt from Daniel is about exactly that struggle. He is a huge advocate for open-sourcing code and ideas, but he is tired of the cumbersome process of maintaining parallel repositories. He wants to know how to manage a single repository while keeping local variations and sensitive deployment logic private, and he specifically said he wants to go beyond just sticking things in an environment file.
Daniel is hitting on a massive pain point in the industry right now. The old way of doing things, where you just have a dot env file and hope you remember to add it to your git-ignore, is basically the digital equivalent of leaving your house keys under the welcome mat. It works until it really, really doesn't. We are talking about a fundamental tension between the transparency required for open-source collaboration and the strict privacy required for secure deployment.
And based on the numbers I have been seeing lately, it is not working for a lot of people. I saw a report from GitGuardian that came out earlier this month, the State of Secrets Sprawl twenty twenty-six report, and the numbers are staggering. They found that twelve point eight million secrets were leaked in public repositories just in twenty twenty-five. That is a twenty-two percent increase from the year before. Even with all our modern tools, we are leaking more than ever.
That is exactly why the dual-repo approach became so popular. People are terrified of being part of that twelve million. They think if they just keep the sensitive stuff in a separate, private repository, they are safe. But then you run into the split-brain problem. You update a feature in the public repo, but you forget to update the deployment logic in the private one, and suddenly your production environment is running code from three weeks ago. It creates this massive "merge debt" where the two versions of your project start to drift apart until they are basically different apps.
It is also just annoying for contributors. If I want to help Daniel with one of his projects, but half the logic for how the thing actually runs is hidden away in some private cave I cannot see, it makes it much harder for me to understand the full picture. It feels like I am only seeing the stage of a theater production, but all the lighting rigs and pulley systems are invisible. So, how do we fix this without exposing the crown jewels? How do we move toward a Single Source of Truth?
The shift we are seeing in twenty twenty-six is toward what I call the secretless architecture, or at least encrypted-in-git workflows. We need to move away from the "band-aid" approach of dot env files and toward a philosophy where the repository itself is secure by design. The first big tool everyone should look at is S-O-P-S, short for Secrets OPerationS. It started out at Mozilla and now lives under the C-N-C-F.
I have heard you mention S-O-P-S before. It sounds like something you would use to clean up a spill in the kitchen, but I assume it is more technical than that.
Much more technical. S-O-P-S allows you to encrypt specific parts of a file, like a Y-A-M-L or J-S-O-N file, directly inside your repository. The brilliant part is that it only encrypts the values, not the keys. So, if you are looking at a configuration file in a public repo, you can see that there is a field for a database password, but the actual password looks like a giant string of gibberish.
So I can see the structure of the config, which helps me understand how the app works, but I cannot actually log into Daniel's database and start deleting rows.
Precisely. You use a key management service like A-W-S K-M-S or Google Cloud K-M-S, or even just P-G-P keys, to handle the encryption. When the code runs in a C-I C-D pipeline, the runner has the permission to decrypt that file on the fly. It stays encrypted at rest in the repository, so even if the whole world can see the repo, they cannot see the secrets. It follows the Principle of Least Privilege: the secret only exists in plaintext at the exact moment it is needed by the application.
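A minimal sketch of the S-O-P-S setup Herman is describing. The KMS key ARN and the secrets path are placeholders, not anything from Daniel's actual project; the encrypt and decrypt commands are shown as comments because they need a real key to run.

```shell
# Hypothetical .sops.yaml at the repo root: tells sops which key
# encrypts which files. The ARN and path regex are placeholders.
cat > .sops.yaml <<'EOF'
creation_rules:
  - path_regex: secrets/.*\.yaml$
    kms: arn:aws:kms:us-east-1:111122223333:key/example-key-id
EOF

# Encrypt values in place (keys stay readable, values become ciphertext):
#   sops --encrypt --in-place secrets/prod.yaml
# Decrypt on the fly in CI, without writing plaintext to disk:
#   sops --decrypt secrets/prod.yaml | kubectl apply -f -
```

The committed file is safe to publish: anyone can read the field names, but the values only come back with access to that KMS key.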
Now, I know some people also use something called git-crypt. How does that compare to S-O-P-S? Is it just a matter of preference?
They solve similar problems but in different ways. Git-crypt is more transparent—it encrypts entire files automatically when you commit them. It is great for things like binary files or entire configuration folders. However, S-O-P-S is generally considered the gold standard for cloud-native development because of that partial encryption feature. Being able to see the keys while the values are hidden is a huge win for readability and debugging. If I am a contributor and I see a Y-A-M-L file where everything is just a block of encrypted text, I have no idea what that file does. With S-O-P-S, I can see the schema.
That sounds a lot more elegant than having a private fork that you have to constantly rebase. But what about local variations? Sometimes I just want my local dev environment to point to a different port or use a different theme than the production one. Do I have to encrypt my local preferences too?
You could, but that is where a tool like direnv comes in. I know Daniel is big into automation, so he probably loves this one. Direnv is an extension for your shell that loads and unloads environment variables depending on your current directory. You create a file called dot env-r-c, and as soon as you C-D into that folder, those variables are active. When you leave the folder, they vanish.
So it prevents "shell pollution," where you have fifty different variables from five different projects all clashing in your terminal?
And the clever way to do it for an open-source project is to provide a template. You check in a file called config dot Y-A-M-L dot example or dot env-r-c dot example. New contributors just copy that, fill in their local details, and they are up and running without ever touching the core repository logic. It keeps the repository clean while allowing for infinite local overrides. You are essentially saying, "Here is the shape of the data I need, now go provide your own version of it."
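The template pattern Herman just outlined, as a runnable sketch. The variable names and values are invented for illustration; direnv itself is optional here, since a plain `.envrc` is just shell that can also be sourced by hand.

```shell
# Checked into the repo: a template with placeholder values only.
cat > .envrc.example <<'EOF'
export DATABASE_URL="postgres://localhost:5432/dev_db"
export APP_PORT=3000
export APP_THEME="default"
EOF

# Each contributor copies it and fills in local details;
# the real .envrc stays out of version control.
cp .envrc.example .envrc
echo '.envrc' >> .gitignore

# With direnv installed, `cd`-ing into the folder loads this
# automatically after `direnv allow`. Without it, source by hand:
. ./.envrc
echo "$APP_PORT"   # prints 3000
```

The repository carries the shape of the configuration; every machine supplies its own values.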
I like that. It feels very sloth-friendly because it automates the context switching. I do not have to remember to set my variables every time I open a terminal. But let's talk about the heavy lifting. What if I have entire directories of deployment scripts or Terraform files that are just too specific to my private infrastructure to be public? Even if they are encrypted, maybe I just don't want that clutter in the public view.
This is where we get into some of the newer features in Git itself. In early twenty twenty-six, Git version two point forty-eight was released, and it included some massive optimizations for a feature called sparse-checkout.
Sparse-checkout sounds like what happens when I try to go grocery shopping and I only come home with a single bag of chips.
It is actually a way to tell Git that you only want to see a subset of the files in a repository. Imagine Daniel has a single repository. Inside that repo, he has a folder called public-code and a folder called private-configs. With sparse-checkout, a public contributor can clone the repo but tell Git to only actually download and show the public-code folder. The private stuff is still in the history, but it is not cluttering up their local machine.
Wait, if it is still in the history, isn't that a security risk? If I am a public contributor, can't I just change my sparse-checkout settings and see the private stuff?
If the repo is public, yes. Sparse-checkout is more about managing complexity and local clutter. If you truly need the files to be invisible and inaccessible to the public, you have to combine it with the encryption methods we talked about earlier. But for the dual-repo problem, sparse-checkout is a godsend for the maintainer. You can have everything in one place, but your local working environment only shows you what you need to see.
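A self-contained toy version of the layout Herman describes, assuming Git 2.25 or newer for the `sparse-checkout` command. The repository and folder names are invented for the demo.

```shell
# Build a tiny "monorepo" with a public and a private folder.
git init -q monorepo
cd monorepo
git config user.email demo@example.com
git config user.name Demo
mkdir -p public-code private-configs
echo 'app'   > public-code/main.txt
echo 'infra' > private-configs/deploy.txt
git add .
git commit -qm "initial"
cd ..

# A contributor clones sparsely: the history comes along, but only
# the requested folder is materialized in the working tree.
git clone -q --sparse monorepo workdir
git -C workdir sparse-checkout set public-code

ls workdir    # public-code is checked out; private-configs is not
```

As Herman notes, this is a clutter-management tool, not an access control: the private folder is still reachable in the clone's history.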
So, we are moving toward a world where the repository is less of a static bucket of files and more of a dynamic interface. But I want to go back to the "private-by-default" idea. I have seen some projects use git-submodules or even symlinks to pull in private data. Is that still a viable strategy in twenty twenty-six?
It is, but it is often more trouble than it is worth. Git-submodules are notoriously finicky—they are like that one relative who always shows up late to Thanksgiving and forgets what they were supposed to bring. If you use a submodule for your private configs, you are essentially back to the dual-repo problem, just with a slightly more integrated UI. A better approach we are seeing now is the "Modular Config" pattern. You design your application to look for configuration in a specific hierarchy: first, look for a local uncommitted file, then look for an encrypted file in the repo, and finally fall back to defaults.
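The "Modular Config" lookup order can be sketched in a few lines of shell. The file names here are hypothetical, and the encrypted branch just echoes where a real setup would call `sops --decrypt`.

```shell
# Lookup hierarchy: 1) local uncommitted override, 2) encrypted file
# in the repo, 3) built-in defaults. File names are illustrative.
load_config() {
  if [ -f config.local.yaml ]; then
    echo "using local override: config.local.yaml"
  elif [ -f config.enc.yaml ]; then
    # a real setup would run: sops --decrypt config.enc.yaml
    echo "using encrypted repo config: config.enc.yaml"
  else
    echo "using built-in defaults"
  fi
}

load_config                               # nothing present: defaults
touch config.enc.yaml  && load_config     # encrypted file found
touch config.local.yaml && load_config    # local override wins
```

The application never cares which layer supplied the values, which is what lets public contributors, CI, and the maintainer all run the same code.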
That sounds like it integrates well with the whole Configuration as Code movement. If I am using Terraform or Pulumi for my infrastructure, how does this single-repo strategy change things?
It makes it much more powerful. When your infrastructure code lives in the same repo as your application code, you can use the same S-O-P-S encryption for your Terraform variables. You can have a folder called "infra" that contains all your deployment logic. Pair that with local-only git hooks and you can catch a plaintext Terraform state file or a sensitive provider key before it ever makes it into a commit.
I'm glad you brought up git hooks. I feel like they are the unsung heroes of the developer workflow.
They really are. Local-only git hooks are such an underrated part of this workflow. You can set up a pre-commit hook that runs a script to check for any unencrypted secrets or any files that shouldn't be there. And since the hook is local to your machine, it can be as aggressive as you want without bothering other contributors. It is like a little digital conscience. "Are you sure you want to do that, Corn? That looks like your private S-S-H key."
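A toy version of that digital conscience, set up in a throwaway repo. The grep pattern is deliberately crude; a real hook would delegate to a proper scanner.

```shell
# Demo repo with a local pre-commit hook. Hooks live in .git/hooks
# and are never pushed, so they can be as aggressive as you like.
git init -q hookdemo
cd hookdemo
git config user.email demo@example.com
git config user.name Demo
git commit -qm init --allow-empty

cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Block the commit if anything staged looks like a private key.
if git diff --cached | grep -q 'PRIVATE KEY'; then
  echo 'pre-commit: staged changes look like a private key; aborting.' >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit

# Stage a fake key and watch the hook object:
echo '-----BEGIN RSA PRIVATE KEY-----' > oops.pem
git add oops.pem
.git/hooks/pre-commit || echo "hook exit code: $?"   # hook exit code: 1
```

Because the hook only exists on your machine, it never slows down or annoys other contributors.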
It is definitely better than the alternative, which is getting a notification from GitHub saying you just leaked your keys to the entire world. Speaking of which, I saw that GitHub actually rolled out something called A-I Enhanced Push Protection a few weeks ago, right around March twelfth.
They did. This is a huge deal for preventing the kind of leaks Daniel is worried about. In the past, push protection just looked for obvious things like high-entropy strings or known patterns for A-W-S keys. But the new version uses large language models to understand the context of the code. It can recognize if you are accidentally committing a configuration file that looks like it contains sensitive private data, even if it doesn't match a specific known key format. It blocks the commit before it even reaches the server.
That is actually a great use of A-I. It is like having a very paranoid junior developer looking over your shoulder and saying, "Hey, are you sure you want everyone to see your home router settings?"
It is necessary because as we use more A-I agents to help us write code, the risk of secret leakage actually goes up. We talked about this in episode ten seventy, the Agentic Secret Gap. If you have an A-I agent autonomously writing and committing code, it might not realize that the local config file it just read is something that should never be pushed to GitHub. The agent is just trying to be helpful and complete the task, but it doesn't have the "common sense" to know that certain files are off-limits.
That is a scary thought. We are giving these agents the keys to the kingdom, and they might just leave the front door wide open. So, if we can't trust the agents and we can't trust ourselves to remember every git-ignore rule, what is the ultimate solution?
The ultimate solution is moving away from long-lived secrets entirely. The Open Source Security Foundation recently published a white paper called The Death of the Dot-Env File. Their argument is that in twenty twenty-six, if you are still manually managing strings of characters for authentication, you are doing it wrong.
If I am not using a password or a key, how am I getting into my server? Do I just knock politely?
You use Open I-D Connect, or O-I-D-C, combined with Workload Identity Federation. This is a game changer for C-I C-D pipelines. Instead of Daniel storing a secret key in his GitHub Actions settings—which is another form of the dual-repo tax because that secret lives outside the repo—his GitHub runner can essentially present a digital I-D card to his cloud provider, like A-W-S or Azure.
How does that work in practice? Does the cloud provider just trust GitHub?
You set up a trust relationship. You tell A-W-S, "If a request comes from this specific GitHub repository and this specific workflow, you can trust it for sixty seconds." The cloud provider checks that card, sees that it is a legitimate request, and grants a temporary, short-lived token.
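A sketch of what that looks like on the GitHub side, using the real `permissions: id-token: write` setting and the `aws-actions/configure-aws-credentials` action. The role ARN and deploy script are placeholders; the matching trust policy on the AWS side is assumed to already name this repo and workflow.

```shell
# Hypothetical GitHub Actions workflow: no stored AWS keys anywhere.
mkdir -p .github/workflows
cat > .github/workflows/deploy.yml <<'EOF'
name: deploy
on:
  push:
    branches: [main]
permissions:
  id-token: write   # lets the runner request an OIDC token
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111122223333:role/github-deploy
          aws-region: us-east-1
      - run: ./deploy.sh   # runs with short-lived credentials only
EOF
```

The runner presents its OIDC token, AWS checks it against the trust policy, and the job receives temporary credentials that expire on their own.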
So there is no secret to leak, because the secret only exists for that sixty-second window while the deployment is happening. Even if someone hacked the C-I C-D runner after the job finished, there would be nothing left to find.
It removes the need for those parallel repositories because the sensitive part—the authentication—is handled through a trust relationship between the platforms, not through a file in the repo. It is the realization of the "secretless" dream.
I love that. It feels much more secure and it gets rid of that nagging feeling that you have a ticking time bomb hidden in your settings. But Herman, we have to talk about the elephant in the room. What about the people who say that encrypting secrets in a repo is a bad idea because of quantum computing? If I put an encrypted file in a public repo today, someone can just download it and wait ten years until they have a quantum computer that can crack the encryption like an egg.
That is a very valid concern and it is a big part of the current debate in the DevOps community. Some people argue that the only truly safe way to handle secrets is to keep them strictly external, using tools like Doppler or OnePassword for Developers. In that model, your code just has a placeholder, and the actual secret is injected at the very last millisecond of execution.
It is like a spy movie where the agent only gets the combination to the safe right when they are standing in front of it.
It is. The trade-off is that it adds another dependency. If your secret manager goes down, your whole deployment pipeline breaks. But for someone like Daniel, who is very forward-thinking, he might decide that for his most sensitive stuff, an external manager is the way to go, while using S-O-P-S for the less critical configuration details. It is all about risk assessment.
It seems like the common thread here is that the dual-repo strategy is a solution to a problem that we have better ways of solving now. Whether it is encryption, sparse-checkout, or identity-based auth, we can finally have that single source of truth without sacrificing security. And the benefits for the open-source community are huge.
They really are. When you have a single repository, the onboarding experience for a new contributor is night and day. They clone one repo, they see the whole architecture, and they can use a tool like devcontainers to spin up a perfect replica of your environment in seconds.
Devcontainers! We haven't talked about those yet. How do they fit into this?
For a project like Daniel's, providing a devcontainer configuration is a great way to standardize the local environment. You can include all the tools we've talked about—S-O-P-S, direnv, the latest version of Git—all pre-configured in a Docker container. A new contributor just opens the project in V-S Code, and they have the exact same environment as Daniel, without him having to write a fifty-page ReadMe file. It encapsulates all that "private logic" into a standard, reproducible box.
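A minimal devcontainer sketch along the lines Herman describes. The base image and the `mkhl.direnv` extension are real; the `setup-tools.sh` script is a hypothetical place to install sops, direnv, and anything else the project needs.

```shell
# Hypothetical .devcontainer/devcontainer.json for the project.
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "daniels-project",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "postCreateCommand": "./scripts/setup-tools.sh",
  "customizations": {
    "vscode": {
      "extensions": ["mkhl.direnv"]
    }
  }
}
EOF
```

Opening the repo in VS Code then rebuilds the same toolchain for every contributor, with no fifty-page setup guide.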
Anything that reduces the amount of reading I have to do is a win in my book. But let's get practical. If Daniel wanted to start moving away from his dual-repo setup tomorrow, what is the first step?
The first step is an audit. Use a tool like Gitleaks or TruffleHog to scan your current public and private repos. You might be surprised at what is already lurking in your git history. Remember, even if you delete a secret in a new commit, it is still there in the history. You need a clean slate.
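Herman's warning that deleted secrets live on in history is easy to demonstrate in a throwaway repo. The key value here is fake, invented for the demo.

```shell
# Toy demo: "removing" a secret in a new commit does not remove it
# from the repository's history.
git init -q leakdemo
cd leakdemo
git config user.email demo@example.com
git config user.name Demo

echo 'API_KEY=hunter2' > config.txt
git add config.txt
git commit -qm "add config"

echo 'API_KEY=redacted' > config.txt
git add config.txt
git commit -qm "remove secret"

grep API_KEY config.txt          # the working tree looks clean...
git log -p | grep 'hunter2'      # ...but the old value is still in history
```

This is why scanners like Gitleaks and TruffleHog walk the full history, and why a leaked key has to be rotated, not just deleted.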
Once he has a clean slate, what's next?
I would start by implementing direnv for local variations. It is the lowest friction change and it immediately makes your local workflow feel more organized. Then, look at S-O-P-S for the files that absolutely must be in the repo but need protection. And finally, look at your C-I C-D pipeline. If you are still using static secrets in your GitHub settings, make it a priority to switch to O-I-D-C. It is a bit of a learning curve to set up the trust relationship with your cloud provider, but once it is done, you will never want to go back.
It feels like the overarching philosophy here is that open source isn't about exposing your infrastructure; it's about exposing your logic. You want people to see how you solved the problem, not the specific credentials you used to do it.
That is a great way to put it. We are moving toward a world where the infrastructure is just as much "code" as the application itself, but that doesn't mean it all has to be public. We just need better filters. We need to stop thinking of repositories as static folders and start thinking of them as secure, dynamic environments.
I think about how this applies to some of the stuff we've talked about before, like in episode twelve twenty-nine where we dug into secrets management. Back then, we were mostly focused on just not being the person who leaks their keys on Twitter. But now, in twenty twenty-six, the bar is higher. It is about professional-grade workflows that scale and allow for global collaboration without the "dual-repo tax."
It really is. And it is about making it easy for others to join in. If your project is a nightmare to set up because of a complex dual-repo structure, you are going to lose out on a lot of great contributions. You want the barrier to entry to be as low as possible, while the security remains as high as possible.
I wonder if we will eventually see GitHub or GitLab just build this kind of encryption directly into the interface. Like a "make this file private-in-public" checkbox.
That would be the dream. We are seeing bits of it with the push protection and the secret scanning, but a native, transparent encryption layer would be the final nail in the coffin for the dual-repo tax. Until then, the tools we have—S-O-P-S, direnv, O-I-D-C—are pretty fantastic if you take the time to set them up.
I'm just glad I don't have to watch you juggle those two laptops anymore. It was making me nervous. I thought you were going to accidentally push your bank statements to a public repo.
It was making me nervous too, Corn. I think I'm going to spend my afternoon setting up S-O-P-S for my latest project. It's time to retire the old "private fork" dance. It is a relic of a less secure era.
Just make sure you don't encrypt your grocery list by mistake. I don't think A-W-S K-M-S is going to help you remember to buy milk if you lose your private key.
I'll keep the milk list in plaintext, I promise. Though, given how much I like cheese, maybe my dairy intake should be a closely guarded secret.
Good. Well, I think we have given Daniel plenty to chew on. It is a complex topic, but the shift toward single-repo management is definitely where the industry is headed. It is more efficient, more secure, and frankly, a lot less stressful for the maintainer.
And it fits perfectly with the ethos of open source. Transparency where it matters, security where it counts. It allows us to build in public without being vulnerable in public.
We should probably wrap this up before you start reciting the Git manual from memory. I can see that look in your eyes, and it is a very nerdy look.
I make no apologies for my enthusiasm for version control systems, Corn. They are the backbone of modern civilization! Without Git, we would still be emailing zip files to each other like cavemen.
Okay, okay, calm down. Let's get out of here before you start explaining the internal data structure of a git blob.
Before we go, I want to give a big shout-out to our producer, Hilbert Flumingtop. He is the one who keeps this whole operation running smoothly while we are busy arguing about Git hooks and encryption algorithms.
And a huge thanks to Modal for sponsoring the show. They provide the G-P-U credits that power our whole generation pipeline, and we couldn't do this without them. If you need serverless G-P-U power that just works, check them out. They are doing for infrastructure what we are trying to do for repositories—making it simpler and more secure.
This has been My Weird Prompts. We really appreciate you spending some time with us today. We hope this helps you reclaim your velocity and kill that dual-repo tax once and for all.
If you found this episode helpful, or if you just want to see more of Herman's donkey-themed tech rants, leave us a review on your favorite podcast app. It really helps other people find the show and join the conversation.
We will see you in the next one. Keep your code open and your secrets closed.
Bye, Herman.
Bye, Corn.