#1782: Jenkins, GitHub, or Tekton? Picking Your 2025 CI/CD Engine

Jenkins is still the COBOL of DevOps, but the "one size fits all" model is dead. Here’s how to pick your pipeline.

Episode Details
Episode ID: MWP-1936
Duration: 22:05
Pipeline: V5
TTS Engine: chatterbox-regular
Script Writing Agent: Gemini 3 Flash

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The state of CI/CD in 2025 is defined by fragmentation. The era of forcing every workflow into a single tool is over, replaced by a diverse ecosystem of specialized engines designed for specific environments, from cloud-native startups to enterprise-legacy mainframes. While the "just use Jenkins" era has technically passed, the old titan remains a dominant force, sitting at the heart of thousands of critical workflows.

Jenkins: The COBOL of DevOps
Despite being released in 2011 (itself a rename of the even older Hudson project), Jenkins retains massive market share in 2025. Its persistence is not just inertia but a technical moat built on an ecosystem of over 1,800 community-contributed plugins. This lets it connect to virtually any system, from obscure 90s mainframes to private clouds in secure bunkers. That strength, however, is also its greatest weakness: the "plugin hell" phenomenon means a single security update can take down an entire build fleet through version mismatches in unmaintained plugins.

Efforts to modernize Jenkins, such as the "Configuration as Code" (JCasC) movement, attempt to treat the server as a disposable container defined in YAML. While this improves reproducibility, it doesn't eliminate the fundamental challenge: Jenkins is still a stateful server requiring management of JVM heap sizes, disk space for logs, and "zombie" workers that refuse to disconnect.
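A minimal sketch of the JCasC idea — the controller defined declaratively in a checked-in YAML file rather than through the UI (all names, hosts, and values below are hypothetical):

```yaml
# jenkins.yaml -- loaded by the Configuration as Code plugin at startup
jenkins:
  systemMessage: "Configured as code; manual UI changes will be overwritten"
  numExecutors: 0           # the controller itself runs no builds
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD}"   # injected from a secret, not committed
  nodes:
    - permanent:
        name: "linux-agent-1"
        remoteFS: "/home/jenkins"
        launcher:
          ssh:
            host: "10.0.0.5"
            credentialsId: "agent-ssh-key"
unclassified:
  location:
    url: "https://jenkins.example.com/"
```

Because the file fully describes the controller, the server can be rebuilt from scratch — though, as noted above, the running process is still stateful.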

The Integrated Giants: GitHub Actions and GitLab CI
For new projects, the default choice is often GitHub Actions or GitLab CI. GitHub Actions has seen explosive growth, driven by proximity to code: if your repository is already on GitHub, setting up a separate CI server is unnecessary friction. Once merely "good enough," it has matured into a powerful platform with a marketplace model in which each action runs in isolation, avoiding the systemic risk of Jenkins' shared plugin architecture.
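As a concrete illustration of that proximity, the entire pipeline is one file in the repository (hypothetical Python project; the `checkout` and `setup-python` actions are real marketplace actions):

```yaml
# .github/workflows/ci.yml
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # each action runs isolated per step
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest
```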

GitLab CI takes a different approach, positioning itself as a "Unified DevSecOps platform." It offers a cohesive single application for the entire lifecycle, integrating container registries, security dashboards, and built-in Kubernetes clusters. This makes it particularly attractive to high-compliance organizations like banks and healthcare providers, which require strict control over data residency and the ability to hit a "kill switch" that cloud-first models sometimes struggle to provide.
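To illustrate the single-application pitch, a hedged sketch of a `.gitlab-ci.yml` (project details hypothetical; the SAST template is a real built-in GitLab include that wires security scanning into the same file):

```yaml
# .gitlab-ci.yml
stages: [build, test]

include:
  - template: Security/SAST.gitlab-ci.yml   # built-in static security scanning

build-job:
  stage: build
  image: node:20
  script:
    - npm ci
    - npm run build

test-job:
  stage: test
  image: node:20
  script:
    - npm test
```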

Kubernetes-Native Orchestration: Tekton and Argo
The most significant philosophical shift is the move toward container-first, Kubernetes-native tools like Tekton and Argo Workflows. These tools treat the cluster itself as the execution environment. Tekton, built on Kubernetes Custom Resource Definitions (CRDs), allows the cluster to natively understand pipeline concepts like "Tasks" and "Pipelines." Instead of running a Jenkins agent that executes a Docker command, you tell Kubernetes to fetch a container and execute steps directly. This enables dynamic composition and scales CI infrastructure using the same tools as production apps, such as Horizontal Pod Autoscalers and spot instances.
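For a sense of what "pipelines as Kubernetes objects" looks like in practice, here is a minimal sketch of a Tekton Task (task name, image, and workspace are illustrative); the cluster's API server validates and stores this like any other resource:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-tests
spec:
  workspaces:
    - name: source              # populated by an earlier git-clone Task
  steps:
    - name: unit-tests
      image: golang:1.22        # the step IS a container; no agent in between
      workingDir: $(workspaces.source.path)
      script: go test ./...
```

Creating a `TaskRun` referencing this Task is what actually schedules the pod.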

Argo Workflows is broader, functioning as a container-native workflow engine used for machine learning, data processing, and complex deployments. Its "killer app" is Argo CD, which implements the GitOps pattern. Instead of pushing code to a server, the cluster pulls the desired state from Git and "heals" any manual deviations, representing a fundamental shift from imperative to declarative operations.
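A sketch of the GitOps pattern as an Argo CD Application manifest (repo URL and paths hypothetical); `selfHeal` is the setting that reverts manual drift back to what Git declares:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-config.git
    targetRevision: main
    path: k8s/
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes made directly in the cluster
```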

Specialized and Portable Solutions
The market also includes commercial powerhouses and specialized tools. CircleCI optimizes for speed to feedback with features like "Test Splitting," which automatically balances test suites across parallel containers. Bitrise has become the default for mobile development, handling the specific friction of iOS and Android builds, such as managing macOS runners and provisioning profiles.

Finally, the "engine-agnostic" movement is gaining traction with tools like Dagger. Created by Solomon Hykes (creator of Docker), Dagger addresses the "it works on my machine" problem by moving pipeline logic out of proprietary YAML and into actual code (Python, Go, TypeScript). This "Pipeline as Code" approach allows the exact same script to run locally and in CI, with an intelligent caching system that avoids rebuilding unchanged parts of massive monorepos. As AI begins to creep into these pipelines, the landscape continues to evolve, offering teams more choice—and more complexity—than ever before.
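The caching idea is worth making concrete. This is NOT the Dagger SDK — just the underlying concept in plain Python: each step's cache key is a hash of its name plus the content of its inputs, so only steps whose inputs changed re-execute.

```python
# Illustrative sketch of content-addressed step caching (the idea behind
# Dagger-style pipeline caching; names and values are hypothetical).
import hashlib

class StepCache:
    def __init__(self):
        self._results = {}  # content hash -> cached step output

    def _key(self, step_name, input_files):
        # Key covers the step name and every input's content, so a change
        # to any input invalidates only the steps that read it.
        h = hashlib.sha256(step_name.encode())
        for path in sorted(input_files):
            h.update(path.encode())
            h.update(input_files[path].encode())
        return h.hexdigest()

    def run(self, step_name, input_files, action):
        key = self._key(step_name, input_files)
        if key in self._results:
            return self._results[key], True   # cache hit: step skipped
        result = action(input_files)
        self._results[key] = result
        return result, False                  # cache miss: step executed

cache = StepCache()
src = {"svc_a/main.py": "print('a')"}
out1, hit1 = cache.run("build-svc-a", src, lambda f: "image:svc-a")
out2, hit2 = cache.run("build-svc-a", src, lambda f: "image:svc-a")
print(hit1, hit2)  # False True: the second run reuses the cached result
```

In a fifty-service monorepo, a one-line change in one service leaves the other forty-nine steps' keys unchanged, so they resolve from cache.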


#1782: Jenkins, GitHub, or Tekton? Picking Your 2025 CI/CD Engine

Corn
Alright, we are diving into the plumbing today. Today's prompt from Daniel is about the state of CI/CD tools in twenty twenty-five, and it is a massive landscape. We are moving way beyond the "just use Jenkins" era, though as Daniel points out, the old titan is still very much in the room.
Herman
It is the ultimate "can't live with it, can't live without it" software. Herman Poppleberry here, by the way. I have been looking at the data for this year, and what is fascinating is how much the "one size fits all" model has just completely shattered. We have gone from a world where everyone tried to force their workflow into one tool, to a world where teams are picking highly specialized engines based on whether they are cloud-native, mobile-first, or enterprise-legacy.
Corn
It feels like the stakes have shifted too. It used to be just about "did the build pass?" Now, it is about security scanning, cost orchestration, and whether your pipeline can handle thousand-node Kubernetes clusters without falling over. Oh, and fun fact before we get too deep into the weeds—today’s episode is actually powered by Google Gemini Three Flash. It is the one helping us organize this massive survey of tools.
Herman
Which is fitting, because AI is starting to creep into these pipelines in ways we will talk about later. But let’s start with the elephant in the room. Jenkins. It was released in twenty eleven—fourteen years ago—and yet, if you look at the twenty twenty-five surveys, it is still sitting there with a massive market share.
Corn
It is the COBOL of DevOps. You want to hate it, you want to move away from it, but every time you try, you realize you have ten thousand lines of Groovy script that no one wants to rewrite. Why does it persist, Herman? Is it just inertia, or is there a technical "moat" there?
Herman
It is the plugin ecosystem. That is the paradox. Jenkins has over eighteen hundred community-contributed plugins. If you have some weird, obscure mainframe from the nineties that needs to trigger a deployment to a private cloud in a bunker, there is a Jenkins plugin for that. But that is also its downfall. We call it "plugin hell" for a reason. You update one security patch and suddenly your entire build fleet is offline because of a version mismatch in a plugin that hasn't been maintained since twenty eighteen.
Corn
I have seen teams try to solve that with the "Jenkins Configuration as Code" movement. Basically trying to treat the snowflake server like a disposable container. Does that actually work in practice, or is it just putting a tuxedo on a donkey? No offense to your species, Herman.
Herman
None taken! I am a proud donkey, and even I know when a load is too heavy. Jenkins Configuration as Code, or JCasC, is a valiant effort. It allows you to define your entire Jenkins master—plugins, security realms, nodes—in a YAML file. It definitely makes it more reproducible. But at the end of the day, you are still managing a stateful server. You still have to worry about the JVM heap size, the disk space for build logs, and the "zombie" workers that refuse to disconnect.
Corn
Which is exactly why the world moved toward GitHub Actions and GitLab CI. It feels like these two have basically captured the "default" spot for any new project. If you are starting a company in twenty twenty-five, are you even looking at anything else?
Herman
For most people, no. GitHub Actions in particular has seen explosive growth—something like forty percent year-over-year growth in workflow runs recently. The reason is simple: proximity to the code. If your code lives on GitHub, why would you go through the friction of setting up a separate CI server, managing webhooks, and dealing with cross-tool authentication?
Corn
It is the "good enough" trap, right? It might not be the most powerful engine, but it is right there. It is like the microwave in your kitchen. You could go to a five-star restaurant, but the microwave is three feet away and it gets the job done in two minutes.
Herman
That is a fair comparison, but GitHub Actions has actually become quite a powerful microwave. Their "Actions Marketplace" has essentially replaced the Jenkins plugin library, but with a much better security model. Because each action runs in its own isolated container or environment, you don't get that "plugin hell" where one bad script takes down the whole system. What is really interesting is how GitLab CI approaches it differently. GitLab isn't just a tool; it is a "Unified DevSecOps platform."
Corn
They love that term. "Single application for the entire lifecycle."
Herman
They do, but they back it up. GitLab's integrated container registry, security dashboards, and built-in Kubernetes clusters make it feel much more cohesive than GitHub's "collection of parts" approach. If you are a high-compliance organization—think banking or healthcare—GitLab's self-managed version gives you a level of control over the data residency and the "kill switch" that GitHub's cloud-first model sometimes struggles with.
Corn
So we have the legacy titan and the integrated giants. But then there is this whole other world that has emerged in the last few years—the container-first, Kubernetes-native stuff. I am talking about Tekton and Argo Workflows. This feels like a different philosophy entirely. It is not just "running a script," it is "orchestrating a cluster."
Herman
This is where we get into the heavy engineering. Tekton is fascinating because it is built on Kubernetes Custom Resource Definitions, or CRDs. In a traditional CI tool, the "pipeline" is a proprietary concept the tool understands. In Tekton, a "Task" or a "Pipeline" is literally an object in the Kubernetes API.
Corn
So you are saying the cluster itself knows how to build software?
Herman
Precisely. You aren't running a "Jenkins Agent" that then runs a docker command. You are telling Kubernetes: "Here is a TaskRun object, go fetch this container and execute these steps." This allows for incredible dynamic composition. You can scale your CI infrastructure using the exact same tools you use to scale your production app—Horizontal Pod Autoscalers, spot instances, the works.
Corn
I remember reading a case study about a SaaS company that migrated from a massive Jenkins cluster to GitHub Actions, and they cut their maintenance time by sixty percent. But if you go from GitHub Actions to something like Tekton, does the complexity go back up? It feels like you need a Ph.D. in Kubernetes just to print "Hello World."
Herman
The complexity definitely shifts. You are trading "maintenance of a snowflake server" for "maintenance of a platform." If you are a five-person startup, Tekton is overkill. You will spend more time debugging your YAML than writing code. But if you are a platform team at a Fortune five hundred company supporting five thousand developers, Tekton is a godsend. It gives you a standardized, "white-label" foundation that you can build your own internal developer platform on top of.
Corn
And then there is Argo Workflows. I see Argo mentioned constantly in the same breath as "GitOps." Is it a CI tool, a CD tool, or just a general-purpose "do things in order" tool?
Herman
It is a container-native workflow engine. While Tekton is very focused on the "build-test-deploy" cycle, Argo is broader. It is used for machine learning pipelines, data processing, and complex multi-step deployments. According to recent data, over sixty percent of Fortune one hundred companies are using Argo for their Kubernetes-native automation. The "killer app" there is Argo CD, which implements the GitOps pattern. It watches your Git repo and ensures the cluster state matches the code. If someone manually changes a setting in the cluster, Argo sees it and "heals" it back to what the code says.
Corn
That is a huge shift in mindset. We used to "push" code to a server. Now the cluster "pulls" the desired state. It is like the difference between me throwing a ball at you and you just deciding you want to be holding a ball and then making it manifest in your hands.
Herman
That is a very "sloth-like" way of looking at physics, Corn, but it is accurate for software. Now, we should mention the commercial powerhouses because they are still doing some very cool things. CircleCI, Travis CI, Bitrise... people often ask why they should pay for these when GitHub Actions is "free" for many.
Corn
"Free" often means "you spend forty hours a month fixing your own runners."
Herman
That is the value proposition. Take CircleCI. They have spent years optimizing the "speed to feedback" loop. Their "Orb" system is great, but their real edge is in the runner optimization—things like "Test Splitting" where the tool automatically looks at your test suite, calculates which tests take the longest, and balances them across parallel containers to ensure the build finishes in the shortest time possible.
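A minimal sketch of the balancing Herman describes — illustrative only, not CircleCI's actual algorithm. Greedy longest-processing-time scheduling: assign each test, slowest first, to the currently least-loaded container (all test names and timings hypothetical):

```python
def split_tests(test_timings, n_containers):
    """Balance tests across containers by historical duration (seconds)."""
    containers = [[] for _ in range(n_containers)]
    loads = [0.0] * n_containers
    # Slowest tests first, each placed on the least-loaded container so far.
    for name, seconds in sorted(test_timings.items(), key=lambda kv: -kv[1]):
        i = loads.index(min(loads))
        containers[i].append(name)
        loads[i] += seconds
    return containers, loads

timings = {"test_checkout": 120, "test_search": 90, "test_auth": 60,
           "test_cart": 45, "test_profile": 30}
groups, loads = split_tests(timings, 2)
print(loads)  # the two containers finish in roughly equal time
```

This is exactly the "one container finishes in two minutes, another takes twenty" failure mode that naive alphabetical splitting produces and timing-based splitting avoids.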
Corn
I have seen teams try to do that manually with scripts, and it is a nightmare to maintain. You end up with one container that finishes in two minutes and another that takes twenty, and your whole pipeline is stuck waiting for that one slowpoke.
Herman
And then you have Bitrise, which has basically become the default for mobile development. Building iOS and Android apps is a special kind of torture. You need macOS runners for Xcode, you need to manage provisioning profiles, certificates, and Play Store keys. Bitrise handles all of that "mobile-specific" friction. If you are building a React Native or Flutter app in twenty twenty-five, you are almost certainly using Bitrise or maybe Codemagic because the general-purpose tools just don't have the specialized hardware and software stacks ready to go.
Corn
It is the "abstraction tax." You pay a premium so you don't have to think about the underlying mess. But what about the "engine-agnostic" movement? I have been hearing a lot about Dagger lately. Solomon Hykes, the creator of Docker, is behind it, right?
Herman
Yes, and Dagger is trying to solve the "it works on my machine" problem once and for all. Think about it: right now, your CI pipeline is usually written in some proprietary YAML—GitHub Actions YAML, GitLab YAML, CircleCI YAML. If you want to run that same logic on your laptop to debug a failing test, you can't. You have to commit, push, and wait ten minutes for the CI to run.
Corn
It is the most frustrating part of being a developer. The "Commit, Push, Pray" cycle.
Herman
Dagger changes that by moving the logic out of the YAML and into code—Python, Go, or TypeScript. You write your pipeline as a script using the Dagger SDK. That script runs in a containerized engine. Because the engine is portable, you can run the exact same command on your MacBook and in the GitHub Actions runner. The YAML just becomes a "trigger" that calls the Dagger script.
Corn
That feels like a return to the "build script" era, but with twenty-first-century containerization. It is like we realized that YAML isn't a programming language and we finally decided to go back to actual code.
Herman
It is the "Pipeline as Code" versus "Pipeline as Data" debate. YAML is data. It is great for simple things, but as soon as you need a "for loop" or complex conditional logic, YAML becomes a mess of nested strings and custom tags. Dagger says, "Just use a programming language." It also has this incredible "Action Caching" system. It calculates a hash for every single step in your build. If you haven't changed the files relevant to a specific step, Dagger just pulls the result from the cache, even if you are running it on a completely different machine.
Corn
That could be a game-changer for monorepos. If I have a massive repository with fifty microservices, and I only change one line in one service, I don't want to rebuild the other forty-nine.
Herman
And that brings us to the specialized delivery tools. We have talked a lot about the "CI" part—Continuous Integration. But the "CD" part—Continuous Delivery—is where things get really hairy at scale. That is where Spinnaker comes in.
Corn
Spinnaker is the Netflix tool, right? It always sounded a bit like a "Death Star" of deployment tools. Massive, powerful, but maybe a bit dangerous if you don't know what you are doing.
Herman
It is definitely an enterprise-grade weapon. Spinnaker was designed for multi-cloud deployments. If you need to deploy the same application to AWS, Google Cloud, and an on-premise OpenStack cluster simultaneously, while doing a "Canary" deployment where only one percent of traffic goes to the new version, Spinnaker is the only tool that handles that natively.
Corn
Explain the "Canary" thing for a second, because I think people hear these terms but don't realize how much engineering goes into them.
Herman
Sure. In a traditional deployment, you just swap the old version for the new one. If the new one has a bug, everyone crashes. In a "Canary" deployment—named after the canary in the coal mine—you deploy the new version to a tiny slice of your infrastructure. Spinnaker then automatically monitors your metrics—error rates, latency, CPU usage. If it sees that the "Canary" version is performing worse than the "Baseline" version, it automatically rolls back the deployment before most users even notice.
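A hedged sketch of the canary comparison Herman describes — illustrative only; Spinnaker's actual canary judge (Kayenta) uses statistical tests rather than a simple threshold. Here the decision is just: roll back if the canary's mean error rate exceeds the baseline's by more than a relative margin (sample values hypothetical):

```python
def canary_verdict(baseline_errors, canary_errors, tolerance=0.5):
    """Compare mean error rates; roll back if the canary is more than
    `tolerance` (relative margin) worse than the baseline."""
    base = sum(baseline_errors) / len(baseline_errors)
    canary = sum(canary_errors) / len(canary_errors)
    return "rollback" if canary > base * (1 + tolerance) else "promote"

# Canary error rate ~0.10 vs baseline ~0.013: well past the margin.
print(canary_verdict([0.01, 0.02, 0.01], [0.09, 0.10, 0.11]))  # rollback
```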
Corn
That is the "Reliability" part of the equation. But man, the setup for Spinnaker is legendary for being difficult. I think I once saw a diagram of its architecture and it had like twelve different microservices just to run the deployment tool itself.
Herman
It is a lot to manage. That is why we are seeing a shift toward "managed Spinnaker" or tools like Harness. Harness takes that "automated canary" and "AI-driven rollback" logic and provides it as a SaaS. They are leaning heavily into the "AI" aspect Daniel mentioned in his notes.
Corn
Alright, let's talk about the AI. It is twenty twenty-five, everything has "AI" slapped on it. Is there actual substance here in the CI/CD world, or is it just a marketing gloss?
Herman
There is some real substance, particularly in "Predictive Build Failure" and "Log Analysis." Imagine you have a test suite that takes two hours to run. An AI-integrated CI tool can look at the code changes you just made and say, "Statistically, these changes only affect these three modules. I am going to run those tests first." Or even better, "Based on past failures, this change is ninety percent likely to break the integration test in the checkout module. I am going to alert the developer before the build even starts."
Corn
That is actually useful. It is like having a very obsessive senior developer looking over your shoulder. What about the log analysis? Because staring at fifty thousand lines of Jenkins console output is my personal idea of hell.
Herman
That is exactly where LLMs shine. Tools are now starting to pipe build failures into models that can summarize the error. Instead of "Exit Code One" followed by a stack trace, you get: "The build failed because you updated the library version in line forty-two of the package file, but you didn't update the initialization call in the main class." It can even suggest the fix or open a pull request to correct it.
Corn
I can see the "Security" side of this being huge too. We are seeing tools like Snyk and Aqua Security being integrated directly into the pipeline. It is not just about "does it run," it is about "is it a ticking time bomb?"
Herman
The "Shift Left" movement is in full swing in twenty twenty-six. We are moving security scanning from "something we do once a quarter" to "something that happens every time you save a file." This is where Azure DevOps is actually a secret weapon for many large organizations.
Corn
Daniel mentioned Azure DevOps being popular for large teams. It always felt a bit "corporate" to me, but I guess if you are an "all-in" Microsoft shop, it makes sense?
Herman
It is incredibly cohesive. If you are using Azure for your cloud, Active Directory for your identity, and Teams for your communication, Azure DevOps ties it all together. It handles your boards, your repos, your test plans, and your pipelines in one interface. For a project manager at a large bank, that "one pane of glass" is worth its weight in gold. They don't want to hunt through five different tools to see if a feature is ready for release.
Corn
So, we have covered a lot of ground. We have the "Old Guard" in Jenkins. The "New Defaults" in GitHub and GitLab. The "Kubernetes Natives" in Tekton and Argo. The "Mobile Specialists" like Bitrise. The "Portable Engines" like Dagger. And the "Enterprise Shifters" like Spinnaker and Azure DevOps. If you are a developer listening to this, how do you even begin to choose?
Herman
I think you have to look at your "Constraint" first. If your constraint is "I have zero time to manage infrastructure," you go with GitHub Actions or a managed service like CircleCI. If your constraint is "I have massive, complex deployments across multiple clouds," you look at Spinnaker or Harness. If your constraint is "I need my local development to match the CI exactly," you look at Dagger.
Corn
And if your constraint is "I have ten years of legacy scripts and no budget to rewrite them," you stay on Jenkins and maybe try to clean it up with JCasC.
Herman
I mean... you are right. Sorry, I almost used the forbidden word there. You are spot on. One thing we haven't touched on is the "Cost" aspect. In twenty twenty-five, "Cloud Bill Shock" from CI/CD is a real thing. These tools are so easy to scale that you can accidentally spend ten thousand dollars in a weekend because someone triggered a recursive build loop.
Corn
I have been there. It is the DevOps equivalent of leaving the garden hose running while you go on vacation.
Herman
This is driving a trend toward "Self-Hosted Runners" on public clouds. Teams are using tools like "Actions Runner Controller" for Kubernetes. It allows you to use the GitHub Actions "brain" in the cloud, but the actual "brawn"—the compute power—runs on your own servers or on cheap spot instances that you control. It is a middle ground between the convenience of SaaS and the cost-control of self-hosting.
Corn
It feels like the industry is maturing. We went through a phase of "automate everything at any cost," and now we are in the "automate intelligently and sustainably" phase. Which brings up a question: what does the tool of twenty thirty look like? Are we even going to be writing "pipelines" in five years?
Herman
I think the "Pipeline" as we know it will start to disappear. We will move toward "Intent-Based Delivery." Instead of telling the computer "Step one: build. Step two: test. Step three: deploy," we will say, "Ensure this code is running in production with ninety-nine point nine percent availability and no high-severity vulnerabilities." The underlying "orchestrator"—which will be heavily AI-driven—will figure out the necessary steps to make that happen.
Corn
That sounds amazing and terrifying. It is the ultimate "No-Ops" dream. But we are still a long way from that. For now, we are stuck with YAML and Groovy and the occasional "Why is this build still pending?" frustration.
Herman
It is a journey. But the tools we have now are light-years ahead of what we had even five years ago. The fact that a single developer can set up a global deployment pipeline in an afternoon using GitHub Actions is a miracle of modern engineering.
Corn
Alright, let's wrap this up with some practical takeaways. If you are a listener and you are feeling overwhelmed by this survey, here is the "Corn and Herman" cheat sheet.
Herman
Number one: If you are starting a new project and your code is on GitHub or GitLab, just use their built-in CI. Period. Don't overthink it. You can always migrate later if you hit a wall, but the speed of starting is more important than finding the "perfect" tool.
Corn
Number two: If you are feeling the pain of "it works on my machine but breaks in CI," look at Dagger. Even if you don't move your whole pipeline there, using it for your most complex build steps can save you hours of debugging.
Herman
Number three: If you are running on Kubernetes and you find yourself struggling with "scripts inside containers," look at Argo Workflows or Tekton. Embrace the native way of doing things rather than fighting the cluster.
Corn
And number four: If you are doing mobile, just go to Bitrise. Don't try to build your own Mac Mini farm in a closet. It is not twenty fourteen anymore. Your time is worth more than the subscription fee.
Herman
Well said. This has been a fun deep dive. It is easy to get caught up in the "new and shiny," but the "old and reliable" still has its place. The key is knowing which one you need for the job at hand.
Corn
Thanks as always to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes. And a big thanks to Modal for providing the GPU credits that power this show's generation pipeline.
Herman
If you found this survey helpful, we would love it if you could leave us a review on Apple Podcasts or Spotify. It genuinely helps other nerdy brothers find the show.
Corn
This has been "My Weird Prompts." You can find all seventeen hundred-plus episodes and our RSS feed at myweirdprompts dot com.
Herman
Catch you in the next one.
Corn
Later.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.