Daniel sent us a voice message while he was doing some spring cleaning, which I find deeply relatable. Nothing like organizing your sock drawer while listening to people talk about agent-to-agent protocols. His question, distilled, is this — we talk about the technical plumbing of agentic AI and MCP constantly, but where is the professional identity forming around this stuff? Where are the communities, the conferences, the standards bodies, the career tracks? He wants to know where the builders are actually gathering and exchanging ideas, especially for someone based in Israel who might feel a bit outside the usual tech hubs.
By the way, DeepSeek V four Pro is writing our script today. So if anything sounds unusually coherent, that's why.
I'll take that as a compliment to our usual incoherence.
This question hits on something I've been thinking about for months. We're in this weird moment where the infrastructure is maturing faster than the professional identity around it. The plumbing exists, the protocols are being standardized, but ask someone "what's your job title" and they'll probably say something vague like "AI engineer" or "developer who messes with agents." That's not a profession yet. That's a hobby that pays.
Which is exactly the gap Daniel's pointing at. When does it stop being something you tinker with and start being something you are?
And the signals are actually there if you look. Let's start with the most concrete one — the Linux Foundation. In March of this year, they announced they're hosting the Open Agent Standard, or OAS. This is a foundation explicitly modeled on what they did for cloud-native computing with the CNCF. The goal is vendor-neutral governance for agent interoperability standards. And the founding members include Google, Anthropic, Microsoft, Salesforce, SAP, and about two dozen others.
That's not a small signal. The Linux Foundation doesn't get involved when something is just a passing hype cycle. They're infrastructure people. They think in decades.
And what's interesting is what they're specifically tackling. The announcement said they're focusing on three things — agent identity and discovery, agent-to-agent communication, and agent security. That's not a product. That's an ecosystem. They're building the equivalent of what HTTP and DNS did for the web. And they're doing it through working groups that meet regularly, with published minutes and contribution guidelines.
If Daniel's asking "where's the professional association," part of the answer is that it's forming inside the Linux Foundation's working groups right now. Those are the rooms where the standards are being debated. And the nice thing about open source foundations is that geography matters less than it used to. You can contribute to a working group from Jerusalem just as easily as from San Francisco.
The time zones are brutal. I've done enough late-night calls with contributors in Asia to know that.
You stayed up until three in the morning arguing about a JSON schema once. You were very proud.
It was a very important schema. But let me talk about the other major standardization effort, because it's directly relevant to what Daniel's been building with MCP. In April of last year, Google announced the Agent-to-Agent Protocol, or A2A. This is separate from MCP but complementary. MCP is about how agents connect to tools and data. A2A is about how agents talk to each other. Google open-sourced it under Apache two point zero, and they've already got implementations in Python, JavaScript, and Go.
This is where it gets interesting from a professionalization standpoint, because A2A didn't emerge from a vacuum. Google worked with about fifty partner companies on the initial specification. They had an early adopter program that included SAP, Salesforce, ServiceNow, and a bunch of others. So you're seeing the formation of a multi-vendor coalition around a shared standard, which is exactly the kind of thing that eventually generates certification programs and training curricula.
The key detail is that A2A is designed around something called "agent cards." Think of them as business cards for AI agents. They describe what an agent can do, what its capabilities are, how to authenticate with it, what its endpoints are. It's a discoverability mechanism. And that's the kind of standardization that makes professional tooling possible. Once you have a standard way to describe agent capabilities, you can build testing frameworks, compliance checkers, certification exams.
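Since the agent card is the load-bearing concept here, it's worth seeing one. Below is a rough Python sketch of what such a descriptor might contain, plus the kind of check a client could run before calling the agent. The field names follow the general shape of Google's published A2A examples, but treat them as illustrative rather than authoritative, and the invoice-reconciler agent itself is made up for this example.

```python
# Illustrative A2A-style agent card. Field names loosely follow the
# shape of published A2A examples; treat them as assumptions, not
# the normative schema. The agent itself is hypothetical.
agent_card = {
    "name": "invoice-reconciler",
    "description": "Matches incoming invoices against purchase orders.",
    "url": "https://agents.example.com/invoice-reconciler",
    "version": "0.3.1",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "authentication": {"schemes": ["bearer"]},
    "skills": [
        {
            "id": "reconcile",
            "name": "Reconcile invoice",
            "description": "Match one invoice to open purchase orders.",
            "tags": ["finance"],
        }
    ],
}

def validate_card(card: dict) -> list[str]:
    """Return the top-level fields a client would still need before it
    could discover, authenticate with, and call this agent."""
    required = ["name", "url", "version", "capabilities", "skills"]
    return [field for field in required if field not in card]

# A client fetches the card, checks it, then knows how to authenticate
# and which skills it can invoke.
missing = validate_card(agent_card)
print("missing fields:", missing)  # -> missing fields: []
print("skills:", [s["id"] for s in agent_card["skills"]])
```

That is the sense in which standardized capability descriptions enable professional tooling: once a card like this is standard, a compliance checker is a for-loop, and a certification exam can ask you to produce one.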
The parallel that keeps coming to mind is what happened with Kubernetes between roughly twenty fifteen and twenty twenty. At first it was just an open source project. Then the CNCF formed around it. Then you got certified Kubernetes administrator exams. Then you got an entire conference ecosystem — KubeCon went from a few hundred people to tens of thousands. And now "Kubernetes engineer" is a job title that recruiters understand and can filter for.
KubeCon is actually a useful data point, because the CNCF is now explicitly adding agentic AI tracks to their events. KubeCon Europe this year had a dedicated "AI and ML" track that was heavily focused on agent orchestration. The CNCF's own surveys show that about forty percent of organizations are running some form of AI workload on Kubernetes now. The communities are merging.
That gets to the conference piece of Daniel's question — where do you go to meet other builders? Part of the answer is that the existing cloud-native conference circuit is absorbing this stuff. KubeCon, Open Source Summit, even re:Invent had agentic AI workshops this year. But there are also dedicated events emerging.
The AI Engineer World's Fair in San Francisco, for example. That's become one of the main gatherings for people who are actually building with these tools, as opposed to just talking about them. It started in twenty twenty-four and by this year it had over five thousand attendees. The focus is explicitly on engineering — not research, not product demos, but actual implementation. MCP was a major topic there this year.
That distinction matters for professionalization. A profession needs a body of practical knowledge, not just theoretical understanding. The AI Engineer World's Fair is basically saying, "we're the people who make this stuff work in production." That's a professional identity forming in real time.
There's also the Agent Summit in New York, which is newer — it launched this year — and focuses specifically on enterprise agent deployment. That's the other end of the spectrum. You've got the builder-focused conferences and the enterprise-adoption conferences, and the fact that they're separate categories already tells you something about how the field is differentiating.
Daniel mentioned being based in Israel and sometimes feeling geographically out of the picture. I actually think the geography question is worth addressing directly, because it's not obvious.
It's not, and I think the honest answer is that the center of gravity is still in the Bay Area. Anthropic is there. Google's AI teams are there. OpenAI is there. The specification authors for both MCP and A2A are mostly there. If you want to be in the room where the standards are debated, that's still the room.
But — and this is a significant but — the MCP community specifically has turned out to be surprisingly distributed. Because it's an open protocol, not a product, the contribution model doesn't require colocation. The MCP GitHub repository has contributors from, I want to say, over thirty countries at this point. The specification discussions happen in GitHub issues and pull requests. The community calls are on video.
There's an MCP community Discord that's become a real hub. It's not official — it's community-run — but a lot of the server developers and tool builders hang out there. And I've seen people organize local meetups through it. There was an MCP hackathon in Tel Aviv just a couple of months ago, actually.
Which Daniel probably knows about, but if he doesn't, that's worth flagging. The Israeli tech scene has been surprisingly quick to pick up on MCP. There's a natural fit — Israel has a lot of infrastructure and devtools startups, and MCP is fundamentally a devtools protocol. I wouldn't be surprised if there's a regular meetup forming.
The broader pattern is that MCP and A2A are both creating what open source communities always create — a mix of online collaboration and periodic in-person intensives. You do the day-to-day work on GitHub and Discord, and then you fly to San Francisco or London for the conference where you actually meet the people whose pull requests you've been reviewing.
Which is exactly what Daniel described, right? "Sometimes you hop on a plane just for a few meetings that end up being really worthwhile." That's the model. It's not ideal — the carbon footprint alone is questionable — but it's the current reality.
Let me circle back to the Linux Foundation's OAS announcement, because there's a detail I want to pull out. One of the explicit goals they listed is "developing a curriculum and certification framework for agentic AI professionals." That's a direct quote from their charter document. They're not just thinking about standards. They're thinking about credentials.
Which answers another piece of Daniel's question. The certification gap he identified — "we're a little too early to see worthwhile vendor certifications" — is being actively filled by the vendor-neutral foundation model. And historically, that's the certification model that actually holds value. Vendor certifications come and go with product cycles. Foundation certifications tend to outlast individual companies.
The Certified Kubernetes Administrator exam is a good benchmark. It's practical, it's hands-on, you have to actually configure and troubleshoot a real cluster. It's not multiple choice. And it's respected because it's hard and vendor-neutral. If the Linux Foundation follows the same model for agentic AI — and their charter suggests they will — you're looking at a certification that actually signals competence, not just course completion.
Though we should be honest about the timeline here. The OAS was just announced in March. Working groups take months to produce initial specifications. Curriculum development takes another year at minimum. Realistically, the first credible agentic AI certifications are probably a late twenty twenty-seven or early twenty twenty-eight thing.
That sounds right. But the foundations are being laid now. And for someone like Daniel who's already deep in the weeds, that's actually an opportunity. The people who contribute to the early specifications and working groups are the ones who end up shaping what gets taught. If you wait until the certification exists, you're a consumer. If you're in the room while it's being designed, you're a creator.
That's the professionalization play in a nutshell, isn't it? Don't wait for the profession to exist. Help define it.
There's a practical path to doing that. The MCP specification repository on GitHub has a contributors guide. The A2A protocol has an open source repository with issues tagged "good first issue." The Linux Foundation has a call for working group participants. None of this requires being in San Francisco. It requires being willing to read specifications and write thoughtful comments.
I want to push on something, though. We're talking about professionalization as if it's an unambiguously good thing, and I think there's a tension here worth examining. Professionalization brings gatekeeping. It brings credentialism. It brings the thing where you can't get a job without the certification and you can't get the certification without the job. Are we sure we want that for agentic AI?
That's a fair concern. I've seen it happen in other fields. DevOps was this scrappy, practical movement for years, and then the certifications arrived and suddenly there was a "right way" to do DevOps and a lot of the original practitioners felt alienated.
The agentic AI space right now is wonderfully open in a way that professionalization could threaten. You can contribute to the MCP specification by opening a GitHub issue. You can write a server implementation in a weekend and share it. The barrier to entry is basically zero beyond knowing how to code and being willing to read documentation.
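To put some weight behind "a weekend": at the wire level an MCP server is a JSON-RPC 2.0 responder, and the core of one fits in a screen of code. This is a hand-rolled sketch for illustration only (a real server should use one of the official SDKs), using the spec's tools/list and tools/call method names with a made-up echo tool:

```python
import json

# Minimal hand-rolled dispatcher illustrating the JSON-RPC shape of an
# MCP server. Method names (tools/list, tools/call) follow the MCP
# spec; the echo tool and everything else here is a toy example.
TOOLS = {
    "echo": {
        "description": "Return the input text unchanged.",
        "handler": lambda args: args.get("text", ""),
    }
}

def handle(request_json: str) -> str:
    """Dispatch one JSON-RPC request and return the response as JSON."""
    req = json.loads(request_json)
    method, params = req["method"], req.get("params", {})
    if method == "tools/list":
        result = {"tools": [{"name": name, "description": tool["description"]}
                            for name, tool in TOOLS.items()]}
    elif method == "tools/call":
        tool = TOOLS[params["name"]]
        text = tool["handler"](params.get("arguments", {}))
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601,
                                     "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                       "result": result})

# A client calls the echo tool and gets a text content block back.
reply = handle(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                           "params": {"name": "echo",
                                      "arguments": {"text": "hello"}}}))
print(reply)
```

A production server would add the initialize handshake, a stdio or HTTP transport, and input validation, but the distance between this sketch and a shareable server really is a weekend, not a quarter.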
The counterargument, though, is that the lack of professional structure is also a barrier. It's a barrier for hiring managers who can't tell who actually knows what they're doing. It's a barrier for people who want to invest in learning but don't know what's worth learning. It's a barrier for organizations that want to adopt the technology but can't find qualified people. Some gatekeeping, done well, actually opens gates.
That's the paradox of professional standards. A well-designed certification doesn't keep people out. It gives them a path in. The problem is when the certification becomes more about revenue than competence, or when it's designed to protect incumbents rather than welcome newcomers.
The Linux Foundation's track record here is actually pretty good. The CKA exam costs about four hundred dollars, which isn't nothing, but it's not the thousands that some vendor certifications charge. And it's genuinely skills-based. You can't brain-dump your way through it. If the agentic AI certification follows that model, it could actually be a net positive for accessibility.
Let me shift gears slightly and talk about something Daniel mentioned that I think is underappreciated — the whiteboard thing. He described taking a photo of a whiteboard diagram and feeding it to Claude to get architecture feedback. That's not just a neat trick. That's a new mode of professional development that didn't exist before.
It really is. The traditional model of building expertise is you read books, you take courses, you maybe find a mentor. The AI-assisted model is you attempt to build something, you get stuck, you externalize your thinking — whiteboard, notes, a rambling voice memo — and then you get substantive feedback on your actual design. It's learning through critique rather than learning through instruction.
That changes what professionalization looks like. If the primary mode of skill development is iterative design with AI feedback, then the credential that matters isn't "I passed a test." It's "I've built things, and I can show you the design decisions I made and why." Portfolio over certificate.
I think both will matter. The portfolio shows what you can do. The certification shows you understand the standards well enough to collaborate effectively with others. In a field that's all about protocols and interoperability, understanding the standards is important.
Let's talk about where the ideas are actually being exchanged. Daniel asked specifically about that — where are the discussions happening that shape the development of MCP and agentic AI more broadly? I think it's more fragmented than people expect.
It's very fragmented. There's no single watercooler. You've got the MCP GitHub repository for specification-level discussions. You've got the community Discord for implementation questions and server announcements. You've got a few active subreddits — the Claude subreddit has a lot of MCP discussion, and there's an emerging agentic AI subreddit that's still small but high-signal. You've got Twitter, or X, or whatever we're calling it now, where a lot of the researchers and engineers post their hot takes.
Then there's the blogosphere, which has actually had a renaissance around this topic. Anthropic's research blog, obviously. But also independent engineers writing detailed technical posts about their MCP implementations. Some of the best thinking I've seen on agent architecture has been in personal blog posts with maybe fifty readers.
The newsletter ecosystem is worth mentioning too. There are several newsletters now that curate MCP and agentic AI developments weekly. They're a good way to keep up without having to monitor five different platforms. And they often surface those obscure blog posts that would otherwise get missed.
What's interesting to me is that the fragmentation actually serves a purpose. Different platforms host different kinds of discussions. GitHub is for rigorous technical debate about specification details. Discord is for practical troubleshooting and community building. Twitter is for hot takes and announcements. Blogs are for deep dives. The fragmentation isn't a bug — it's a reflection of how multi-layered the professional community already is.
That's actually a sign of maturation, not immaturity. A field that has only one forum is a field that hasn't differentiated yet. The fact that MCP discussions are happening in different modes on different platforms suggests the community is developing internal structure.
Let me ask you something, Herman. If someone like Daniel wanted to get more involved — not just consume the discussions, but contribute to them — where would you point him first?
GitHub, hands down. Find an MCP server implementation you use or care about, look at the open issues, and submit a pull request. It doesn't have to be a big contribution. Fix a typo in the documentation. Add a test case. The barrier to entry for open source contribution is low, and it's the fastest way to build reputation in the community.
Reputation is the currency of professionalization before formal credentials exist. If you've got a track record of useful contributions to MCP servers, that's your credential. It's more legible to the people who matter than any certificate would be.
The second thing I'd recommend is showing up to the community calls. The MCP specification working group has regular video calls that are open to anyone. You can just listen at first. Get a sense of who the key voices are and what the current debates are about. Then eventually you'll have something to contribute.
The third thing, which is underrated — write about what you're building. Even if it's just a thread on the Discord or a short blog post. The act of articulating your design decisions forces you to think more clearly about them. And it creates a public record of your thinking that other people can find and respond to.
That's actually how several of the prominent voices in the MCP community got started. They weren't necessarily the best engineers. They were the ones who wrote clearly about what they were doing and why. Writing is a professional skill in itself, and it compounds.
I want to zoom out for a moment and talk about something Daniel alluded to that I think is important — the shift from "how can I avoid conferences" to "how can I find a good conference to go to." That's a mindset shift from consumer to participant, and it's the threshold of professionalization in any field.
It really is. When you're a consumer of a technology, conferences feel like a chore — bad coffee, windowless rooms, sales pitches disguised as talks. When you're a practitioner, the same conference feels completely different because you're there to find your people. You're not receiving information. You're building relationships.
The quality of a conference for a practitioner is almost entirely about the hallway track. The talks are secondary. What matters is whether the right people are in the room and whether the structure creates opportunities for conversation. The best conferences for agentic AI right now are the ones that understand this and design for it.
The AI Engineer World's Fair does this well, from what I've heard. They have dedicated unconference space, they have long breaks between sessions, they have explicit networking events that aren't just awkward cocktail hours. It's designed for practitioners to find each other.
There's also an emerging circuit of smaller, more focused events. Agent builder meetups. These are harder to discover because they're not heavily marketed, but they're often more valuable than the big conferences because everyone in the room is actually building something.
This is where being in a place like Israel can actually be an advantage for the small-event strategy. Tel Aviv has a dense tech scene. If you can find even a dozen other people who are serious about agentic AI, you can start a regular meetup. You don't need a conference to come to you. You can build the community locally.
That's the grassroots thing Daniel mentioned. The MCP community really is surprisingly open and distributed in a way that rewards local initiative. If there isn't a meetup in your city, start one. Post about it in the Discord. You'll probably find people you didn't know were interested.
Let me pull on another thread Daniel raised — the professional association question. "Can I join? Is there any association for people building with agentic AI?" The short answer is no, there isn't a formal professional association yet. No equivalent of the ACM or the IEEE for agentic AI specifically.
The slightly longer answer is that the Linux Foundation's OAS working group is effectively functioning as a proto-association. It has membership, it has governance, it has working groups, it has regular meetings. The structure is there even if the name and the branding aren't.
I'd argue that the informal communities are serving the social function of a professional association even without the formal structure. The Discord servers, the meetup groups, the conference circuits — these are where people find mentors, share job opportunities, discuss ethical questions, and develop shared norms. The formal association will come, but the community is already doing the work.
There's an interesting question about whether a formal association is even necessary. Some fields professionalize without one. Web development never really had a central professional body, and it matured just fine. The standards bodies and the conference circuit and the open source communities provided enough structure.
Web development is actually a useful comparison because it's also protocol-driven and open-source-native. The W three C sets the standards. Browser vendors implement them. Developers learn through documentation, community, and practice. There's no "certified web developer" credential that anyone takes seriously, and yet the field is enormous and professional.
Maybe agentic AI follows that path. Standards bodies for the protocols, conferences for the community, open source for the reputation building, and portfolios for the credentials. No central association required.
The counterargument is that agentic AI raises safety and reliability questions that web development doesn't. If my website has a bug, someone sees a broken layout. If my agent has a bug, it might authorize a fraudulent transaction or expose sensitive data. The stakes are higher, and that might create demand for more formal professional standards.
It's probably why the Linux Foundation's OAS charter explicitly mentions safety and security. They're not just building standards for interoperability. They're building standards for responsible deployment. That's the kind of thing that eventually requires some form of accountability mechanism, and professional certification is one way to provide that.
Let me talk about something that's been happening quietly but I think is significant. Several universities are starting to offer courses on agentic AI development. Not just machine learning in general, but specifically agent architectures, tool use, multi-agent systems. Stanford had a workshop on this last quarter. MIT has a research group focused on agent safety. The academic interest is growing.
Academia is one of the slowest-moving parts of the professionalization ecosystem, but also one of the most important. When universities start teaching something, it signals that the field has enough stability and depth to support a curriculum. It also creates a pipeline of people with formal training, which in turn creates demand for professional recognition.
The curriculum question is actually fascinating right now because nobody agrees on what should be in it. Is agentic AI a subfield of software engineering? Is it a subfield of AI research? Is it its own thing? Different universities are answering differently, and the answer will shape what the profession looks like in ten years.
My instinct is that it's closer to software engineering than to AI research, at least for the builder community Daniel's talking about. The skills that matter are system design, API integration, error handling, testing, deployment. Those are engineering skills. The fact that the system includes an LLM is almost incidental to the architectural challenges.
I mostly agree, but I think there's a dimension of prompt engineering and agent behavior design that's new. Understanding how to write system prompts that produce reliable agent behavior, understanding failure modes of LLMs in tool-use scenarios, understanding how to evaluate agent performance — these aren't traditional software engineering skills. They're something adjacent.
So maybe the curriculum ends up being a hybrid — software engineering fundamentals plus a new layer of agent-specific design patterns. That would actually make for a coherent professional identity. "Agent engineer" as someone who understands distributed systems and LLM behavior and protocol design.
That job title is starting to appear. I've seen "Agent Engineer" and "AI Agent Developer" in job postings over the past six months. They're still rare, but they exist. The fact that recruiters are creating these categories suggests the labor market is beginning to recognize the specialization.
Which brings us back to Daniel's core concern. He's building expertise in something that doesn't yet have a clear professional identity, and he's looking for the scaffolding — the community, the standards, the career path — that will turn his individual learning into a recognized capability. And the honest answer is that the scaffolding is being built right now, in real time, by the people who are showing up to the working groups and the hackathons and the Discord discussions.
If you're listening to this and thinking "I want in," the door is open. That's the remarkable thing about this moment. The field is still small enough that individual contributions matter. You can shape the standards. You can build the community. You can define the profession. Not in some abstract future — right now.
There's a phrase that gets overused, but I think it actually applies here. The future is unevenly distributed. The professional infrastructure for agentic AI already exists in pockets — in a working group here, a conference there, a Discord server somewhere else. Daniel's question is essentially "where are those pockets," and the answer is that they're findable and joinable.
They're going to grow. The Linux Foundation's involvement is a signal that the institutional money is arriving. Once the standards are formalized and the certifications exist, the professional identity will solidify quickly. The people who were involved early will have a significant advantage — not just in knowledge, but in reputation and network.
The practical advice, if we were to distill this down, is something like — join the MCP GitHub community, attend a conference if you can, start or join a local meetup, write about what you're building, and keep an eye on the Linux Foundation's OAS working group. The profession is being built. You can either watch it happen or help build it.
If you're based in Israel, don't underestimate the local scene. The Tel Aviv tech ecosystem is world-class for infrastructure and devtools. There are almost certainly other people in your area having the same questions and looking for the same community. Sometimes the best conference is the one you organize yourself.
Before we wrap, let me address one thing Daniel mentioned in passing that I think deserves more attention. He said "I get why it's not so reductionist" about the skills versus MCP debate. That's a genuine intellectual shift, and it's worth naming. The instinct to reduce everything to one tool or one approach is strong, especially in tech. Recognizing that MCP and skills serve different purposes and can coexist — that's the kind of nuanced thinking that marks the transition from enthusiast to professional.
It really is. Enthusiasts want the one true way. Professionals understand that different problems require different tools, and the skill is in knowing which to use when. The fact that Daniel arrived at that conclusion — and credited our discussion for pushing him there — is exactly the kind of professional maturation we're talking about.
Now: Hilbert's daily fun fact.
Hilbert: In the nineteen twenties, Ethiopian genna players in the Simpson Desert discovered that the curved wooden sticks used in the game, when polished with a specific resin from local acacia trees, would refract light at precisely the same angle as the morning sun cresting the dunes, creating a brief but reliable optical signal visible for up to three miles across the flat terrain.
I have so many questions, none of which I'm going to ask.
Three miles of stick-based optical signaling in a desert where nobody lives. Truly, the pinnacle of human ingenuity.
Thanks to our producer Hilbert Flumingtop for that contribution to our collective confusion. This has been My Weird Prompts. If you want more episodes, find us at myweirdprompts.com or wherever you get your podcasts. We'll be back soon.