The Nodes Have Judgment Now

What 30 years of helping build the Internet taught me about the multi-agent AI revolution.

I’ve spent my career working with computing architectures that swing like a pendulum between centralization and distribution, and every swing taught me the same lesson.

From mainframes to client-server. From monolithic apps to microservices. From on-prem to cloud to serverless. Each cycle, we rediscover the same hard problems: fault tolerance, consistency, observability, coordination overhead. Always at a higher level of abstraction.

It’s happening again. And I’m convinced this is the most consequential cycle yet.

The monolithic LLM era was our mainframe moment. GPT-4, Claude, Gemini as all-purpose, centralized reasoning engines: powerful, expensive, bottlenecked by a single context window. Now, just as happened with every previous architectural shift, we are decomposing the monolith into specialized, cooperating components.

The difference?

The nodes have judgment now.

The Pattern Is Unmistakable

If you’ve built distributed systems, the current multi-agent landscape looks eerily familiar:

| Era | Architecture | What Failed |
| --- | --- | --- |
| 1970s | Mainframe | Single point of failure, couldn’t scale |
| 1990s | Client-Server | Tight coupling, cascading failures |
| 2010s | Microservices | Distributed complexity, partial failure |
| 2024 | Monolithic LLM | Context limits, hallucination, no specialization |
| 2026 | Multi-Agent | Coordination overhead, semantic failures. But genuine emergent intelligence. |

Every transition followed the same script: start with a powerful centralized system, hit its scaling limits, decompose into specialized distributed components, then spend years solving the coordination problems that decomposition creates.

At Rackspace, I watched this play out in real time. We built managed hosting for monolithic applications, then scrambled when the world went microservices. We co-launched OpenStack because the old models couldn’t keep pace with demand for elastic, distributed infrastructure. We helped start the Open Compute Project because even the hardware had to be rethought for scale. At Cloudflare, I saw serverless and edge computing push workloads to 300+ cities worldwide.

Each transition was painful, transformative, and ultimately inevitable. The current shift from monolithic LLMs to multi-agent systems follows the same script, but at a compressed timescale.

The Orchestration Wars Have Begun

The numbers tell the story. LangChain’s survey of 1,340 practitioners found 57% now have agents in production. Gartner predicts 40% of enterprise applications will feature AI agents by year-end, up from under 5% in 2025. They also reported a 1,445% surge in multi-agent system inquiries from Q1 2024 to Q2 2025.

The real signal, though, is framework convergence. When every major cloud provider ships production-grade agent infrastructure in the same 12-month window, you’re not watching experimentation. You’re watching an architectural phase change.

Anthropic launched Claude Agent Teams. Microsoft unified AutoGen and Semantic Kernel. OpenAI productionized its Agents SDK. Google open-sourced the Agent Development Kit. AWS shipped Strands Agents and Bedrock AgentCore. In December 2025, Anthropic, OpenAI, and Block co-founded the Agentic AI Foundation under the Linux Foundation, with AWS, Google, Microsoft, Bloomberg, and Cloudflare as supporting members, governing MCP, A2A, and AGENTS.md as shared standards.

This is like watching HTTP, TCP/IP, and DNS coalesce in the early Internet. The experimental-framework era is over. The orchestration wars have begun.

Field Notes From the Front Lines

I started exAgentica because I kept seeing the same conversation: smart people treating multi-agent AI as a prompt engineering problem when it’s actually a distributed systems problem. The failure modes are familiar to anyone who lived through the microservices revolution. The solutions are too, if you know where to look.
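One concrete example of that borrowing: an agent call is just a flaky remote call, so the classic distributed-systems safeguard applies directly: retry with exponential backoff and jitter instead of assuming the call succeeds. A minimal sketch, where `with_backoff`, `AgentTimeout`, and the stand-in agent are all hypothetical names of my own, not any framework’s API:

```python
import random
import time

class AgentTimeout(Exception):
    """Raised when a (hypothetical) agent call exceeds its deadline."""

def with_backoff(call, *, retries=3, base_delay=0.5):
    """Retry a flaky remote call with exponential backoff plus jitter,
    the same pattern used for any unreliable RPC in a distributed system."""
    for attempt in range(retries + 1):
        try:
            return call()
        except AgentTimeout:
            if attempt == retries:
                raise  # retry budget exhausted; surface the failure
            # Wait base_delay * 2^attempt, with jitter to avoid
            # synchronized retry storms across many callers.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# Demo with a stand-in agent that times out twice, then succeeds.
calls = {"n": 0}
def flaky_agent():
    calls["n"] += 1
    if calls["n"] < 3:
        raise AgentTimeout("no response")
    return "summary: ok"

print(with_backoff(flaky_agent, base_delay=0.01))  # prints "summary: ok"
```

Nothing here is agent-specific, which is the point: the coordination layer around agents is ordinary distributed-systems engineering.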

This blog is my attempt to close that gap. Part history lesson, part war story, part map for what comes next. The infrastructure parallels run deeper than most people writing about AI have had the chance to live through, and I think that perspective is worth something.

If you’re building with agents, deploying them, or making bets on where this goes, this is written for you.

That was true of the Internet, where the value was always in the connections, not the components. It’s true of multi-agent AI. I spent my career helping build that network. Now I’m fascinated by what happens when the nodes can think.

The judgment is in the nodes. The intelligence is in the network.

Next: the infrastructure patterns that directly apply to multi-agent AI, and where the analogy breaks down in ways that will get you in trouble.