Here's the fastest way to tell if someone actually builds with AI or just talks about it: ask them to explain the difference between MCP and A2A without saying "it depends."
Because right now, the two most important protocols in the agentic AI stack — Anthropic's Model Context Protocol and Google's Agent-to-Agent Protocol — are being treated like rivals in a cage match. Twitter is full of takes. LinkedIn is drowning in hot air. And meanwhile, the builders who actually understand how these protocols fit together are quietly assembling architectures that will eat everyone else's lunch.
The truth is dead simple: MCP is vertical. A2A is horizontal. You need both. MCP connects your agent to the world (tools, data, APIs). A2A connects your agents to each other. One gives your agent hands. The other gives your agents a shared language.
This week, we're going deep. No hand-waving. No "it depends." We're breaking down exactly what each protocol does, where each one shines, where each one falls short, and then — because we're builders, not bloggers — we're walking you through setting up both in a home lab, step by step.
What Is MCP (Model Context Protocol)?
MCP is Anthropic's open standard for connecting LLMs to external tools and data. Announced November 2024. Donated in December 2025 to the Agentic AI Foundation (AAIF), a Linux Foundation body co-founded by Anthropic, Block, and OpenAI. It's now backed by AWS, Google, Microsoft, Cloudflare, and Bloomberg.
The numbers: 97 million monthly SDK downloads. Over 10,000 active servers. First-class client support in Claude, ChatGPT, Cursor, Gemini, Microsoft Copilot, and VS Code.
Think of MCP as USB-C for AI. Before MCP, if you wanted your LLM to talk to GitHub, Slack, a database, and a file system, you needed four separate custom integrations. MCP collapses that into a single protocol. Build one MCP client, and it can talk to any MCP server. Build one MCP server for your tool, and any MCP-compatible AI can use it. The N×M problem becomes N+M.
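The arithmetic behind that claim is worth making concrete. A back-of-the-envelope sketch (the counts are illustrative, not from any survey):

```python
# Illustrative integration counts: some number of AI clients,
# some number of tools/services.
clients, tools = 4, 5

# Without a shared protocol: every client needs a bespoke
# integration with every tool.
custom_integrations = clients * tools

# With MCP: each client implements the protocol once, and each
# tool ships one MCP server.
mcp_integrations = clients + tools

print(custom_integrations)  # 20
print(mcp_integrations)     # 9
```

Add a fifth client or a sixth tool and the gap only widens; the custom-integration count grows multiplicatively while the MCP count grows by one.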
Architecture:

```mermaid
graph LR
    LLM["🤖 LLM / Agent
    (Claude, GPT)"]
    Client["MCP Client
    (built-in)"]
    Server["MCP Server
    (GitHub, Slack,
    your database)"]
    LLM -->|"JSON-RPC"| Client -->|"stdio or HTTP/SSE"| Server
    style LLM fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
    style Client fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
    style Server fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
```
MCP uses JSON-RPC 2.0 and supports two transport modes: stdio (local servers, great for dev) and Streamable HTTP with SSE (remote, production-ready). An MCP server exposes three core primitives:
- Tools — functions the agent can call (e.g., `create_github_issue`, `query_database`)
- Resources — data the agent can read (files, documents, API responses)
- Prompts — reusable prompt templates for common tasks
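On the wire, all of this is plain JSON-RPC 2.0. Here's a sketch of a `tools/call` round trip — the method name comes from the MCP spec, but the tool name, arguments, and result payload below are illustrative:

```python
import json

# JSON-RPC 2.0 request an MCP client sends to invoke a tool.
# "tools/call" is the MCP method; the tool and arguments here
# are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT * FROM projects"},
    },
}

# A successful response carries the tool output as content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": '[{"id": 1, "name": "MCP Lab"}]'}]
    },
}

print(json.dumps(request))
```

The `id` ties the response back to the request, which is what lets a single connection multiplex many in-flight calls.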
The November 2025 spec update was a big deal. It added asynchronous operations, OAuth 2.1 authorization, server identity verification, and a Tasks primitive for long-running workflows. MCP went from "toy protocol for demos" to "something you can actually build production systems on."
What Is A2A (Agent-to-Agent Protocol)?
A2A is Google's open standard for agent-to-agent communication. Launched April 2025 at Google Cloud Next with 50+ partners. Donated to the Linux Foundation in June 2025. Now at version 0.3 with 150+ supporting organizations.
If MCP is how an agent talks to tools, A2A is how agents talk to each other.
The core insight: agents aren't tools. Tools have structured inputs and outputs. Agents are autonomous — they reason, make decisions, and can handle ambiguity. When Agent A needs Agent B to do something, it shouldn't have to reduce Agent B to a function call. It should be able to say, "Hey, figure this out," and let Agent B do its thing.
```mermaid
graph LR
    Client["👤 Client Agent
    (orchestrator or
    user-facing)"]
    Remote["🔧 Remote Agent
    (specialist)"]
    MCP["MCP Servers
    (its tools)"]
    Client -- "Tasks, Messages
    JSON-RPC/HTTP" --> Remote
    Remote -- "Artifacts, Status
    Updates, SSE" --> Client
    Remote --> MCP
    style Client fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
    style Remote fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
    style MCP fill:#0A0A0F,stroke:#8888A0,color:#8888A0,stroke-dasharray: 5 5
```
A2A has four core mechanisms:
- Agent Cards — JSON metadata files (served at `/.well-known/agent.json`) that describe what an agent can do. Think of it as a digital business card.
- Tasks — The core abstraction. Lifecycle: `submitted → working → input-required → completed/failed/canceled`.
- Artifacts — Structured outputs agents produce (documents, data, reports).
- Streaming — Real-time progress via Server-Sent Events.
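To make Agent Cards concrete, here's a minimal sketch of what one might contain. The general shape (name, description, url, capabilities, skills) follows the A2A spec, but treat the exact field set as illustrative and check the current spec version before relying on it — the endpoint URL and skill below are hypothetical:

```python
import json

# A minimal, illustrative Agent Card: the JSON an A2A agent serves
# at /.well-known/agent.json so peer agents can discover it.
agent_card = {
    "name": "research-agent",
    "description": "Finds and summarizes scientific literature",
    "url": "https://agents.example.com/research",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "literature-search",
            "name": "Literature search",
            "description": "Search PubMed and ArXiv for relevant papers",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```

Note what's absent: nothing about the agent's model, framework, or internal state. The card advertises capabilities, and that's all a peer gets — which is exactly the opacity principle described below.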
Key design principle: Agents are opaque to each other. Agent A doesn't know Agent B's internal state. It only knows what Agent B advertises in its Agent Card. You can swap implementations without breaking the protocol.
When to use what (no hedging)
| Dimension | MCP | A2A |
|---|---|---|
| Core Problem | Agent ↔ Tool/Data connectivity | Agent ↔ Agent collaboration |
| Direction | Vertical (agent reaches down to tools) | Horizontal (agents coordinate across) |
| Created By | Anthropic (Nov 2024) | Google (Apr 2025) |
| Governed By | Linux Foundation (AAIF) | Linux Foundation |
| Communication | JSON-RPC over stdio or HTTP/SSE | JSON-RPC over HTTP/SSE + gRPC (v0.3) |
| Discovery | Client knows server config upfront | Agent Cards at /.well-known/agent.json |
| Auth | OAuth 2.1 (Nov 2025 spec) | OpenAPI-compatible auth schemes |
| Best For | One agent → many tools/data | Many agents → shared work |
| Maturity | Production-ready, massive ecosystem | Early but accelerating (v0.3) |
| Ecosystem | 97M monthly SDK downloads, 10K+ servers | 150+ orgs, growing SDKs |
Use MCP when:
- Your agent needs to read a database, call an API, or interact with an external service
- You're building a single-agent system that needs access to many tools
- You want plug-and-play tool connectivity (thousands of servers already exist)
- You need something production-grade right now
Use A2A when:
- You have multiple specialized agents that need to collaborate
- Tasks are long-running and need status tracking (hours, days)
- Agents are built with different frameworks (LangGraph + CrewAI + ADK)
- You're building an enterprise "agent zoo" where discovery matters
Use both when (this is the move):
A user-facing agent orchestrates specialist agents via A2A, and each specialist uses tools and data via MCP. Example: A hiring orchestrator (A2A) coordinates with a sourcing agent, an interview scheduler, and a background check agent — each of which connects to their own databases and APIs via MCP.
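Stripped of all network plumbing, the layering looks like this. A toy sketch with stub classes — none of this is real A2A or MCP SDK code, and every name is illustrative; it only shows who calls whom:

```python
# Toy sketch of the two-layer pattern: the orchestrator delegates
# whole tasks to specialists (the A2A layer, horizontal), and each
# specialist reaches down to its own tools (the MCP layer, vertical).

class McpTool:
    """Stand-in for a tool exposed by an MCP server."""
    def __init__(self, name):
        self.name = name

    def call(self, query):
        return f"{self.name} result for {query!r}"


class SpecialistAgent:
    """Stand-in for a remote A2A agent that owns its MCP connections."""
    def __init__(self, skill, tools):
        self.skill = skill
        self.tools = tools

    def handle_task(self, task):
        # Vertical: the specialist uses its own tools to do the work.
        evidence = [tool.call(task) for tool in self.tools]
        return {"skill": self.skill, "task": task, "evidence": evidence}


class Orchestrator:
    """Stand-in for the user-facing A2A client agent."""
    def __init__(self, specialists):
        self.specialists = specialists

    def run(self, task):
        # Horizontal: delegate the whole task; how each specialist
        # does its job is opaque to the orchestrator.
        return [agent.handle_task(task) for agent in self.specialists]


research = SpecialistAgent("research", [McpTool("pubmed"), McpTool("arxiv")])
analysis = SpecialistAgent("analysis", [McpTool("database")])
orchestrator = Orchestrator([research, analysis])

for artifact in orchestrator.run("GLP-1 trial outcomes"):
    print(artifact["skill"], len(artifact["evidence"]))
```

The design point: the orchestrator never touches PubMed or the database directly. Swap a specialist's internals and nothing upstream changes.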
```mermaid
graph TB
    User["User / Application"]
    Orch["Orchestrator Agent
    (A2A Client)"]
    subgraph A2A["A2A Layer — horizontal"]
        direction LR
        A["Agent A
        (research)"]
        B["Agent B
        (analysis)"]
        C["Agent C
        (writing)"]
    end
    subgraph MCP["MCP Layer — vertical"]
        direction LR
        MA["PubMed
        ArXiv"]
        MB["Database
        Charts"]
        MC["Google Docs
        Email"]
    end
    User --> Orch
    Orch --> A
    Orch --> B
    Orch --> C
    A --> MA
    B --> MB
    C --> MC
    style User fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
    style Orch fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
    style A fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
    style B fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
    style C fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
    style MA fill:#0A0A0F,stroke:#8888A0,color:#8888A0
    style MB fill:#0A0A0F,stroke:#8888A0,color:#8888A0
    style MC fill:#0A0A0F,stroke:#8888A0,color:#8888A0
    style A2A fill:none,stroke:#E94560,color:#E94560,stroke-dasharray: 5 5
    style MCP fill:none,stroke:#8888A0,color:#8888A0,stroke-dasharray: 5 5
```
Because we don't gloss over stuff
MCP's Weaknesses:
- Security is still a real concern. A July 2025 scan of ~2,000 MCP servers found all of them lacked authentication. The spec is improving (OAuth 2.1), but implementation lags.
- Over-permissioning is rampant. Replit's AI agent deleted a production database in July 2025 despite explicit instructions not to — MCP gave it the access, and the guardrails weren't there.
- Tool overload. Too many MCP servers connected to one LLM causes confusion — the model picks the wrong tool or hallucinates tool calls. The #1 complaint in production.
- No native agent-to-agent capability. If you force agent collaboration through MCP, you're squeezing agents into tool-shaped boxes.
A2A's Weaknesses:
- Still early. Version 0.3 is usable but the spec is evolving. Breaking changes expected.
- Smaller ecosystem. Compared to MCP's 10,000+ servers, A2A's tooling is nascent.
- Latency overhead. Cross-agent orchestration adds round trips. For simple workflows, MCP alone is faster.
- Complexity tax. If you only have one agent, A2A is overkill. Don't architecture-astronaut yourself.
- Google-heavy origins. Reference implementations lean on Google ADK and Gemini. Non-Google pathways exist but are less polished.
The honest take: Start with MCP. It's more mature, has a massive ecosystem, and solves the problem 80% of builders face right now. Add A2A when you genuinely need multiple autonomous agents collaborating. For most solo builders, that point is 3-6 months away. For enterprises managing multiple AI vendors, it's now.
Home Lab Workbook
Here's where we get our hands dirty. Three architectures. One home lab. By the end, you'll have a working MCP-only setup AND an A2A+MCP setup, and you'll viscerally understand the difference.
Prerequisites
- Linux, macOS, or Windows with WSL2
- Python 3.11+ installed
- Docker Desktop installed
- VS Code recommended
- API keys: Anthropic (Claude) or Google AI Studio (Gemini, free tier)
- ~2 hours of uninterrupted time
Lab 1: MCP-Only Architecture
Scenario: You're building a personal assistant agent that can manage your files, search the web, and interact with a SQLite database. One agent, multiple tools.
Step 1: Set up the project
```bash
mkdir tng-mcp-lab && cd tng-mcp-lab
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install mcp anthropic httpx
```
Step 2: Create a simple MCP server — `sqlite_server.py`:
```python
import sqlite3
import json

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

server = Server("sqlite-assistant")


def init_db():
    """Create the projects table and seed it with sample rows."""
    conn = sqlite3.connect("lab.db")
    cursor = conn.cursor()
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS projects (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            status TEXT DEFAULT 'active',
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )
    """)
    cursor.execute("INSERT OR IGNORE INTO projects (id, name, status) "
                   "VALUES (1, 'MCP Lab', 'active')")
    cursor.execute("INSERT OR IGNORE INTO projects (id, name, status) "
                   "VALUES (2, 'A2A Experiment', 'planning')")
    conn.commit()
    conn.close()


@server.list_tools()
async def list_tools():
    """Advertise the tools this server exposes to MCP clients."""
    return [
        Tool(
            name="query_database",
            description="Run a read-only SQL query against the projects database",
            inputSchema={
                "type": "object",
                "properties": {
                    "sql": {"type": "string", "description": "SQL SELECT query"}
                },
                "required": ["sql"]
            }
        ),
        Tool(
            name="add_project",
            description="Add a new project to the database",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "status": {"type": "string"}
                },
                "required": ["name"]
            }
        )
    ]


@server.call_tool()
async def call_tool(name: str, arguments: dict):
    """Dispatch a tool call from the client to the right handler."""
    conn = sqlite3.connect("lab.db")
    cursor = conn.cursor()
    if name == "query_database":
        sql = arguments["sql"]
        if not sql.strip().upper().startswith("SELECT"):
            conn.close()
            return [TextContent(type="text", text="Error: Only SELECT queries allowed")]
        cursor.execute(sql)
        rows = cursor.fetchall()
        columns = [desc[0] for desc in cursor.description]
        results = [dict(zip(columns, row)) for row in rows]
        conn.close()
        return [TextContent(type="text", text=json.dumps(results, indent=2))]
    elif name == "add_project":
        cursor.execute(
            "INSERT INTO projects (name, status) VALUES (?, ?)",
            (arguments["name"], arguments.get("status", "active"))
        )
        conn.commit()
        pid = cursor.lastrowid
        conn.close()
        return [TextContent(type="text",
                            text=f"Project '{arguments['name']}' created with ID {pid}")]
    conn.close()
    return [TextContent(type="text", text=f"Error: unknown tool '{name}'")]


async def main():
    init_db()
    async with stdio_server() as (read, write):
        await server.run(read, write, server.create_initialization_options())


if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
The full Lab 1 agent client, Lab 2 (A2A + MCP multi-agent), and Lab 3 (advanced dual-agent structured work channels) contain significantly more code. The complete lab files are available as companion downloads.
Lab 3: Advanced — Dual-Agent Structured Work Channels
The lab that separates builders from bloggers. Structured JSON between agents, natural language only for humans. 83% token savings. Full code included.
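The core mechanic is simple enough to sketch with nothing but the standard library. The payload, rendering function, and character-count proxy below are all illustrative (actual savings depend on your tokenizer and payloads, so treat any fixed percentage as workload-specific):

```python
import json

# Sketch of the structured-work-channel pattern: agents exchange
# compact JSON among themselves, and natural language is rendered
# only at the human boundary.

# What Agent A sends Agent B: terse, machine-shaped.
work_item = {"task": "summarize", "doc_id": "rpt-7",
             "max_words": 120, "status": "completed"}


def render_for_human(item):
    """Expand the same data into prose only when a person reads it."""
    return (
        f"Status update: the '{item['task']}' task you requested on document "
        f"{item['doc_id']} has finished. The summary was kept under "
        f"{item['max_words']} words, and the task status is now "
        f"'{item['status']}'. Let me know if you'd like a longer version."
    )


wire = json.dumps(work_item)        # cheap: travels between agents
prose = render_for_human(work_item)  # expensive: generated once, for humans

# Character counts as a rough proxy for token cost.
print(len(wire), len(prose))
```

Every agent-to-agent hop pays the wire cost; only the final human-facing hop pays the prose cost. That asymmetry is where the savings come from.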
What you'll learn across the three labs
| Lab | Architecture | When to Use |
|---|---|---|
| Lab 1 (MCP only) | One agent → many tools | 80% of use cases. One smart agent that needs data and services. Start here. |
| Lab 2 (A2A + MCP) | Many agents → each with own tools | Distinct specialist agents that collaborate. The orchestrator doesn't care how each works internally. |
| Lab 3 (Advanced) | Structured work channels + human reporting | Token efficiency at scale. Agents exchange structured JSON (cheap), expand to NL only for humans (expensive). The production pattern. |
The progression path for builders
- Week 1: Build MCP servers for your most-used tools
- Week 2-3: Build a single agent that uses those MCP tools effectively
- Month 2: When you hit the ceiling, split into specialists
- Month 2+: Connect specialists via A2A, each keeping its own MCP connections
- Month 3+: Optimize with structured DataPart messaging between agents
The scorecard
| Claim | Hype (1-10) | Reality Check |
|---|---|---|
| "MCP is the USB-C of AI" | 7 | Directionally right, but USB-C didn't have MCP's security gaps. More like USB-C circa 2015. |
| "A2A will replace MCP" | 2 | Dead wrong. Different problems at different layers. A Twitter take, not a real position. |
| "You need A2A right now" | 4 | Only if you're building multi-agent enterprise systems today. Most builders should start MCP-first. |
| "MCP is production-ready" | 7 | The protocol is. Many implementations aren't. Security is the gap. |
| "These protocols = HTTP of agents" | 8 | This one's got legs. Both Linux Foundation-governed, vendor-neutral, backed by every major player. |
| "40% of enterprise apps will have agents by EOY 2026" | 6 | Ambitious but plausible. The protocols exist and the will exists; execution is the bottleneck. |
MCP gives your agents hands. A2A gives them a shared language. The builders who understand both — and know when to reach for each — are the ones building the infrastructure everyone else will run on.
Don't pick sides. Pick the right tool for the layer you're building.
We are the new guard.