Here's the fastest way to tell if someone actually builds with AI or just talks about it: ask them to explain the difference between MCP and A2A without saying "it depends."

Because right now, the two most important protocols in the agentic AI stack — Anthropic's Model Context Protocol and Google's Agent2Agent Protocol — are being treated like rivals in a cage match. Twitter is full of takes. LinkedIn is drowning in hot air. And meanwhile, the builders who actually understand how these protocols fit together are quietly assembling architectures that will eat everyone else's lunch.

The truth is dead simple: MCP is vertical. A2A is horizontal. You need both. MCP connects your agent to the world (tools, data, APIs). A2A connects your agents to each other. One gives your agent hands. The other gives your agents a shared language.

This week, we're going deep. No hand-waving. No "it depends." We're breaking down exactly what each protocol does, where each one shines, where each one falls short, and then — because we're builders, not bloggers — we're walking you through setting up both in a home lab, step by step.


What Is MCP (Model Context Protocol)?

MCP is Anthropic's open standard for connecting LLMs to external tools and data. Announced in November 2024. Donated in December 2025 to the Linux Foundation's Agentic AI Foundation (AAIF), which Anthropic, Block, and OpenAI co-founded. It's now backed by AWS, Google, Microsoft, Cloudflare, and Bloomberg.

The numbers: 97 million monthly SDK downloads. Over 10,000 active servers. First-class client support in Claude, ChatGPT, Cursor, Gemini, Microsoft Copilot, and VS Code.

Think of MCP as USB-C for AI. Before MCP, if you wanted your LLM to talk to GitHub, Slack, a database, and a file system, you needed four separate custom integrations. MCP collapses that into a single protocol. Build one MCP client, and it can talk to any MCP server. Build one MCP server for your tool, and any MCP-compatible AI can use it. The N×M problem becomes N+M: four models talking to six tools used to mean 24 bespoke integrations; with MCP, it's 4 clients plus 6 servers.

Architecture:

graph LR
  LLM["🤖 LLM / Agent<br/>(Claude, GPT)"]
  Client["MCP Client<br/>(built-in)"]
  Server["MCP Server<br/>(GitHub, Slack,<br/>your database)"]
  LLM -->|"JSON-RPC"| Client
  Client -->|"stdio or HTTP/SSE"| Server
  style LLM fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
  style Client fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
  style Server fill:#1A1A2E,stroke:#E94560,color:#E8E8EC

MCP uses JSON-RPC 2.0 and supports two transport modes: stdio (local servers, great for dev) and Streamable HTTP with SSE (remote, production-ready). An MCP server exposes three core primitives:

  1. Tools — functions the agent can call (e.g., create_github_issue, query_database)
  2. Resources — data the agent can read (files, documents, API responses)
  3. Prompts — reusable prompt templates for common tasks
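
Under the hood there's no magic: every tool invocation is a plain JSON-RPC 2.0 message. As a sketch (shapes per the MCP spec, simplified; the tool name matches the lab server we build below), a tools/call request looks like this:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_database",
    "arguments": { "sql": "SELECT name, status FROM projects" }
  }
}

And the server answers with content blocks inside a result:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "[{\"name\": \"MCP Lab\", \"status\": \"active\"}]" }
    ]
  }
}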

The November 2025 spec update was a big deal. It added asynchronous operations, OAuth 2.1 authorization, server identity verification, and a Tasks primitive for long-running workflows. MCP went from "toy protocol for demos" to "something you can actually build production systems on."


What Is A2A (Agent2Agent Protocol)?

A2A is Google's open standard for agent-to-agent communication. Launched April 2025 at Google Cloud Next with 50+ partners. Donated to the Linux Foundation in June 2025. Now at version 0.3 with 150+ supporting organizations.

If MCP is how an agent talks to tools, A2A is how agents talk to each other.

The core insight: agents aren't tools. Tools have structured inputs and outputs. Agents are autonomous — they reason, make decisions, and can handle ambiguity. When Agent A needs Agent B to do something, it shouldn't have to reduce Agent B to a function call. It should be able to say, "Hey, figure this out," and let Agent B do its thing.

graph LR
  Client["👤 Client Agent<br/>(orchestrator or<br/>user-facing)"]
  Remote["🔧 Remote Agent<br/>(specialist)"]
  MCP["MCP Servers<br/>(its tools)"]
  Client -- "Tasks, Messages<br/>JSON-RPC/HTTP" --> Remote
  Remote -- "Artifacts, Status<br/>Updates, SSE" --> Client
  Remote --> MCP
  style Client fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
  style Remote fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
  style MCP fill:#0A0A0F,stroke:#8888A0,color:#8888A0,stroke-dasharray: 5 5

A2A has four core mechanisms:

  1. Agent Cards — JSON metadata files (served at /.well-known/agent.json) that describe what an agent can do. Think of it as a digital business card.
  2. Tasks — The core abstraction. Lifecycle: submitted → working → input-required → completed/failed/canceled.
  3. Artifacts — Structured outputs agents produce (documents, data, reports).
  4. Streaming — Real-time progress via Server-Sent Events.

Key design principle: Agents are opaque to each other. Agent A doesn't know Agent B's internal state. It only knows what Agent B advertises in its Agent Card. You can swap implementations without breaking the protocol.
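
To make "digital business card" concrete, here's an illustrative Agent Card, trimmed to the essentials. Field names follow the A2A spec as of v0.x, but treat it as a sketch; the agent and URL are hypothetical:

{
  "name": "background-check-agent",
  "description": "Runs background checks and returns a structured report",
  "url": "https://agents.example.com/background-check",
  "version": "1.0.0",
  "capabilities": { "streaming": true },
  "defaultInputModes": ["text/plain"],
  "defaultOutputModes": ["application/json"],
  "skills": [
    {
      "id": "run-check",
      "name": "Run background check",
      "description": "Given a candidate name and a consent token, produce a verification report"
    }
  ]
}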


When to use what (no hedging)

| Dimension | MCP | A2A |
| --- | --- | --- |
| Core Problem | Agent ↔ Tool/Data connectivity | Agent ↔ Agent collaboration |
| Direction | Vertical (agent reaches down to tools) | Horizontal (agents coordinate across) |
| Created By | Anthropic (Nov 2024) | Google (Apr 2025) |
| Governed By | Linux Foundation (AAIF) | Linux Foundation |
| Communication | JSON-RPC over stdio or HTTP/SSE | JSON-RPC over HTTP/SSE + gRPC (v0.3) |
| Discovery | Client knows server config upfront | Agent Cards at /.well-known/agent.json |
| Auth | OAuth 2.1 (Nov 2025 spec) | OpenAPI-compatible auth schemes |
| Best For | One agent → many tools/data | Many agents → shared work |
| Maturity | Production-ready, massive ecosystem | Early but accelerating (v0.3) |
| Ecosystem | 97M monthly SDK downloads, 10K+ servers | 150+ orgs, growing SDKs |

Use MCP when:

  • One agent needs hands: files, databases, web search, SaaS APIs
  • You want to plug into an ecosystem of 10,000+ existing servers instead of writing bespoke integrations
  • You're shipping a single smart assistant (this is most builders, most of the time)

Use A2A when:

  • Multiple autonomous agents, possibly from different vendors or frameworks, need to hand work to each other
  • You need discovery (Agent Cards), task lifecycles, and streaming between agents that stay opaque to one another
  • You're coordinating specialists, not calling functions

Use both when (this is the move):

A user-facing agent orchestrates specialist agents via A2A, and each specialist uses tools and data via MCP. Example: A hiring orchestrator (A2A) coordinates with a sourcing agent, an interview scheduler, and a background check agent — each of which connects to their own databases and APIs via MCP.

graph TB
  User["User / Application"]
  Orch["Orchestrator Agent<br/>(A2A Client)"]
  subgraph A2A["A2A Layer — horizontal"]
    direction LR
    A["Agent A<br/>(research)"]
    B["Agent B<br/>(analysis)"]
    C["Agent C<br/>(writing)"]
  end
  subgraph MCP["MCP Layer — vertical"]
    direction LR
    MA["PubMed<br/>ArXiv"]
    MB["Database<br/>Charts"]
    MC["Google Docs<br/>Email"]
  end
  User --> Orch
  Orch --> A
  Orch --> B
  Orch --> C
  A --> MA
  B --> MB
  C --> MC
  style User fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
  style Orch fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
  style A fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
  style B fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
  style C fill:#1A1A2E,stroke:#E94560,color:#E8E8EC
  style MA fill:#0A0A0F,stroke:#8888A0,color:#8888A0
  style MB fill:#0A0A0F,stroke:#8888A0,color:#8888A0
  style MC fill:#0A0A0F,stroke:#8888A0,color:#8888A0
  style A2A fill:none,stroke:#E94560,color:#E94560,stroke-dasharray: 5 5
  style MCP fill:none,stroke:#8888A0,color:#8888A0,stroke-dasharray: 5 5

Because we don't gloss over stuff

MCP's Weaknesses:

  • Security is the gap: the protocol matured faster than its implementations, and too many servers ship with weak auth and over-broad permissions (the scorecard below says as much)
  • stdio is great for local dev but doesn't scale; production means Streamable HTTP plus real OAuth 2.1 authorization
  • It says nothing about agent-to-agent coordination; that's the layer above, by design

A2A's Weaknesses:

  • Young spec (v0.3): still moving, with breaking changes between versions a real possibility
  • The ecosystem is a fraction of MCP's: fewer SDKs, fewer servers, fewer battle-tested deployments
  • It adds operational overhead (discovery, task state, streaming) that a single-agent system simply doesn't need

The honest take: Start with MCP. It's more mature, has a massive ecosystem, and solves the problem 80% of builders face right now. Add A2A when you genuinely need multiple autonomous agents collaborating. For most solo builders, that point is 3-6 months away. For enterprises managing multiple AI vendors, it's now.


Home Lab Workbook

Here's where we get our hands dirty. Three architectures. One home lab. By the end, you'll have a working MCP-only setup AND an A2A+MCP setup, and you'll viscerally understand the difference.

Prerequisites

  • Linux, macOS, or Windows with WSL2
  • Python 3.11+ installed
  • Docker Desktop installed
  • VS Code recommended
  • API keys: Anthropic (Claude) or Google AI Studio (Gemini, free tier)
  • ~2 hours of uninterrupted time

Lab 1: MCP-Only Architecture

Scenario: You're building a personal assistant agent that can manage your files, search the web, and interact with a SQLite database. One agent, multiple tools.

Step 1: Set up the project

mkdir tng-mcp-lab && cd tng-mcp-lab
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install mcp anthropic httpx

Step 2: Create a simple MCP server (sqlite_server.py):

import sqlite3
import json
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

server = Server("sqlite-assistant")

def init_db():
    conn = sqlite3.connect("lab.db")
    cursor = conn.cursor()
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS projects (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            status TEXT DEFAULT 'active',
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )
    """)
    cursor.execute("INSERT OR IGNORE INTO projects (id, name, status) "
                   "VALUES (1, 'MCP Lab', 'active')")
    cursor.execute("INSERT OR IGNORE INTO projects (id, name, status) "
                   "VALUES (2, 'A2A Experiment', 'planning')")
    conn.commit()
    conn.close()

@server.list_tools()
async def list_tools():
    return [
        Tool(
            name="query_database",
            description="Run a read-only SQL query against the projects database",
            inputSchema={
                "type": "object",
                "properties": {
                    "sql": {"type": "string", "description": "SQL SELECT query"}
                },
                "required": ["sql"]
            }
        ),
        Tool(
            name="add_project",
            description="Add a new project to the database",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "status": {"type": "string"}
                },
                "required": ["name"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    conn = sqlite3.connect("lab.db")
    cursor = conn.cursor()

    if name == "query_database":
        sql = arguments["sql"]
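        # Naive read-only guard: blocks obvious writes, not a real security boundary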
        if not sql.strip().upper().startswith("SELECT"):
            return [TextContent(type="text", text="Error: Only SELECT queries allowed")]
        cursor.execute(sql)
        rows = cursor.fetchall()
        columns = [desc[0] for desc in cursor.description]
        results = [dict(zip(columns, row)) for row in rows]
        conn.close()
        return [TextContent(type="text", text=json.dumps(results, indent=2))]

    elif name == "add_project":
        cursor.execute(
            "INSERT INTO projects (name, status) VALUES (?, ?)",
            (arguments["name"], arguments.get("status", "active"))
        )
        conn.commit()
        pid = cursor.lastrowid
        conn.close()
        return [TextContent(type="text",
                text=f"Project '{arguments['name']}' created with ID {pid}")]

    # Unknown tool: close the connection and report, rather than returning None
    else:
        conn.close()
        return [TextContent(type="text", text=f"Error: unknown tool '{name}'")]

async def main():
    init_db()
    async with stdio_server() as (read, write):
        await server.run(read, write, server.create_initialization_options())

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
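
Once the server runs, wire it into a client. Any MCP client that speaks stdio can launch it; for Claude Desktop, that's an entry in claude_desktop_config.json (shape per Anthropic's docs; the path is a placeholder you'll need to adjust):

{
  "mcpServers": {
    "sqlite-assistant": {
      "command": "python",
      "args": ["/absolute/path/to/sqlite_server.py"]
    }
  }
}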

The full Lab 1 agent client, Lab 2 (A2A + MCP multi-agent), and Lab 3 (advanced dual-agent structured work channels) contain significantly more code. The complete lab files are available as companion downloads.


Lab 3: Advanced — Dual-Agent Structured Work Channels

The lab that separates builders from bloggers. Structured JSON between agents, natural language only for humans. 83% token savings. Full code included.

What you'll learn across the three labs

| Lab | Architecture | When to Use |
| --- | --- | --- |
| Lab 1 (MCP only) | One agent → many tools | 80% of use cases. One smart agent that needs data and services. Start here. |
| Lab 2 (A2A + MCP) | Many agents → each with own tools | Distinct specialist agents that collaborate. The orchestrator doesn't care how each works internally. |
| Lab 3 (Advanced) | Structured work channels + human reporting | Token efficiency at scale. Agents exchange structured JSON (cheap), expand to NL only for humans (expensive). The production pattern. |

The progression path for builders

  1. Week 1: Build MCP servers for your most-used tools
  2. Week 2-3: Build a single agent that uses those MCP tools effectively
  3. Month 2: When you hit the ceiling, split into specialists
  4. Month 2+: Connect specialists via A2A, each keeping its own MCP connections
  5. Month 3+: Optimize with structured DataPart messaging between agents (sketched below)
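
What step 5's structured DataPart messaging looks like in practice: agents exchange typed payloads, and natural language gets generated once, at the human boundary. The envelope follows the A2A spec's DataPart; everything inside data is hypothetical:

{
  "kind": "data",
  "data": {
    "task": "candidate_screen",
    "candidate_id": 42,
    "signals": { "background_check": "clear", "references": 3 },
    "recommendation": "advance"
  }
}

A payload like this costs a handful of tokens; the prose version of the same update costs a paragraph. That gap is where Lab 3's savings come from.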

Tools worth knowing

Docker MCP Toolkit — 200+ curated MCP servers, one-click install in Docker Desktop. Fastest way to get MCP running locally. Enable under Settings → Beta Features.

Google ADK to_a2a() — Wrap any existing ADK agent as an A2A server in one line of code. Fastest on-ramp to A2A.

Pydantic AI agent.to_a2a() — Same idea, different framework. Expose any Pydantic AI agent via A2A with one method call. Auto-generates the Agent Card.
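
A minimal sketch, assuming the to_a2a() helper as documented by pydantic-ai (the model id and instructions are placeholders):

from pydantic_ai import Agent

agent = Agent(
    "anthropic:claude-sonnet-4-0",  # placeholder model id
    instructions="You are a research specialist.",
)
app = agent.to_a2a()  # ASGI app with an auto-generated Agent Card
# Serve with any ASGI server, e.g.: uvicorn my_agent:app --port 8000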

A2A Inspector — Official debugging tool for validating A2A agents. Postman for agent communication.

MCP Inspector — Same concept for MCP servers. Test tool calls without a full LLM.
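
For example, to poke at the Lab 1 server without writing a client (the inspector runs via npx, so Node.js is required):

npx @modelcontextprotocol/inspector python sqlite_server.py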


The scorecard

| Claim | Hype (1-10) | Reality Check |
| --- | --- | --- |
| "MCP is the USB-C of AI" | 7 | Directionally right, but USB-C didn't have MCP's security gaps. More like USB-C circa 2015. |
| "A2A will replace MCP" | 2 | Dead wrong. Different problems at different layers. A Twitter take, not a real position. |
| "You need A2A right now" | 4 | Only if you're building multi-agent enterprise systems today. Most builders should start MCP-first. |
| "MCP is production-ready" | 7 | The protocol is. Many implementations aren't. Security is the gap. |
| "These protocols = HTTP of agents" | 8 | This one's got legs. Both Linux Foundation-governed, vendor-neutral, backed by every major player. |
| "40% of enterprise apps will have agents by EOY 2026" | 6 | Ambitious but plausible. Protocols exist. Will exists. Execution is the bottleneck. |

MCP gives your agents hands. A2A gives them a shared language. The builders who understand both — and know when to reach for each — are the ones building the infrastructure everyone else will run on.

Don't pick sides. Pick the right tool for the layer you're building.

We are the new guard.