What Is the A2A Protocol?
Agent-to-Agent communication explained — how AI agents talk to each other
Last updated: March 2026
The A2A (Agent-to-Agent) protocol is an open standard created by Google that lets AI agents communicate with each other to complete complex tasks. Where MCP connects AI models to tools (like a USB port), A2A connects AI agents to other AI agents (like email). With 50+ founding partners including Salesforce, SAP, and Atlassian, A2A is becoming the standard for multi-agent coordination.
In April 2025, Google introduced the Agent-to-Agent (A2A) protocol alongside an alliance of over 50 technology companies. The premise was straightforward: as AI agents become more capable and specialised, they need a standard way to find each other, negotiate capabilities, delegate tasks, and exchange results. A2A is that standard.
Before A2A, multi-agent systems were proprietary. If you built an agent with LangChain and wanted it to talk to an agent built with CrewAI, you had to write custom integration code. Every framework had its own communication protocol, its own task format, its own way of describing agent capabilities. A2A replaces all of that with a single, open, framework-agnostic standard.
This guide covers everything you need to understand and implement A2A: what it is, why it exists, how it compares to MCP, the full protocol specification, the AgentCard format, the task lifecycle, complete implementation code, and where the ecosystem is headed in 2026.
What A2A Is — In Plain English
A2A is a communication protocol for AI agents. It defines how one AI agent discovers another, understands what it can do, sends it a task, and receives a result. That is the entire scope of the protocol.
The best analogy is email. Before email, if two people in different organisations wanted to collaborate, they needed a shared system or a physical messenger. Email standardised the format (headers, body, attachments), the transport (SMTP), and the discovery (MX records, DNS). A2A does the same thing for AI agents.
With A2A:
- Discovery: An agent publishes an AgentCard at .well-known/agent.json describing its skills, capabilities, and endpoint. Other agents find it through registries or direct URL.
- Negotiation: The requesting agent reads the AgentCard to understand what the remote agent can do, what input formats it accepts, and what authentication is required.
- Task delegation: The requesting agent sends a JSON-RPC 2.0 request to the remote agent's endpoint, containing a task with a message. The message includes text, files, or structured data.
- Execution: The remote agent processes the task. It may complete it immediately, ask for more information, or stream partial results.
- Result: The remote agent returns artifacts — the output of the task, which can be text, data, images, or files.
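The whole flow above reduces to two plain HTTP operations: a GET for discovery and a POST for delegation. Here is a sketch of the requester side in TypeScript; the helper names (`discoverAgent`, `buildTaskRequest`) are illustrative, not part of the A2A spec.

```typescript
// Sketch of the requester side: discovery + task delegation.
// Helper names are illustrative, not defined by the A2A spec.

interface AgentCard {
  name: string;
  url: string;
  skills: { id: string; description: string; tags: string[] }[];
}

// Discovery: fetch the AgentCard from the well-known path.
async function discoverAgent(origin: string): Promise<AgentCard> {
  const res = await fetch(`${origin}/.well-known/agent.json`);
  if (!res.ok) throw new Error(`No AgentCard at ${origin}`);
  return res.json();
}

// Task delegation: build the JSON-RPC 2.0 envelope for tasks/send.
function buildTaskRequest(taskId: string, text: string, rpcId = 1) {
  return {
    jsonrpc: "2.0",
    id: rpcId,
    method: "tasks/send",
    params: {
      id: taskId,
      message: { role: "user", parts: [{ type: "text", text }] },
    },
  };
}
```

The envelope from `buildTaskRequest` is what gets POSTed to the `url` field of the discovered AgentCard.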
A2A is designed to be opaque. The requesting agent does not need to know how the remote agent works internally. It might be powered by GPT-4, Gemini, Claude, or a custom model. It might use LangChain, CrewAI, AutoGen, or bare code. None of that matters. A2A only defines the interface between agents, not the implementation.
Why A2A Exists
Three convergent trends created the need for A2A:
Agents are becoming specialised
The monolithic "one AI does everything" model is giving way to specialised agents. A code review agent is better at code review than a general-purpose assistant. A legal compliance agent knows contract law better than ChatGPT. A financial analysis agent can process SEC filings faster than any generalist. As agents specialise, they need to collaborate — a project management agent might delegate code review to a code agent, legal review to a legal agent, and status updates to a communications agent.
Agents are built on different frameworks
The AI agent ecosystem is fragmented. Google has Agent Development Kit (ADK), Microsoft has AutoGen, the open-source community has LangChain, CrewAI, LlamaIndex, and dozens more. Enterprise vendors have their own agent platforms (Salesforce Agentforce, ServiceNow AI Agents, SAP Joule). Without a shared protocol, agents from different frameworks cannot communicate. A2A is the Esperanto that lets them all talk to each other.
The web needs an agent layer
The current web is built for human users navigating between pages. The emerging agentic web adds a parallel layer where AI agents navigate between services. Just as HTTP standardised how browsers talk to servers, A2A standardises how agents talk to agents. Without it, every agent-to-agent integration is a bespoke API integration — expensive, fragile, and impossible to scale.
MCP vs A2A — The Complete Comparison
The most common question about A2A is how it relates to MCP (Model Context Protocol). The short answer: they solve different problems and are designed to work together. The longer answer requires understanding the fundamental difference between tools and agents.
MCP connects models to tools
MCP is like a USB port for AI. It lets an AI model connect to external tools and use them: query a database, send an email, read a file, call an API. The interaction is synchronous and function-like: the model calls a tool with inputs and gets back outputs. The tool is not intelligent — it does not make decisions, it does not have memory, it does not ask clarifying questions. It executes a defined function and returns a result.
A2A connects agents to agents
A2A is like email for AI. It lets an AI agent communicate with another AI agent. The remote agent is intelligent — it interprets the task, makes decisions about how to complete it, may ask for clarification, and produces a thoughtful response. The interaction can be asynchronous, multi-turn, and long-running. An A2A task might take seconds or days.
| Dimension | MCP (Model Context Protocol) | A2A (Agent-to-Agent) |
|---|---|---|
| Analogy | USB port | Email |
| Connects | Model to tool | Agent to agent |
| Remote side | Deterministic function | Intelligent agent |
| Interaction | Synchronous (call/response) | Async, multi-turn, streaming |
| Intelligence | No intelligence on the tool side | Full AI on both sides |
| State | Stateless per call | Stateful task lifecycle |
| Discovery | mcp.json manifest | .well-known/agent.json (AgentCard) |
| Protocol | HTTP + JSON (or stdio) | JSON-RPC 2.0 over HTTP |
| Created by | Anthropic | Google |
| Use case | Access data, execute actions | Delegate complex tasks, collaborate |
| Example | "Search the database for X" | "Research this topic and write a report" |
| Negotiation | None (fixed schema) | Input-required state for clarification |
| Multi-modal | Limited | Text, images, files, structured data |
How they work together
In a production multi-agent system, MCP and A2A serve different layers of the architecture:
Internal connections (MCP): Each agent uses MCP to connect to its own tools. A travel agent uses MCP to access its flight search API, hotel database, and payment processor. A coding agent uses MCP to read files, run tests, and commit code.
External connections (A2A): Agents use A2A to communicate with each other. The travel agent uses A2A to delegate hotel search to a hotel-specialist agent, car rental to a car agent, and itinerary formatting to a content agent.
This separation is by design. Google and Anthropic coordinated on the protocol boundaries. MCP handles the "vertical" connection (agent to its tools), A2A handles the "horizontal" connection (agent to agent). Together, they form the complete communication layer of the agentic web.
A2A Founding Partners
A2A launched with an unusually broad coalition of supporters. The 50+ founding partners span enterprise software, consulting, AI infrastructure, and developer tools, including Salesforce, SAP, Atlassian, MongoDB, ServiceNow, Workday, Deloitte, Accenture, McKinsey, LangChain, Replit, and UiPath.
The breadth of this coalition is significant. When Salesforce, SAP, and ServiceNow — who collectively serve most of the enterprise market — all commit to the same agent communication standard, it becomes the de facto standard. Enterprise buyers will demand A2A compatibility because their existing vendors support it. This is how standards win: not through technical elegance alone, but through ecosystem adoption.
Notably, Anthropic (creators of Claude and MCP) is not a founding partner, but this is not adversarial. Google and Anthropic have publicly acknowledged that MCP and A2A are complementary. Anthropic has indicated support for agent-to-agent interoperability. The practical expectation is that Claude will support A2A communication in the same way it already supports MCP tool calling.
The AgentCard — A Business Card for AI Agents
The AgentCard is the foundation of A2A discovery. It is a JSON file served at .well-known/agent.json that tells other agents everything they need to know to interact with your agent. Without an AgentCard, your agent is invisible to the A2A network.
Here is a complete AgentCard example:
{
"name": "p0stman AI Agent",
"description": "AI product studio agent. Can answer questions about AI development services, case studies, agentic web implementation, and project enquiries.",
"url": "https://p0stman.com/api/agent",
"provider": {
"organization": "p0stman",
"url": "https://p0stman.com"
},
"version": "1.0.0",
"capabilities": {
"streaming": false,
"pushNotifications": false,
"stateTransitionHistory": false
},
"authentication": {
"schemes": ["bearer"]
},
"defaultInputModes": ["text"],
"defaultOutputModes": ["text"],
"skills": [
{
"id": "service-enquiry",
"name": "Service Enquiry",
"description": "Answer questions about p0stman's AI development services, pricing, process, and capabilities",
"tags": ["ai", "development", "agency", "consulting"],
"examples": [
"What services does p0stman offer?",
"Can you build an AI agent for my business?",
"What does fractional AI leadership mean?"
]
},
{
"id": "agentic-web-guidance",
"name": "Agentic Web Guidance",
"description": "Provide guidance on implementing the agentic web stack: MCP servers, A2A endpoints, llms.txt, agent discovery",
"tags": ["agentic-web", "mcp", "a2a", "llms-txt"],
"examples": [
"How do I make my website visible to AI agents?",
"What is an MCP server and how do I build one?",
"Explain the A2A protocol"
]
},
{
"id": "case-study-lookup",
"name": "Case Study Lookup",
"description": "Share relevant case studies and portfolio examples based on the requester's industry or use case",
"tags": ["portfolio", "case-studies", "examples"],
"examples": [
"Show me examples of AI agent projects",
"Do you have experience with SaaS platforms?",
"What have you built for the marine industry?"
]
}
]
}
Every field in the AgentCard serves a specific purpose in the discovery and negotiation process:
| Field | Purpose | Example |
|---|---|---|
| name | Human-readable agent name | "p0stman AI Agent" |
| description | What the agent does (for other agents to evaluate) | "AI product studio agent..." |
| url | The A2A endpoint where tasks are sent | "https://p0stman.com/api/agent" |
| provider | Organisation that operates the agent | {"organization": "p0stman"} |
| version | AgentCard version for compatibility | "1.0.0" |
| capabilities | Protocol features supported | streaming, pushNotifications |
| authentication | How to authenticate requests | bearer, oauth2 |
| defaultInputModes | What the agent accepts | ["text"], ["text", "image"] |
| defaultOutputModes | What the agent returns | ["text"], ["text", "file"] |
| skills | Specific things the agent can do | Array of skill objects |
The skills array is the most important part of the AgentCard. Each skill has an ID, name, description, tags, and example prompts. When a requesting agent evaluates whether your agent can help with a task, it matches the task description against your skill descriptions and tags. Clear, specific skill descriptions lead to better matches and more relevant task delegations.
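The spec does not mandate how a requester matches tasks to skills, but a simple keyword-overlap ranking illustrates why clear descriptions and tags matter. This scoring function is one illustrative possibility, not a prescribed algorithm:

```typescript
// Illustrative skill matching: rank an AgentCard's skills against a
// task description by counting tag and description keyword overlap.
// The A2A spec does not prescribe a matching algorithm.

interface Skill {
  id: string;
  description: string;
  tags: string[];
}

function rankSkills(taskText: string, skills: Skill[]): Skill[] {
  const words = new Set(taskText.toLowerCase().split(/\W+/));
  const score = (s: Skill) =>
    // Tag hits are weighted higher than description-word hits.
    s.tags.filter((t) => words.has(t.toLowerCase())).length * 2 +
    s.description
      .toLowerCase()
      .split(/\W+/)
      .filter((w) => w.length > 3 && words.has(w)).length;
  return [...skills].sort((a, b) => score(b) - score(a));
}
```

A skill tagged `["mcp", "a2a"]` will rank first for a task mentioning "MCP server", which is exactly the behaviour specific tags buy you.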
The A2A Task Lifecycle
An A2A task moves through a defined lifecycle with well-specified states. Understanding this lifecycle is essential for implementing a robust A2A endpoint.
Task states
| State | Meaning | Transitions to |
|---|---|---|
| submitted | Task received and acknowledged | working, failed, canceled |
| working | Agent is actively processing the task | completed, input-required, failed, canceled |
| input-required | Agent needs more information from the requester | working (after input received), canceled |
| completed | Task finished successfully, artifacts available | (terminal) |
| failed | Task could not be completed | (terminal) |
| canceled | Task was abandoned by either party | (terminal) |
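The table above can be expressed directly as a transition map, which is a convenient way to validate state changes server-side before persisting them. A minimal sketch:

```typescript
// The task-state table expressed as a transition map. Terminal states
// (completed, failed, canceled) allow no further transitions.
type TaskState =
  | "submitted" | "working" | "input-required"
  | "completed" | "failed" | "canceled";

const TRANSITIONS: Record<TaskState, TaskState[]> = {
  submitted: ["working", "failed", "canceled"],
  working: ["completed", "input-required", "failed", "canceled"],
  "input-required": ["working", "canceled"],
  completed: [], // terminal
  failed: [],    // terminal
  canceled: [],  // terminal
};

function canTransition(from: TaskState, to: TaskState): boolean {
  return TRANSITIONS[from].includes(to);
}
```

Rejecting invalid transitions (for example, reopening a `completed` task) keeps your task store consistent with the lifecycle other agents expect.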
Sending a task
Tasks are sent via HTTP POST to the A2A endpoint using JSON-RPC 2.0 format:
POST https://p0stman.com/api/agent
Content-Type: application/json
{
"jsonrpc": "2.0",
"id": 1,
"method": "tasks/send",
"params": {
"id": "task-abc-123",
"message": {
"role": "user",
"parts": [
{
"type": "text",
"text": "We need an AI agent built for our customer service team. We handle 500 tickets per day across email and chat. Can you help, and what would the timeline and cost look like?"
}
]
}
}
}
Receiving a response
The response includes the task status and any artifacts (results):
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"id": "task-abc-123",
"status": {
"state": "completed"
},
"artifacts": [
{
"parts": [
{
"type": "text",
"text": "Yes, we build AI customer service agents. For 500 tickets/day across email and chat, I'd recommend a multi-channel agent with intent classification, automated responses for common queries, and human handoff for complex issues.\n\nTimeline: 4-6 weeks from kickoff to production.\nInvestment: £12,000-£18,000 depending on integration complexity.\n\nWe've built similar systems — see our case studies at p0stman.com/case-studies. Want to schedule a discovery call? Contact us at hello@p0stman.com or through p0stman.com/contact."
}
]
}
]
}
}
Multi-turn conversations (input-required)
Not every task can be completed in a single exchange. The input-required state enables multi-turn conversations:
// Agent's response when it needs more information:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"id": "task-abc-123",
"status": {
"state": "input-required"
},
"artifacts": [
{
"parts": [
{
"type": "text",
"text": "I can help with that. To give you an accurate estimate, I need to know: 1) What channels (email, live chat, WhatsApp, social)? 2) Do you need multilingual support? 3) What CRM or ticketing system are you using currently?"
}
]
}
]
}
}
// Requester sends follow-up on the same task ID:
{
"jsonrpc": "2.0",
"id": 2,
"method": "tasks/send",
"params": {
"id": "task-abc-123",
"message": {
"role": "user",
"parts": [
{
"type": "text",
"text": "Email and live chat only. English and Spanish. We use Zendesk."
}
]
}
}
}
The multi-turn capability is what makes A2A fundamentally different from a simple API call. The agents are having a conversation, with each side contributing context and intelligence. This maps naturally to how human collaboration works — a request, a clarification, and then a response.
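A requester-side loop for this conversation pattern might look like the following sketch. The transport and the clarification source are injected (`send` and `getInput` are placeholders for your HTTP client and your user interface or LLM), so only the lifecycle logic is shown:

```typescript
// Requester-side multi-turn loop. `send` posts a tasks/send request and
// returns the task result; `getInput` supplies the clarification. Both
// are placeholders for your own transport and UI.

interface TaskResult {
  status: { state: string };
  artifacts?: { parts: { type: string; text?: string }[] }[];
}

async function runTask(
  send: (taskId: string, text: string) => Promise<TaskResult>,
  getInput: (question: string) => Promise<string>,
  taskId: string,
  initialText: string,
): Promise<TaskResult> {
  let result = await send(taskId, initialText);
  // Reuse the same task ID across turns so the remote agent keeps context.
  while (result.status.state === "input-required") {
    const question =
      result.artifacts?.[0]?.parts.find((p) => p.type === "text")?.text ?? "";
    result = await send(taskId, await getInput(question));
  }
  return result;
}
```

The loop terminates on any state other than `input-required`, so `completed`, `failed`, and `canceled` all fall through to the caller.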
Streaming and push notifications
For long-running tasks, A2A supports two additional patterns:
Streaming (via Server-Sent Events) allows the agent to send partial results as it works. For example, a research agent might stream intermediate findings before the final report is complete. The requesting agent can display these to the user in real-time.
Push notifications allow agents to notify the requester when a task state changes. This is useful for tasks that take hours or days — instead of polling the endpoint, the requesting agent receives a webhook when the task is complete.
Both capabilities are declared in the AgentCard's capabilities object. If an agent doesn't support streaming or push notifications, the requester falls back to synchronous request/response.
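When an agent does declare `streaming: true`, partial results typically arrive as Server-Sent Events. The sketch below handles only the SSE framing (splitting events and extracting `data:` payloads); the payload shape follows the task status and artifact structures shown earlier:

```typescript
// Minimal Server-Sent Events framing parser: extracts the JSON payload
// of each `data:` event from a raw SSE chunk. Framing only — the
// payload shape follows the task/artifact structures shown earlier.
function parseSseEvents(chunk: string): unknown[] {
  return chunk
    .split("\n\n") // events are separated by a blank line
    .flatMap((event) =>
      event
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice(5).trim()),
    )
    .filter((data) => data.length > 0)
    .map((data) => JSON.parse(data));
}
```

A consumer would feed chunks from the response body through this parser and update the task state as each event arrives.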
Building an A2A Endpoint in Next.js
A complete A2A endpoint needs three HTTP handlers: GET (return the AgentCard for discovery), OPTIONS (CORS preflight for cross-origin agents), and POST (process tasks). Here is a working implementation in TypeScript for the Next.js App Router; add the authentication, rate limiting, and logging from the security section before exposing it in production.
// app/api/agent/route.ts
import { NextRequest, NextResponse } from "next/server";
// Your AgentCard — the identity of your agent
const AGENT_CARD = {
name: "Your Agent Name",
description: "What your agent does",
url: "https://yourdomain.com/api/agent",
provider: {
organization: "Your Org",
url: "https://yourdomain.com"
},
version: "1.0.0",
capabilities: {
streaming: false,
pushNotifications: false
},
authentication: {
schemes: ["bearer"]
},
defaultInputModes: ["text"],
defaultOutputModes: ["text"],
skills: [
{
id: "general-enquiry",
name: "General Enquiry",
description: "Answer questions about the business",
tags: ["enquiry", "info"],
examples: ["What do you do?", "Tell me about your services"]
}
]
};
// CORS headers for cross-origin agent requests
const corsHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET, POST, OPTIONS",
"Access-Control-Allow-Headers": "Content-Type, Authorization",
};
// GET: Return the AgentCard for discovery
export async function GET() {
return NextResponse.json(AGENT_CARD, { headers: corsHeaders });
}
// OPTIONS: CORS preflight
export async function OPTIONS() {
return new NextResponse(null, { status: 204, headers: corsHeaders });
}
// POST: Process A2A tasks
export async function POST(req: NextRequest) {
try {
const body = await req.json();
// Validate JSON-RPC 2.0 format
if (body.jsonrpc !== "2.0" || !body.method || !body.params) {
return NextResponse.json({
jsonrpc: "2.0",
id: body.id || null,
error: { code: -32600, message: "Invalid request" }
}, { status: 400, headers: corsHeaders });
}
// Handle tasks/send method
if (body.method === "tasks/send") {
const { id: taskId, message } = body.params;
const taskText = message?.parts
?.filter((p: any) => p.type === "text")
.map((p: any) => p.text)
.join("\n") || "";
// Process the task with your AI model
const responseText = await processTask(taskText);
return NextResponse.json({
jsonrpc: "2.0",
id: body.id,
result: {
id: taskId,
status: { state: "completed" },
artifacts: [{
parts: [{ type: "text", text: responseText }]
}]
}
}, { headers: corsHeaders });
}
// Unknown method
return NextResponse.json({
jsonrpc: "2.0",
id: body.id,
error: { code: -32601, message: "Method not found" }
}, { status: 404, headers: corsHeaders });
} catch (error) {
return NextResponse.json({
jsonrpc: "2.0",
id: null,
error: { code: -32603, message: "Internal error" }
}, { status: 500, headers: corsHeaders });
}
}
// Your task processing logic (replace with your AI model)
async function processTask(taskText: string): Promise<string> {
// Option 1: Call an LLM API (Gemini, OpenAI, Claude)
// Option 2: Use a RAG pipeline with your business knowledge
// Option 3: Simple pattern matching for common queries
// Example using Gemini:
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${process.env.GEMINI_API_KEY}`,
{
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
systemInstruction: {
parts: [{
text: `You are an AI agent for [Your Business].
Answer questions about services, pricing, and capabilities.
Be helpful, specific, and professional.`
}]
},
contents: [{
parts: [{ text: taskText }]
}]
})
}
);
const data = await response.json();
return data.candidates?.[0]?.content?.parts?.[0]?.text
|| "I apologize, I could not process that request.";
}
This implementation handles the three core A2A operations in under 120 lines. The processTask function is where you plug in your AI model and business logic. Everything else is protocol handling.
To complete the setup, place your AgentCard at the well-known path. In Next.js, create public/.well-known/agent.json with the same JSON as the AGENT_CARD constant. The GET handler on your /api/agent route also returns it for direct access, but the well-known path is the standard discovery location.
Agent Discovery via .well-known/agent.json
The .well-known/ directory is a web standard (RFC 8615) for machine-readable metadata. You may know it from .well-known/apple-app-site-association (iOS universal links) or .well-known/openid-configuration (OAuth discovery). A2A uses the same convention.
When an agent or agent registry wants to discover your agent, it fetches:
GET https://yourdomain.com/.well-known/agent.json
The response must be valid JSON with a Content-Type: application/json header. For Next.js projects, place the file at public/.well-known/agent.json. You may also need to update your middleware or next.config to ensure the .well-known path is not blocked by authentication middleware or rewrite rules.
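Before registering anywhere, it is worth sanity-checking that the card you serve actually contains the fields other agents will look for. This validator mirrors the example card earlier in this guide; treat the field list as a baseline, not the authoritative schema:

```typescript
// Sanity-check an AgentCard object before publishing it. The required
// field list mirrors the example card in this guide — consult the A2A
// spec for the authoritative schema.
function validateAgentCard(card: Record<string, unknown>): string[] {
  const problems: string[] = [];
  for (const field of ["name", "description", "url", "version", "skills"]) {
    if (!(field in card)) problems.push(`missing field: ${field}`);
  }
  if (Array.isArray(card.skills)) {
    (card.skills as any[]).forEach((s, i) => {
      if (!s.id || !s.description)
        problems.push(`skill ${i} needs id and description`);
    });
  } else if ("skills" in card) {
    problems.push("skills must be an array");
  }
  return problems;
}
```

Run it against the JSON fetched from your deployed `.well-known/agent.json`, not just the source file, to catch middleware or rewrite problems.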
Agent registries: As of March 2026, several A2A agent registries have launched, including a2aregistry.org. These registries crawl .well-known/agent.json files and index agents by skill, industry, and capability. Registering your agent in these directories increases discoverability by other agents, similar to how submitting your sitemap to Google increases search visibility.
A2A Security: Authentication, Authorization, and Trust
Security in A2A is critical because you are exposing an AI agent that can process arbitrary requests and potentially execute actions. Without proper security, your A2A endpoint is an open door for abuse.
Authentication
The AgentCard declares which authentication schemes your endpoint requires. The most common options:
- Bearer token: The requesting agent includes an API key in the Authorization: Bearer &lt;token&gt; header. Simple to implement, suitable for server-to-server communication.
- OAuth 2.0: The requesting agent obtains an access token through the OAuth flow. More complex but better for scenarios where different agents need different permission levels.
- None (public): No authentication required. Suitable for agents that only provide public information (like answering "What does your company do?"). Risky for agents that can execute actions.
A practical approach is to offer a hybrid: allow unauthenticated access for read-only, informational skills, and require authentication for skills that trigger actions (submissions, bookings, data access).
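That hybrid policy is straightforward to express as a check before task processing. The skill IDs and the in-memory token set below are illustrative; in practice the token check would hit your key store:

```typescript
// Hybrid access policy sketch: public skills answer unauthenticated
// requests, action skills require a valid bearer token. Skill IDs and
// the token set are illustrative.
const PUBLIC_SKILLS = new Set(["service-enquiry", "case-study-lookup"]);

function isAllowed(
  skillId: string,
  authHeader: string | null,
  validTokens: Set<string>,
): boolean {
  if (PUBLIC_SKILLS.has(skillId)) return true; // read-only: open to all
  const token = authHeader?.startsWith("Bearer ")
    ? authHeader.slice(7)
    : null;
  return token !== null && validTokens.has(token); // actions: authenticated
}
```

The POST handler would call this before `processTask` and return a JSON-RPC error for disallowed requests.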
Authorization
Authentication proves who the requester is. Authorization determines what they can do. In A2A, authorization typically maps to skills: an authenticated agent might have access to "service-enquiry" and "case-study-lookup" skills but not to "submit-order" or "access-client-data" skills.
Implement authorization by checking the authenticated identity against a permission map before processing a task. If the requested skill requires elevated permissions that the requester doesn't have, return an error response.
Rate limiting
Every A2A endpoint must implement rate limiting. Without it, a single misbehaving agent can overwhelm your system. Standard approaches:
- Limit by IP address (for unauthenticated requests): 10-20 requests per minute
- Limit by API key (for authenticated requests): 60-100 requests per minute
- Limit by task complexity: long-running tasks consume more quota
- Return 429 Too Many Requests with a Retry-After header when limits are exceeded
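A minimal in-memory fixed-window limiter covers the per-key limits and the Retry-After hint. This is a sketch for a single process; production deployments would typically back the counters with Redis or similar shared storage:

```typescript
// Minimal in-memory fixed-window rate limiter, keyed by IP or API key.
// Returns whether the request is allowed and, if not, how long the
// caller should wait. Single-process only; use shared storage (e.g.
// Redis) behind multiple instances.
type WindowState = { count: number; windowStart: number };
const windows = new Map<string, WindowState>();

function checkLimit(
  key: string,
  limit: number,
  windowMs = 60_000,
  now = Date.now(),
): { allowed: boolean; retryAfterSec: number } {
  const w = windows.get(key);
  if (!w || now - w.windowStart >= windowMs) {
    // Start a fresh window for this key.
    windows.set(key, { count: 1, windowStart: now });
    return { allowed: true, retryAfterSec: 0 };
  }
  w.count++;
  if (w.count <= limit) return { allowed: true, retryAfterSec: 0 };
  return {
    allowed: false,
    retryAfterSec: Math.ceil((w.windowStart + windowMs - now) / 1000),
  };
}
```

On a rejected request, respond with HTTP 429 and set `Retry-After` to `retryAfterSec`.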
Logging and auditing
Log every A2A interaction to a database. At minimum, record: task ID, requesting agent's identity, task text, response text, timestamp, and processing time. This audit trail is essential for debugging, usage tracking, and identifying abuse patterns. We recommend a dedicated agent_sessions table:
CREATE TABLE agent_sessions (
id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
task_id text,
user_agent text,
task_text text,
response_text text,
source text,
status text DEFAULT 'completed',
processing_ms integer,
created_at timestamptz DEFAULT now()
);
CREATE INDEX idx_agent_sessions_created_at
ON agent_sessions (created_at DESC);
Real-World A2A Use Cases
A2A's value becomes clear when you look at workflows that require multiple specialised agents working together.
Enterprise workflow automation
A project management agent (built on Atlassian Jira) receives a request to onboard a new employee. It delegates subtasks via A2A: an IT agent (ServiceNow) provisions accounts and equipment, an HR agent (Workday) processes documentation and benefits enrollment, a facilities agent reserves a desk and parking, and a training agent (custom) enrolls the employee in required courses. Each agent processes its task independently, reports completion, and the PM agent tracks overall progress.
Without A2A, this workflow requires custom integrations between every pair of systems. With A2A, each agent exposes a standard interface. Adding a new step (e.g., a security training agent) is as simple as adding a new A2A call.
Travel planning
A personal assistant agent receives: "Plan a 5-day trip to Tokyo for two people in April. We like food, architecture, and want to avoid tourist traps." The assistant delegates via A2A:
- A flight agent searches routes, compares prices, and returns options
- A hotel agent finds accommodation matching budget and preferences
- A local experience agent (powered by a Tokyo-based service) suggests restaurants, walking routes, and architecture tours
- A visa/travel requirements agent checks passport, visa needs, and health requirements
The assistant synthesises all responses into a complete itinerary. If the flight agent returns options that conflict with hotel check-in times, the assistant negotiates between agents to resolve the conflict. This multi-agent coordination is what A2A enables.
Multi-vendor procurement
A procurement agent needs to find and evaluate vendors for a software development project. It queries multiple vendor agents via A2A: "We need a team to build a React Native mobile app with AI features. Budget: $50-80k. Timeline: 3 months." Each vendor agent responds with relevant capabilities, case studies, availability, and pricing. The procurement agent compares responses, shortlists candidates, and presents recommendations to the human decision-maker.
This is the scenario that makes A2A directly relevant to businesses like p0stman.com. When procurement agents start querying vendor agents via A2A, having a well-configured AgentCard with clear skills and a responsive A2A endpoint becomes a competitive advantage.
Software development
A lead development agent receives a feature request. It delegates via A2A: a design agent creates UI mockups, a backend agent implements the API, a frontend agent builds the components, a testing agent writes and runs tests, and a security agent scans for vulnerabilities. Each agent specialises in its domain and uses MCP internally to access code repositories, CI/CD pipelines, and deployment infrastructure. The coordination between agents is A2A; the tool access within each agent is MCP.
The A2A Ecosystem in 2026
A2A is less than a year old, but the ecosystem is developing rapidly. Here is the current landscape.
Agent registries
Several registries have launched for discovering A2A-compatible agents. a2aregistry.org is the most established, offering search by skill, industry, and geography. Enterprise platforms like Salesforce and ServiceNow maintain their own internal agent registries for corporate deployments. The registry landscape is still fragmented, but consolidation is expected as the protocol matures.
Framework support
Most major agent frameworks now support A2A either natively or via plugins:
| Framework | A2A Support | Notes |
|---|---|---|
| Google ADK | Native | First-class A2A support as expected from the protocol creator |
| LangChain / LangGraph | Native | LangChain was a founding partner; A2A is integrated into LangGraph agent loops |
| CrewAI | Plugin | A2A connector available for inter-crew communication |
| AutoGen (Microsoft) | Plugin | Community-maintained A2A bridge |
| Salesforce Agentforce | Native | Enterprise A2A for Salesforce ecosystem agents |
| ServiceNow AI Agents | Native | A2A enabled for IT service management agents |
| Custom (Next.js, etc.) | Manual | ~120 lines of code for a complete endpoint (as shown above) |
Enterprise adoption
Enterprise adoption is accelerating. Deloitte, Accenture, and McKinsey — all founding partners — are building A2A into their AI consulting practices. This means their enterprise clients (Fortune 500 companies) are implementing A2A as part of their AI agent deployments. When a Deloitte engagement specifies A2A for inter-agent communication, it creates demand throughout the client's vendor ecosystem.
For SMEs and agencies, the implication is clear: enterprise buyers will increasingly expect A2A compatibility from their vendors. Having an AgentCard and A2A endpoint will transition from "innovative" to "expected" within 12-18 months.
How p0stman.com Implements A2A
p0stman.com runs a production A2A endpoint as a reference implementation. Here is how it works:
AgentCard at p0stman.com/.well-known/agent.json declares three skills: service enquiry, agentic web guidance, and case study lookup. The agent is registered on a2aregistry.org (ID: ef952ba9).
A2A endpoint at p0stman.com/api/agent handles JSON-RPC 2.0 tasks. Tasks are processed by Gemini 2.0 Flash with a system prompt that includes p0stman's service knowledge, case studies, and pricing context. The agent responds in Zero's voice (p0stman's AI persona).
Logging goes to the agent_sessions Supabase table, capturing every interaction for analytics. The admin dashboard shows A2A task volume, common queries, and response quality metrics.
MCP integration runs alongside A2A. The MCP server at p0stman.com/api/mcp exposes tools that the A2A agent can also use internally. The A2A agent is the external-facing interface; MCP is the internal tool layer. This demonstrates the complementary relationship between the two protocols in practice.
The Future: A2A + MCP Together
The agentic web is converging on a clear architecture: MCP for tool access, A2A for agent communication. Together, they form the complete protocol stack for AI-native applications.
Near-term (2026): Expect A2A support in all major AI assistants. ChatGPT, Claude, Gemini, and Copilot will all be able to discover and communicate with A2A agents. This means your AgentCard will be visible to every major AI tool, not just Google's products. Framework support will deepen, with A2A becoming a standard feature rather than a plugin.
Medium-term (2027): Agent marketplaces will emerge, similar to app stores but for AI agents. Businesses will publish agents that other businesses' agents can discover and use. Payment and billing between agents will become standardised. The A2A protocol will likely add support for agent reputation, quality scoring, and SLA guarantees.
Long-term: The distinction between "website" and "agent" will blur. Every business's online presence will include both a human-facing website and an agent-facing A2A endpoint. Customer acquisition will happen through agent-to-agent negotiation as often as through human web browsing. The businesses that build their A2A presence now are laying the foundation for this future.
The investment required is modest — an AgentCard takes 30 minutes, a basic A2A endpoint takes a day — but the strategic value compounds over time as the ecosystem grows. Early movers in A2A will have established agent reputations, registry rankings, and operational experience before their competitors begin.
Frequently Asked Questions
What is the A2A protocol in simple terms?
The A2A (Agent-to-Agent) protocol is an open standard that lets AI agents communicate with each other to complete tasks. Think of it like email for AI agents — one agent can send a task to another agent, the receiving agent processes it, and sends back a result. It was created by Google and is backed by 50+ companies including Salesforce, SAP, Atlassian, and MongoDB. A2A enables multi-agent workflows where specialised agents collaborate without human intervention.
What is the difference between MCP and A2A?
MCP (Model Context Protocol) connects AI models to tools — it's like a USB port that lets an AI call functions (search a database, send an email, read a file). A2A connects AI agents to other AI agents — it's like email that lets one intelligent system delegate work to another. MCP is synchronous and tool-oriented: call a function, get a result. A2A is task-oriented and supports long-running workflows: send a task, negotiate, get a result over time. They are complementary, not competing. Most production systems will use both — MCP for tool access and A2A for agent coordination.
Who created the A2A protocol and who supports it?
Google created the A2A protocol and announced it in April 2025. The protocol launched with over 50 founding partners, including Salesforce, SAP, Atlassian, MongoDB, Deloitte, Accenture, McKinsey, ServiceNow, Workday, Box, C3.ai, Cohere, Intuit, LangChain, LiveKit, Replit, and UiPath. The breadth of support — spanning enterprise software, consulting, AI infrastructure, and developer tools — signals that A2A is likely to become the dominant standard for agent-to-agent communication.
What is an AgentCard in A2A?
An AgentCard is a JSON file served at /.well-known/agent.json on your domain that describes your AI agent to other agents. It includes the agent's name, description, skills, supported input and output modes (text, images, files), authentication requirements, capabilities (streaming, push notifications), and the URL of the A2A endpoint. Other agents fetch your AgentCard to decide whether your agent can help with their task and how to communicate with it. It's essentially a business card for AI agents — the first step in automated agent-to-agent discovery.
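As a sketch, a minimal AgentCard might look like this — the agent name, URLs, and skill are invented for illustration, and field names follow the published A2A schema:

```json
{
  "name": "Acme Support Agent",
  "description": "Answers product and billing questions for Acme Inc.",
  "url": "https://acme.example.com/api/a2a",
  "version": "1.0.0",
  "capabilities": {
    "streaming": false,
    "pushNotifications": false
  },
  "defaultInputModes": ["text"],
  "defaultOutputModes": ["text"],
  "authentication": { "schemes": ["bearer"] },
  "skills": [
    {
      "id": "billing-support",
      "name": "Billing support",
      "description": "Resolves invoice and subscription questions"
    }
  ]
}
```

A requesting agent reads `skills` to match the task, `authentication` to know what credentials to send, and `url` to know where to POST.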
How does the A2A task lifecycle work?
An A2A task moves through defined states: submitted (task received), working (agent is processing), input-required (agent needs more information from the requester), completed (task finished successfully), failed (task could not be completed), or canceled (task was abandoned). The lifecycle supports multi-turn conversations — an agent can ask for clarification before completing a task. Tasks are sent via JSON-RPC 2.0 POST requests to the A2A endpoint. Each task has a unique ID, a message with parts (text, files, structured data), and returns artifacts (the results).
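As a sketch, a tasks/send exchange might look like this on the wire — the task ID and message text are invented, and the envelope follows JSON-RPC 2.0:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tasks/send",
  "params": {
    "id": "task-123",
    "message": {
      "role": "user",
      "parts": [{ "type": "text", "text": "What are your opening hours?" }]
    }
  }
}
```

If the agent can answer without further input, the response carries the terminal state and the resulting artifacts:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "id": "task-123",
    "status": { "state": "completed" },
    "artifacts": [
      { "parts": [{ "type": "text", "text": "We are open 9am to 5pm, Monday to Friday." }] }
    ]
  }
}
```

An agent that needs clarification would instead return `"state": "input-required"`, and the requester would follow up with another message under the same task ID.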
Can A2A and MCP work together?
Yes, A2A and MCP are designed to be complementary. A typical architecture uses MCP internally (connecting an agent to its tools — databases, APIs, file systems) and A2A externally (communicating with other agents). For example, a travel booking agent might use MCP to access its flight database and pricing engine internally, while using A2A to communicate with a hotel agent and a car rental agent to assemble a complete itinerary. The agent's internal tool access is MCP; its external collaboration is A2A.
How do I implement an A2A endpoint?
An A2A endpoint is an HTTP route that handles JSON-RPC 2.0 requests. It needs three capabilities: GET returns the AgentCard (agent discovery), OPTIONS handles CORS preflight (cross-origin access), and POST processes tasks (the core A2A interaction). When a POST arrives with method "tasks/send", your endpoint extracts the task message, processes it (typically by passing it to an LLM with your agent's context), and returns a structured response with artifacts. In Next.js, this is a single route.ts file with GET, OPTIONS, and POST handlers. The entire implementation is typically under 150 lines of code.
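A minimal sketch of such a `route.ts`, using only Web-standard `Request`/`Response` (available in Node 18+ and Next.js) — the AgentCard values are placeholders, and the echo-style task handler stands in for a real LLM call:

```typescript
// Hypothetical minimal A2A endpoint in the shape of a Next.js route.ts.
// The AgentCard fields and the canned answer are illustrative, not real values.

const agentCard = {
  name: "Example Agent",
  url: "https://example.com/api/a2a",
  version: "1.0.0",
  capabilities: { streaming: false },
  skills: [{ id: "general", name: "General queries" }],
};

type RpcRequest = {
  jsonrpc: "2.0";
  id: number | string;
  method: string;
  params?: any;
};

// Pure JSON-RPC dispatcher, kept separate from HTTP so it is easy to unit-test.
export function handleRpc(rpc: RpcRequest) {
  if (rpc.method !== "tasks/send") {
    return { jsonrpc: "2.0", id: rpc.id, error: { code: -32601, message: "Method not found" } };
  }
  const text =
    rpc.params?.message?.parts?.find((p: any) => p.type === "text")?.text ?? "";
  // In a real agent, this is where you would call an LLM with your own context.
  const answer = `You asked: ${text}`;
  return {
    jsonrpc: "2.0",
    id: rpc.id,
    result: {
      id: rpc.params?.id,
      status: { state: "completed" },
      artifacts: [{ parts: [{ type: "text", text: answer }] }],
    },
  };
}

// GET: agent discovery — serve the AgentCard.
export async function GET() {
  return Response.json(agentCard);
}

// OPTIONS: CORS preflight so browser-hosted agents can reach the endpoint.
export async function OPTIONS() {
  return new Response(null, {
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type, Authorization",
    },
  });
}

// POST: the core A2A interaction — parse the JSON-RPC envelope and dispatch.
export async function POST(req: Request) {
  const rpc = (await req.json()) as RpcRequest;
  return Response.json(handleRpc(rpc));
}
```

Separating `handleRpc` from the HTTP handlers is a design choice, not a protocol requirement; it keeps the JSON-RPC logic testable without spinning up a server.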
Is A2A secure? How does authentication work?
A2A supports multiple authentication schemes declared in the AgentCard. The most common is Bearer token authentication — the requesting agent includes a token in the Authorization header. The specification also supports OAuth 2.0 for more complex scenarios. Additionally, A2A endpoints should validate the requesting agent's identity, implement rate limiting to prevent abuse, and log all interactions for auditing. For public-facing agents that handle general queries, you might allow unauthenticated access to a limited skill set while requiring authentication for sensitive operations like data access or transactions.
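A gate for that last pattern might be sketched as follows — the split between public and sensitive skills, and the in-memory token store, are design choices for illustration, not part of the A2A specification:

```typescript
// Hypothetical authorization gate for an A2A endpoint: public skills stay
// open to unauthenticated agents, sensitive skills require a bearer token.
export function authorize(
  skillId: string | undefined,
  authHeader: string | null,
  validTokens: Set<string>,
  sensitiveSkills: Set<string>,
): boolean {
  const hasValidToken =
    authHeader !== null &&
    authHeader.startsWith("Bearer ") &&
    validTokens.has(authHeader.slice("Bearer ".length));
  // Sensitive skills (data access, transactions) always need a valid token.
  if (skillId !== undefined && sensitiveSkills.has(skillId)) return hasValidToken;
  // General queries are allowed without authentication.
  return true;
}
```

The POST handler would call this with the `Authorization` header and the skill the task targets, returning a 401 response when it comes back `false`.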
Add A2A to your website
We build production A2A endpoints, AgentCards, and the full agentic web stack. Make your business discoverable and actionable by AI agents.