By 2027, every agent-ready website will expose five machine-readable files that allow AI agents to discover, understand, and take actions autonomously — without a human using a browser. These files are llms.txt, robots.txt, /.well-known/agent.json, mcp.json, and agents.md.

The 5 Files Every Agent-Ready Website Will Have by 2027

For 30 years, the web was built for humans. Every page, every interaction, every conversion funnel assumed a person with a browser would be doing the navigating.

That assumption is breaking down. AI agents — Claude, ChatGPT, Gemini, LangGraph orchestrators, AutoGPT successors — are now the fastest-growing category of web traffic. They don't use browsers. They don't click links. They read structured data, call APIs, and take actions on behalf of users.

A website that doesn't expose the right files to these agents is invisible to them. Not penalised — just absent. The agent will find a competitor that does have the infrastructure, and the human will never even know your site existed.

These are the five files that separate agent-ready websites from the rest. All five are live on p0stman.com — every link below goes to the real file.

01 · Layer 1 — Discovery

llms.txt

The AI index file

A plain-text file at the root of your domain that tells large language models who you are, what you do, and where to find key information about your business.

When ChatGPT, Claude, or Perplexity decides how to describe your business to a user, it needs somewhere to start. Without llms.txt, it guesses from scraped HTML — getting your pricing wrong, misrepresenting your services, or confusing you with a competitor.

llms.txt gives you direct input into that process. It's the equivalent of a press kit, written for machines. Proposed by Jeremy Howard (fast.ai) in 2024, it's now adopted by hundreds of companies and actively read by LLM crawlers.

Example

```text
# POSTMAN

> AI-native product studio. We build voice agents, AI-powered
> applications, and custom software for businesses that want to
> move faster than traditional agencies allow.

Contact: hello@p0stman.com | https://p0stman.com
MCP server: https://p0stman.com/api/mcp

## Services

### AI Voice Agents — from £5,000 | 6-10 days
Inbound and outbound voice AI for hotels and restaurants.

### MVP Launch — from £3,000 | 6 days
From idea to live product in 6 days.
```

How to implement

Create a file named llms.txt at the root of your site. Write in plain markdown. Include: company description, services with pricing, contact details, and key page URLs. 500–2,000 words. No HTML. Serve it at the root.
Live example: p0stman.com/llms.txt
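The steps above can be sketched in code. Here is a minimal Python sketch that assembles an llms.txt body from structured business data; the `render_llms_txt` helper and its field layout are illustrative conveniences, not part of any llms.txt specification:

```python
# Minimal sketch: render an llms.txt body from structured data.
# The helper name and field layout are illustrative, not a standard.

def render_llms_txt(name, summary, contact, services):
    """Render a plain-markdown llms.txt body (500-2,000 words recommended)."""
    lines = [f"# {name}", "", f"> {summary}", "", f"Contact: {contact}", "", "## Services", ""]
    for title, blurb in services:
        lines.append(f"### {title}")  # one service per heading, price in the title
        lines.append(blurb)
        lines.append("")
    return "\n".join(lines)

doc = render_llms_txt(
    "POSTMAN",
    "AI-native product studio.",
    "hello@p0stman.com | https://p0stman.com",
    [("AI Voice Agents — from £5,000 | 6-10 days",
      "Inbound and outbound voice AI for hotels and restaurants.")],
)
print(doc.splitlines()[0])  # → # POSTMAN
```

Serving the rendered string as a static file at /llms.txt is all the deployment the format requires.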
02 · Layer 1 — Discovery

robots.txt

The AI permissions file

The standard file that tells web crawlers what they can and cannot access — updated to explicitly allow the new generation of AI crawlers that most sites were not built to accommodate.

Most robots.txt files were written before AI crawlers existed. A blanket Disallow or an unconfigured file means AI systems from OpenAI, Anthropic, and Google may skip your site entirely — or only partially index it.

The new AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) check robots.txt before reading anything. If they're not explicitly allowed, the safest implementations will skip you. Worse, some will allow crawling but not attribute your content correctly without explicit permission signals.

Example

```text
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: anthropic-ai
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /admin
Disallow: /api/admin

Sitemap: https://yoursite.com/sitemap.xml
```

How to implement

Open your existing robots.txt and add explicit Allow rules for each AI crawler. The key bots to allow: GPTBot, OAI-SearchBot, ClaudeBot, anthropic-ai, PerplexityBot, Google-Extended, TavilyBot. Keep your existing rules for admin and private areas, and remember that Disallow lines must sit inside a User-agent group to be valid. Under RFC 9309 a crawler obeys only the most specific group that matches it, so a Disallow that should also bind a named bot must be repeated inside that bot's group.
Live example: p0stman.com/robots.txt
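Before shipping the file, it is worth checking the rules programmatically rather than by eye. Here is a small sketch using Python's standard-library robots.txt parser; the sample rules and the `crawler_allowed` helper are illustrative:

```python
# Sketch: verify an AI crawler is allowed by a robots.txt,
# using the standard-library parser instead of eyeballing the file.
from urllib.robotparser import RobotFileParser

ROBOTS = """\
User-agent: GPTBot
Allow: /

User-agent: *
Disallow: /admin
"""

def crawler_allowed(robots_txt: str, agent: str, path: str = "/") -> bool:
    """Return True if `agent` may fetch `path` under these rules."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, path)

print(crawler_allowed(ROBOTS, "GPTBot"))             # → True
print(crawler_allowed(ROBOTS, "SomeBot", "/admin"))  # → False
```

Note how GPTBot matches its own named group and never sees the `*` group's Disallow, which is exactly the most-specific-group behaviour described above.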
03 · Layer 4 — Agent-to-Agent

/.well-known/agent.json

The AgentCard

A machine-readable JSON file served at the RFC 8615 well-known path that declares your AI agent's identity, capabilities, and task endpoint — the business card of the agentic web.

The A2A (Agent-to-Agent) protocol — backed by Google, Microsoft, IBM, and 150+ organisations under the Linux Foundation — defines how AI agents discover and communicate with each other. The AgentCard is the discovery mechanism.

When an AI agent (say, a customer's personal AI assistant) wants to interact with your business autonomously, it checks /.well-known/agent.json first. Without it, your business doesn't exist to that agent. With it, the agent knows your name, what you can do, and exactly how to send you a task.

The well-known path (RFC 8615) is intentional — the same convention used by ACME challenges, security.txt, and OpenID Connect. Every A2A-compatible system knows to look there.

Example

```json
{
  "name": "Zero",
  "description": "AI ops assistant at POSTMAN",
  "version": "1.0",
  "url": "https://p0stman.com/api/agent",
  "capabilities": {
    "streaming": false,
    "pushNotifications": false
  },
  "skills": [
    {
      "id": "inquire",
      "name": "Agency Inquiry",
      "description": "Answer questions about services and pricing"
    },
    {
      "id": "book",
      "name": "Book Discovery Call",
      "description": "Schedule a free 30-minute discovery call"
    }
  ],
  "authentication": { "schemes": ["None"] },
  "provider": {
    "organization": "POSTMAN",
    "url": "https://p0stman.com"
  }
}
```

How to implement

Create a JSON file at public/.well-known/agent.json (or serve it as a route at /.well-known/agent.json). Include: name, description, version, the URL of your A2A task endpoint, capabilities, skills with IDs and descriptions, and authentication scheme. Pair it with an actual /api/agent endpoint that accepts JSON-RPC tasks/send requests.
Live example: p0stman.com/.well-known/agent.json
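A pre-deploy sanity check can catch a malformed AgentCard before another agent does. Here is a minimal Python sketch; the required-field list follows the example above rather than the full A2A schema:

```python
# Sketch: sanity-check an AgentCard before publishing it.
# The required-field list mirrors this article's example, not the full A2A schema.
import json

REQUIRED = ("name", "description", "version", "url", "skills")

def validate_agent_card(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the card looks usable."""
    card = json.loads(raw)
    problems = [f"missing field: {f}" for f in REQUIRED if f not in card]
    for skill in card.get("skills", []):
        # every skill needs a stable id and a description other agents can read
        if "id" not in skill or "description" not in skill:
            problems.append(f"skill lacks id/description: {skill}")
    return problems

card = json.dumps({
    "name": "Zero",
    "description": "AI ops assistant at POSTMAN",
    "version": "1.0",
    "url": "https://p0stman.com/api/agent",
    "skills": [{"id": "inquire", "description": "Answer questions about services"}],
})
print(validate_agent_card(card))  # → []
```

Running this against the file you actually serve (rather than the file you meant to serve) is the useful habit.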
04 · Layer 3 — Action

mcp.json

The MCP tool registry

A JSON manifest listing the tools (actions) your site exposes via the Model Context Protocol — the file that tells AI assistants what they can actually do on your site.

MCP (Model Context Protocol), open-sourced by Anthropic and now adopted by OpenAI, Google, and dozens of AI platforms, defines how AI agents call tools on external services. Claude Desktop, Cursor, and a growing list of AI assistants can read mcp.json and offer your tools directly to their users.

Without mcp.json and a backing MCP server, your business is read-only to AI agents. They can learn about you from llms.txt, but they can't act. With MCP, an agent can check your availability, submit an enquiry, or book a call on behalf of a user — all within their AI interface, without visiting your site.

This is where passive discovery becomes active integration.

Example

```json
{
  "name": "POSTMAN AI Studio",
  "description": "AI-native product studio. Book calls, browse services, view portfolio.",
  "version": "1.0.0",
  "endpoint": "https://p0stman.com/api/mcp",
  "protocol": "json-rpc-2.0",
  "tools": [
    {
      "name": "book_discovery_call",
      "description": "Book a free 30-minute discovery call."
    },
    {
      "name": "submit_inquiry",
      "description": "Submit a project inquiry."
    },
    {
      "name": "get_services",
      "description": "Get services with pricing and timelines."
    }
  ]
}
```

How to implement

Create mcp.json at your site root listing your tools with names and descriptions. Build a corresponding MCP server at /api/mcp that handles tools/list (returns all tools with inputSchema) and tools/call (executes the tool and returns structured results). Use JSON-RPC 2.0 format.
Live example: p0stman.com/mcp.json
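The handler behind /api/mcp boils down to a JSON-RPC 2.0 dispatcher over those two methods. Here is a minimal Python sketch of that shape; the tool bodies are stand-ins and the HTTP routing glue is omitted:

```python
# Sketch: the core of an MCP-style endpoint as a JSON-RPC 2.0 dispatcher.
# Tool names mirror the manifest above; implementations are stand-ins.
import json

TOOLS = {
    "get_services": lambda args: [{"name": "MVP Launch", "price": "from £3,000"}],
    "book_discovery_call": lambda args: {"status": "requested", "email": args.get("email")},
}

def handle_mcp(request: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request to the registered tools."""
    rid = request.get("id")
    method = request.get("method")
    if method == "tools/list":
        # advertise every tool; a real server would attach a full inputSchema
        result = {"tools": [{"name": n, "inputSchema": {"type": "object"}} for n in TOOLS]}
    elif method == "tools/call":
        params = request.get("params", {})
        tool = TOOLS.get(params.get("name"))
        if tool is None:
            return {"jsonrpc": "2.0", "id": rid,
                    "error": {"code": -32602, "message": "unknown tool"}}
        result = {"content": [{"type": "text",
                               "text": json.dumps(tool(params.get("arguments", {})))}]}
    else:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}

resp = handle_mcp({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print([t["name"] for t in resp["result"]["tools"]])  # → ['get_services', 'book_discovery_call']
```

Wrapping `handle_mcp` in whatever web framework you already run is the remaining work; the protocol itself is just this request/response shape.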
05 · Layer 1 — Discovery

agents.md

The capability manifest

A markdown file that bridges the gap between the brief llms.txt and the structured agent.json — giving AI agents detailed, nuanced context about what your agent can do and how to work with it effectively.

llms.txt tells LLMs who you are. agent.json tells other agents how to call you. agents.md tells agents what to actually do and what to expect — the nuance that structured JSON can't capture.

It's where you explain edge cases, preferred task formats, example queries that work well, limitations to be aware of, and capabilities not covered by the formal schema. Think of it as the README for your agent.

As multi-agent orchestration matures, agents will read agents.md before deciding whether to delegate a task to your agent, how to phrase the request, and what format to expect back. The more useful and specific it is, the more reliably other agents can work with yours.

Example

```text
# POSTMAN — Agent Instructions

## About
AI-native product studio. Builds voice agents, MVPs,
web apps for businesses.

## MCP Server (Recommended)
Endpoint: https://p0stman.com/api/mcp
Protocol: JSON-RPC 2.0

## A2A Task Endpoint
Endpoint: https://p0stman.com/api/agent
Protocol: JSON-RPC 2.0 (tasks/send method)

## Example Tasks
- "What AI services do you offer for restaurants?"
- "What would an AI voice agent cost for a 3-site hotel?"
- "Book a discovery call for [name] at [email]"

## Limitations
- Cannot process payments directly
- Discovery calls must be confirmed by email
```

How to implement

Create agents.md at your site root. Include: who you are (1 paragraph), MCP endpoint with example calls, A2A endpoint with example tasks, what requests work well, known limitations, and contact fallback. Write for a machine reader — structured, specific, no marketing fluff.
Live example: p0stman.com/agents.md
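A quick lint before publishing helps, since agents.md is only useful if the sections agents look for are actually present. Here is a small sketch; the required section names are this article's recommendations, not a formal specification:

```python
# Sketch: check a draft agents.md carries the sections this article recommends.
# The section list is a suggestion from the text above, not a formal spec.
REQUIRED_SECTIONS = ("## About", "## Example Tasks", "## Limitations")

def missing_sections(markdown: str) -> list[str]:
    """Return the recommended sections absent from the draft."""
    return [s for s in REQUIRED_SECTIONS if s not in markdown]

draft = """# POSTMAN — Agent Instructions

## About
AI-native product studio.

## Example Tasks
- "Book a discovery call"

## Limitations
- Cannot process payments directly
"""
print(missing_sections(draft))  # → []
```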

How the five files work together

The discover → understand → act → orchestrate flow

1
Discover
llms.txt · robots.txt

AI crawlers find your site. LLMs learn what you do. You get correctly cited in AI responses.

2
Understand
agents.md

Agents read the detail. They learn your capabilities, constraints, and how to phrase requests to get useful responses.

3
Act
mcp.json

Agents call your MCP tools. They can check services, search content, book calls — without any human touching a browser.

4
Orchestrate
/.well-known/agent.json

Other AI agents discover Zero via the AgentCard, send tasks, and receive structured responses. Machine-to-machine business.
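The four steps can be traced end to end from the agent's side. Here is a Python sketch with `fetch()` as a stand-in for real HTTP; the file contents are abbreviated versions of the examples above:

```python
# Sketch: the discover → understand → act → orchestrate flow from an agent's
# point of view. fetch() stands in for HTTP GET; contents are abbreviated.
import json

SITE = {
    "/llms.txt": "# POSTMAN\n> AI-native product studio.",
    "/agents.md": "## Limitations\n- Cannot process payments directly",
    "/mcp.json": '{"endpoint": "https://p0stman.com/api/mcp",'
                 ' "tools": [{"name": "get_services"}]}',
    "/.well-known/agent.json": '{"name": "Zero", "url": "https://p0stman.com/api/agent"}',
}

def fetch(path: str) -> str:  # stand-in for an HTTP GET against the site
    return SITE[path]

summary    = fetch("/llms.txt")                            # 1. discover who the business is
guidance   = fetch("/agents.md")                           # 2. understand constraints and phrasing
manifest   = json.loads(fetch("/mcp.json"))                # 3. act: find callable tools
agent_card = json.loads(fetch("/.well-known/agent.json"))  # 4. orchestrate: locate the A2A peer

print([t["name"] for t in manifest["tools"]])  # → ['get_services']
print(agent_card["name"])                      # → Zero
```

Each step consumes one of the five files, which is why the stack only works as a whole: drop any layer and the flow stalls there.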

What happens to sites that don't have them

Misrepresented by LLMs

ChatGPT and Claude guess from scraped HTML — wrong pricing, muddled descriptions, and occasional confusion with competitors.

Skipped by AI crawlers

Without explicit robots.txt permission, cautious AI crawlers skip your site entirely. You're invisible to AI-sourced traffic.

Unreachable by agents

Agents looking for services you offer find a competitor with an MCP server instead. The conversion happens without you.

Excluded from agent marketplaces

As A2A agent networks mature, businesses without AgentCards won't appear in agent discovery. The equivalent of having no Google listing.


All five files are live on p0stman.com

We built the full stack for ourselves. Now we build it for clients.

Every endpoint described on this page is live and publicly accessible. The AgentCard, the MCP server, the A2A task endpoint — you can call them right now. We built it all first so we could prove it works, then turned it into a service. An Agentic Web audit starts from £1,500 and takes one week.

Live endpoints — call them now

AGENT INTERFACE ACTIVE · MCP: p0stman.com/api/mcp · 5 TOOLS REGISTERED · [DISCOVERY] llms.txt · agents.md · context.md · sitemap.xml · robots.txt · TavilyBot ALLOWED · ClaudeBot ALLOWED · GPTBot ALLOWED · PerplexityBot ALLOWED · [COMPREHENSION] JSON-LD schema · /api/ai/context · /api/ai/services · /api/ai/portfolio · [ACTION] book_discovery_call · submit_inquiry · get_services · get_portfolio · search_content · [A2A] AgentCard: /.well-known/agent.json · Task endpoint: /api/agent · A2A JSON-RPC 2.0 · navigator.modelContext REGISTERED · WebMCP: 5 TOOLS · INDEXNOW: 145 URLs · Bing NOTIFIED · AGENT INTERFACE ACTIVE · MCP: p0stman.com/api/mcp · 5 TOOLS REGISTERED · [DISCOVERY] llms.txt · agents.md · context.md · sitemap.xml · robots.txt · TavilyBot ALLOWED · ClaudeBot ALLOWED · GPTBot ALLOWED · PerplexityBot ALLOWED · [COMPREHENSION] JSON-LD schema · /api/ai/context · /api/ai/services · /api/ai/portfolio · [ACTION] book_discovery_call · submit_inquiry · get_services · get_portfolio · search_content · [A2A] AgentCard: /.well-known/agent.json · Task endpoint: /api/agent · A2A JSON-RPC 2.0 · navigator.modelContext REGISTERED · WebMCP: 5 TOOLS · INDEXNOW: 145 URLs · Bing NOTIFIED ·