
What Is agents.md? Agent Instructions for Websites and Repos

Last updated: March 2026

agents.md is a Markdown file that provides AI agents with detailed instructions for interacting with your website or codebase. For websites, it describes available tools, authentication, and usage patterns. For GitHub repositories, it guides coding AI agents like Claude Code and OpenAI Codex on code style, architecture, and project conventions. It serves as the human-readable companion to machine-readable protocols like MCP and A2A.

AI agents are becoming a primary way that both users and machines interact with digital products. But agents need instructions. A human can look at a website, read navigation labels, and figure out what to do. An AI agent needs explicit guidance: what tools are available, how to authenticate, what rate limits apply, and how to handle errors. That's what agents.md provides.

The term "agents.md" has evolved to mean two related but distinct things. The first is a website agents.md — a file served from your website that tells visiting AI agents how to interact with your site's APIs and tools. The second is a GitHub AGENTS.md — a file in your repository root that tells coding AI agents how to work on your codebase. Both use Markdown. Both address AI agents. But they serve different audiences and contain different information.

Understanding both types — and when to use each — is essential for any team building for the agentic web.

Website agents.md: Instructions for AI Visitors

A website agents.md is a Markdown file placed at /agents.md (or /public/agents.md in Next.js projects) that provides detailed operating instructions for AI agents visiting your site. If llms.txt is the brochure, agents.md is the operating manual.

What Belongs in a Website agents.md

A well-structured website agents.md covers everything an AI agent needs to successfully interact with your site:

1. Site Overview and Purpose

Start with a clear, factual description of what your site offers. This is similar to the llms.txt blockquote but can be more detailed — 3-5 sentences covering the product, its primary users, and the key capabilities available to agents.

2. Available Tools and Endpoints

List every API endpoint or tool that agents can use. For each, include:

  • The endpoint URL and HTTP method
  • A clear description of what it does
  • Input parameters with types and whether they're required
  • Example request body
  • Example response
  • Error codes and their meanings

3. Authentication Requirements

Specify how agents authenticate. Common patterns:

  • No auth required — for public endpoints like site context or search
  • API key — header name, how to obtain a key, key format
  • OAuth 2.0 — flow type (client credentials, authorization code), token endpoint, scopes
  • Bearer token — where to send it (Authorization header), token lifetime

4. Rate Limits and Usage Policies

Be explicit about limits. AI agents can make requests very quickly — tell them the boundaries:

  • Requests per minute/hour/day
  • Rate limit headers to watch for
  • What happens when limits are exceeded (429 response, retry-after header)
  • Any different limits for authenticated vs unauthenticated requests
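When these details are documented, an agent can translate them directly into a pacing policy. A sketch, assuming the conventional `Retry-After` and `X-RateLimit-*` header names (some APIs send an epoch timestamp in the reset header instead of a duration — your agents.md should say which):

```python
def wait_seconds(status: int, headers: dict) -> float:
    """Return how long the agent should pause before its next request."""
    if status == 429:
        # Honour an explicit Retry-After if the server sent one
        return float(headers.get("Retry-After", 60))
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining == 0:
        # Quota exhausted but not yet rejected: wait out the window
        return float(headers.get("X-RateLimit-Reset", 60))
    return 0.0  # quota available, no need to wait
```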

5. Error Handling Guidance

Tell agents what errors look like and how to handle them:

```markdown
## Error Handling

All errors return JSON with this structure:

{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Too many requests. Retry after 60 seconds.",
    "retryAfter": 60
  }
}

Common error codes:
- 400 BAD_REQUEST: Invalid parameters. Check the request body.
- 401 UNAUTHORIZED: Missing or invalid API key.
- 403 FORBIDDEN: Valid key but insufficient permissions.
- 404 NOT_FOUND: Resource doesn't exist.
- 429 RATE_LIMITED: Too many requests. Use the retryAfter value.
- 500 INTERNAL_ERROR: Our fault. Retry once, then report.
```

6. Example Interactions

Include 2-3 complete example interactions showing request and response. AI agents learn from examples. Show the happy path first, then an error case.
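To see why documented error codes matter, here is a sketch of how an agent might map the error envelope from section 5 into a next action. The code assumes that exact JSON shape; the retry policy follows the guidance in the error list ("retry once, then report"):

```python
RETRYABLE = {"RATE_LIMITED", "INTERNAL_ERROR"}

def next_action(error: dict) -> tuple[str, int]:
    """Return (action, delay_seconds) for an error payload."""
    code = error["error"]["code"]
    if code == "RATE_LIMITED":
        # Use the server-provided delay when present
        return ("retry", error["error"].get("retryAfter", 60))
    if code in RETRYABLE:
        return ("retry", 1)   # retry once, then report, per the docs
    return ("abort", 0)       # client errors won't succeed on retry

# e.g. next_action({"error": {"code": "RATE_LIMITED", "retryAfter": 60}})
#      → ("retry", 60)
```

Without documented codes, the agent falls back to guessing from HTTP status alone, which loses details like `retryAfter`.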

Complete Example: Website agents.md for a SaaS Product

# TaskHub Agent Instructions

TaskHub is a task management platform for software teams. This document
describes how AI agents can interact with TaskHub's public APIs and tools.

## Quick Start

1. Get an API key at https://taskhub.com/settings/api
2. All requests use `Authorization: Bearer YOUR_API_KEY`
3. Base URL: `https://api.taskhub.com/v2`
4. Rate limit: 100 requests/minute (authenticated), 10/minute (public)

## Available Tools

### Search Tasks (Public, No Auth)
- **Endpoint:** GET /search
- **Description:** Search public task boards by keyword
- **Parameters:**
  - `q` (string, required): Search query
  - `board` (string, optional): Filter by board ID
  - `limit` (number, optional): Results per page, max 50, default 20
- **Example:**
  ```
  GET https://api.taskhub.com/v2/search?q=authentication+bug&limit=5
  ```
- **Response:** `{ "results": [{ "id": "t_123", "title": "Fix OAuth bug", "status": "open" }], "total": 1 }`

### Create Task (Auth Required)
- **Endpoint:** POST /tasks
- **Description:** Create a new task in a project
- **Parameters:**
  - `projectId` (string, required): Target project
  - `title` (string, required): Task title, max 200 chars
  - `description` (string, optional): Markdown supported
  - `priority` (string, optional): "low", "medium", "high", "critical"
  - `assignee` (string, optional): User ID
- **Example:**
  ```json
  POST https://api.taskhub.com/v2/tasks
  Authorization: Bearer tk_live_abc123
  Content-Type: application/json

  {
    "projectId": "proj_456",
    "title": "Update login page design",
    "priority": "medium"
  }
  ```
- **Response:** `{ "task": { "id": "t_789", "title": "Update login page design", "status": "open", "createdAt": "2026-03-11T10:00:00Z" } }`

### Get Project Summary (Auth Required)
- **Endpoint:** GET /projects/:id/summary
- **Description:** Returns task counts by status, recent activity, team members
- **Rate limit:** 20 requests/minute (this endpoint is heavier)

## Authentication

- **Method:** Bearer token in Authorization header
- **Get a key:** https://taskhub.com/settings/api
- **Key format:** `tk_live_` prefix for production, `tk_test_` for sandbox
- **Scopes:** `tasks:read`, `tasks:write`, `projects:read`, `projects:write`
- **Key rotation:** Keys expire after 90 days. Refresh via the dashboard.

## Rate Limits

| Tier | Limit | Reset |
|------|-------|-------|
| Public (no auth) | 10 req/min | Rolling window |
| Free plan | 100 req/min | Rolling window |
| Pro plan | 1,000 req/min | Rolling window |
| Enterprise | Custom | Custom |

Watch for `X-RateLimit-Remaining` and `X-RateLimit-Reset` headers.

## Error Handling

Errors return JSON: `{ "error": { "code": "ERROR_CODE", "message": "Description" } }`

| Code | Status | Meaning |
|------|--------|---------|
| BAD_REQUEST | 400 | Invalid parameters |
| UNAUTHORIZED | 401 | Missing or invalid API key |
| NOT_FOUND | 404 | Resource doesn't exist |
| RATE_LIMITED | 429 | Too many requests, check Retry-After header |
| INTERNAL | 500 | Server error, retry once |

## MCP Integration

TaskHub also exposes tools via MCP at `https://taskhub.com/api/mcp`.
See `/mcp.json` for the machine-readable manifest.

## Links

- API Reference: https://docs.taskhub.com/api
- MCP Manifest: https://taskhub.com/mcp.json
- Status Page: https://status.taskhub.com
- Support: support@taskhub.com

How Website agents.md Differs from llms.txt

The distinction is fundamental: llms.txt tells AI what your site is. agents.md tells AI how to use your site.

| Aspect | llms.txt | agents.md (website) |
|--------|----------|---------------------|
| Primary question answered | "What is this site?" | "How do I interact with this site?" |
| Typical length | 30-80 lines | 200-2,000 lines |
| Format | Simplified Markdown (H1, H2, links) | Full Markdown with code blocks, tables |
| Contains code examples? | No | Yes — request/response pairs |
| Contains authentication details? | Brief mention at most | Full auth flow documentation |
| Contains tool schemas? | No — just names and descriptions | Yes — full parameter specifications |
| Contains rate limits? | May mention they exist | Specific numbers and tiers |
| Who reads it | LLMs doing general research | AI agents performing actions |
| Update frequency | When major offerings change | When APIs, tools, or auth change |

An AI agent arriving at your site typically reads llms.txt first (quick relevance check), then agents.md (detailed operating instructions), then potentially calls your MCP endpoint or A2A agent to actually perform actions. The three files form a progression from discovery to comprehension to interaction.
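That progression can be sketched as an ordered fetch list — a minimal illustration, assuming the root-level file locations described in this article:

```python
from urllib.parse import urljoin

def discovery_sequence(base_url: str) -> list[str]:
    """Ordered URLs an agent fetches: discovery, comprehension, interaction."""
    return [
        urljoin(base_url, "/llms.txt"),   # quick relevance check
        urljoin(base_url, "/agents.md"),  # detailed operating instructions
        urljoin(base_url, "/mcp.json"),   # machine-readable tool manifest
    ]
```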

How agents.md Connects to MCP and A2A

agents.md is the human-readable layer in a stack that includes machine-readable protocols. Here's how they relate:

| File/Protocol | Format | Audience | Purpose |
|---------------|--------|----------|---------|
| agents.md | Markdown (human-readable) | AI agents, developers | Detailed interaction instructions |
| mcp.json | JSON (machine-readable) | MCP clients | Tool manifest — names, schemas, endpoint |
| /api/mcp | MCP protocol | MCP clients | Executable tool endpoint |
| .well-known/agent.json | JSON (machine-readable) | A2A agents | Agent capabilities, skills, auth |
| /api/agent | JSON-RPC 2.0 | A2A agents | Agent-to-agent task execution |

The relationship is complementary, not competitive. agents.md provides context and nuance that structured JSON cannot easily convey — things like "prefer the search endpoint over listing all items" or "always check rate limit headers before making batch requests". Machine-readable manifests like mcp.json provide the precise schemas that agents need to construct valid requests.

In practice, a sophisticated AI agent reads agents.md for understanding and strategy, then uses mcp.json or agent.json for the actual tool invocation. The Markdown tells it why and when to use a tool; the JSON tells it how.

Writing Guide: Creating an Effective Website agents.md

Here's a step-by-step process for writing a website agents.md that AI agents can actually use.

Step 1: Inventory Your Capabilities

Before writing anything, list every way an AI agent could interact with your site. This includes:

  • Public API endpoints
  • Search functionality
  • Content that agents might want to reference
  • Forms or actions agents might trigger
  • MCP tools you expose
  • A2A skills your agent supports

Step 2: Start with the Quick Start

Agents are impatient. Put the most important information first: how to authenticate, the base URL, and rate limits. An agent should be able to make its first successful API call after reading the first 20 lines.

Step 3: Document Each Tool

For every endpoint or tool, include the same structure: endpoint, method, description, parameters (with types and required/optional), example request, example response. Consistency matters — agents parse structured patterns better than freeform prose.
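One way to guarantee that consistency is to generate the tool sections from a structured spec rather than writing them by hand. A sketch (the `ToolDoc` class and its fields are illustrative, not part of any standard):

```python
from dataclasses import dataclass, field

@dataclass
class ToolDoc:
    name: str
    method: str
    endpoint: str
    description: str
    # Each parameter: (name, type, required)
    params: list[tuple[str, str, bool]] = field(default_factory=list)

    def to_markdown(self) -> str:
        """Render the tool in the same structure every time."""
        lines = [
            f"### {self.name}",
            f"- **Endpoint:** {self.method} {self.endpoint}",
            f"- **Description:** {self.description}",
            "- **Parameters:**",
        ]
        for pname, ptype, required in self.params:
            req = "required" if required else "optional"
            lines.append(f"  - `{pname}` ({ptype}, {req})")
        return "\n".join(lines)
```

Generating from a spec means a renamed parameter updates everywhere at once, instead of drifting out of sync across sections.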

Step 4: Add Guidance, Not Just Specs

This is where agents.md provides value beyond a JSON schema. Include:

  • When to use one endpoint vs another
  • Common workflows (e.g. "to create a complete project, first create the project, then add members, then create the first sprint")
  • Pitfalls to avoid (e.g. "the search endpoint returns a maximum of 50 results per page — always check the total and paginate if needed")
  • Performance tips (e.g. "use the summary endpoint instead of fetching all tasks individually")

Step 5: Link to Related Resources

End with links to your mcp.json, agent.json, API reference, and support channels. This connects the human-readable instructions to the machine-readable manifests and provides escalation paths when something goes wrong.

GitHub AGENTS.md: Instructions for Coding AI Agents

The second type of agents.md lives in your repository root and tells coding AI agents — Claude Code, OpenAI Codex, GitHub Copilot, Cursor, Windsurf — how to work on your codebase. This emerged from the broader trend of AI-assisted development, where teams realised that giving AI agents project-specific context dramatically improves code quality.

The Origin: From CLAUDE.md to AGENTS.md

The convention started with tool-specific files. Anthropic introduced CLAUDE.md for Claude Code — a Markdown file that's automatically loaded into every conversation, giving the agent project context at session start. GitHub Copilot uses .github/copilot-instructions.md. Cursor uses .cursor/rules/. Google's Jules uses JULES.md. JetBrains Junie uses .junie/guidelines.md.

The proliferation of tool-specific files created an obvious problem: maintaining five different instruction files with essentially the same content. In early 2026, Sourcegraph proposed AGENTS.md as a universal standard, gaining backing from OpenAI, Google, and GitHub Copilot. The idea: one file that all coding agents read.

What Belongs in a GitHub AGENTS.md

A good GitHub AGENTS.md covers everything a new developer (human or AI) needs to start contributing effectively:

Project Overview

What the project is, what it does, and who uses it. 2-3 sentences. This orients the agent before it reads any code.

Tech Stack

Framework, language version, key dependencies, and their purposes. Be specific: "Next.js 15 with App Router" not "React app".

Setup Commands

Every command the agent needs to set up, build, test, and lint the project:

```markdown
## Commands
npm install          # Install dependencies
npm run dev          # Development server at localhost:3000
npm run build        # Production build (catches type errors)
npm run lint         # ESLint
npm test             # Run test suite
npm run test:e2e     # Playwright end-to-end tests
```

Code Style Rules

Formatting, naming conventions, import order, preferred patterns. Be specific enough that the agent produces code that matches existing style:

```markdown
## Code Style
- TypeScript strict mode, no `any` unless unavoidable
- 2-space indentation
- Named exports over default exports
- async/await over .then() chains
- Prefer const over let
- No console.log in committed code
- File naming: components PascalCase, utilities camelCase, configs kebab-case
```

Directory Structure

A tree of the key directories with brief descriptions. This saves the agent from having to explore the entire project to understand the layout.

Forbidden Patterns

Explicitly list things the agent should never do. This is surprisingly effective:

```markdown
## Forbidden
- NEVER commit API keys or secrets to git
- NEVER use emojis in code or UI — use Lucide icons
- NEVER add default exports
- NEVER use `var` — always `const` or `let`
- NEVER modify the database schema without a migration file
- NEVER push directly to main — use feature branches
```

Testing Requirements

What must be tested, how to run tests, and any conventions:

```markdown
## Testing
- All API routes must have integration tests
- Use vitest for unit tests, Playwright for E2E
- Test files go next to the source file: `utils.ts` → `utils.test.ts`
- Run `npm run build` before considering any change complete
- Minimum: test the happy path and one error case per endpoint
```

Complete Example: GitHub AGENTS.md for a Codebase

# AGENTS.md — TaskHub Web Application

## Project Overview

TaskHub is a task management platform for software teams. Next.js 15 App
Router frontend, Supabase backend (PostgreSQL + Auth), deployed on Vercel.
Live at taskhub.com.

## Tech Stack

- Framework: Next.js 15, App Router, TypeScript 5.4
- Styling: Tailwind CSS v4
- Backend: Supabase (auth, database, storage, realtime)
- AI: Gemini 2.0 Flash for task summaries and natural language search
- Email: Resend for transactional emails
- Testing: Vitest (unit), Playwright (E2E)
- Icons: Lucide React (never emojis)

## Commands

```bash
npm install          # Install dependencies
npm run dev          # Dev server at localhost:3000
npm run build        # Production build — run before every commit
npm run lint         # ESLint
npm test             # Vitest unit tests
npm run test:e2e     # Playwright E2E tests
```

## Project Structure

```
src/
├── app/               # Next.js App Router
│   ├── (auth)/        # Login, signup, password reset
│   ├── (dashboard)/   # Main app (requires auth)
│   ├── api/           # API routes
│   └── layout.tsx     # Root layout
├── components/
│   ├── ui/            # Reusable primitives (Button, Input, Dialog)
│   ├── tasks/         # Task-specific components
│   └── projects/      # Project-specific components
├── hooks/             # Custom React hooks
├── lib/               # Supabase client, utilities
├── types/             # TypeScript type definitions
└── config/            # App configuration
```

## Code Style

- TypeScript strict mode — no `any` unless absolutely necessary
- 2-space indentation
- Named exports only — no default exports
- async/await over .then() chains
- File naming: PascalCase for components, camelCase for hooks/utils
- Import order: React → Next → external libs → internal → types → styles
- Prefer server components; add "use client" only when needed

## Database

- All schema changes go through Supabase migrations
- Never modify tables directly in the dashboard
- RLS (Row Level Security) is enabled on all tables
- Use the typed Supabase client from `lib/supabase/`

## Environment Variables

- All secrets in `.env.local` (never committed)
- Use `NEXT_PUBLIC_` prefix only for client-safe values
- Required vars documented in `.env.example`

## Forbidden

- NEVER commit secrets, API keys, or .env files
- NEVER use emojis in code, UI, or documentation
- NEVER add default exports
- NEVER skip the build check before committing
- NEVER modify RLS policies without review
- NEVER use `var` or untyped `any`
- NEVER add console.log to committed code

## Testing

- Unit tests: `*.test.ts` next to source files
- E2E tests: `tests/e2e/*.spec.ts`
- Every API route needs at least one integration test
- Run `npm run build` before considering any change done
- Test the happy path + one error case minimum

## Deployment

- Push to main triggers Vercel auto-deploy
- Preview deployments on PR branches
- Environment variables configured in Vercel dashboard

AGENTS.md vs CLAUDE.md vs .cursorrules: The Naming Landscape

The coding agent instruction file space is fragmented across vendors. Here's the current state as of early 2026:

| Tool | File/Location | Scope | AGENTS.md Support |
|------|---------------|-------|-------------------|
| Claude Code | CLAUDE.md (project root + ~/.claude/CLAUDE.md) | Auto-loaded every session | Being added |
| GitHub Copilot | .github/copilot-instructions.md | Repo-level instructions | Supported |
| OpenAI Codex | AGENTS.md | Task context | Primary file |
| Cursor | .cursor/rules/ directory | Rule files for different contexts | Reads AGENTS.md |
| Windsurf | .windsurfrules | Project-level rules | Planned |
| Google Jules | JULES.md | Repository instructions | Planned |
| JetBrains Junie | .junie/guidelines.md | IDE agent context | Planned |

The Convergence Toward AGENTS.md

The industry is moving toward AGENTS.md as the universal standard. The logic is compelling: maintaining separate files with identical content for five different tools is wasteful and error-prone. When Sourcegraph proposed the standard with backing from major vendors, the direction became clear.

The practical recommendation today: create both AGENTS.md and your tool-specific file (e.g. CLAUDE.md for Claude Code users). Use AGENTS.md as the canonical source and either duplicate content or create symlinks for backwards compatibility. As tools add native AGENTS.md support, the tool-specific files can be retired.

Symlink Strategy

```bash
# Create AGENTS.md as the canonical file
# Then symlink for backward compatibility

ln -s AGENTS.md CLAUDE.md
ln -s AGENTS.md GEMINI.md

# Or if you need vendor-specific additions,
# have AGENTS.md as shared content and separate files for overrides
```

Note: Claude Code currently reads CLAUDE.md and has its own hierarchical loading (project-level, user-level, and directory-level). If you're a Claude Code user, keep CLAUDE.md as your primary file for now and mirror important content to AGENTS.md for other tools.
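If you adopt the symlink strategy, a small repo check can stop the two files from silently drifting apart. A sketch of one such check, demonstrated in a throwaway directory (purely illustrative, not part of any tool):

```python
import os
import tempfile

def mirror_ok(repo_root: str) -> bool:
    """True if CLAUDE.md is a symlink pointing at AGENTS.md."""
    claude = os.path.join(repo_root, "CLAUDE.md")
    return os.path.islink(claude) and os.readlink(claude) == "AGENTS.md"

# Demonstration in a temporary "repo"
with tempfile.TemporaryDirectory() as repo:
    with open(os.path.join(repo, "AGENTS.md"), "w") as f:
        f.write("# AGENTS.md\n")
    os.symlink("AGENTS.md", os.path.join(repo, "CLAUDE.md"))
    assert mirror_ok(repo)
```

Running a check like this in CI turns "remember to keep the files in sync" into an enforced invariant.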

Should You Have Both Types of agents.md?

The decision depends on what your project is:

| Scenario | Website agents.md | GitHub AGENTS.md |
|----------|-------------------|------------------|
| Public website with API | Yes | If open source |
| Private SaaS codebase | Yes (on the live site) | Yes (in the repo) |
| Open source library | If it has a docs site | Yes |
| Internal tool | No (not public-facing) | Yes |
| Content site / blog | Probably not needed | If you use AI coding tools |
| API-only product | Yes | Yes |

The two files don't conflict. They live in different locations (one in public/ for the web, one in the repo root for coding agents) and serve different audiences. Having both is the most complete approach.

The Full Discovery Stack

For a complete agentic web presence, your site should have:

  • llms.txt — Quick summary (50-80 lines)
  • agents.md — Detailed interaction instructions (200-2,000 lines)
  • context.md — Deep business context
  • mcp.json — Machine-readable tool manifest
  • .well-known/agent.json — A2A agent card with skills
  • JSON-LD schema — Structured data on every page

And in your repository:

  • AGENTS.md — Universal coding agent instructions
  • CLAUDE.md — Claude Code-specific additions (if using Claude Code)
  • .cursor/rules/ — Cursor-specific rules (if using Cursor)
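A simple audit script can report which pieces of this stack a project is missing. A sketch, assuming the Next.js `public/` convention from earlier in this article (adjust the paths for other frameworks):

```python
import os

SITE_FILES = ["llms.txt", "agents.md", "context.md", "mcp.json",
              ".well-known/agent.json"]
REPO_FILES = ["AGENTS.md"]

def missing_files(project_root: str) -> list[str]:
    """Return discovery-stack files the project doesn't have yet."""
    missing = []
    for f in SITE_FILES:
        # Website-facing files are served from public/ in Next.js
        if not os.path.exists(os.path.join(project_root, "public", f)):
            missing.append(f"public/{f}")
    for f in REPO_FILES:
        if not os.path.exists(os.path.join(project_root, f)):
            missing.append(f)
    return missing
```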

Practical Tips for Maintaining agents.md

Keep It Current

An outdated agents.md is worse than no agents.md. If your API changes and the examples in agents.md show the old schema, AI agents will generate broken requests. Make updating agents.md part of your API change process — the same way you update API docs.
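One lightweight way to enforce this is a CI-style freshness check that fails when the API definition changes without a matching agents.md update. A sketch comparing file modification times (the file names are illustrative; a real check might compare git commit history instead):

```python
import os

def agents_md_stale(agents_md: str, api_spec: str) -> bool:
    """True if the API spec changed more recently than agents.md."""
    return os.path.getmtime(api_spec) > os.path.getmtime(agents_md)
```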

Write for an AI, Not a Human

AI agents parse structured content better than prose. Use consistent heading levels, bullet lists, code blocks, and tables. Avoid narrative paragraphs where a structured list would be clearer. That said, natural language guidance ("prefer X over Y when Z") is valuable — just put it alongside structured specs, not instead of them.

Test with Real Agents

The best test for an agents.md is using it. Give an AI agent access to your file and ask it to perform a task on your site. If it succeeds without additional guidance, your agents.md is good. If it fails or asks clarifying questions, those gaps tell you what to add.

Start Small, Iterate

You don't need to document every endpoint on day one. Start with the 3-5 most important tools, get the structure right, then expand. A focused agents.md covering your core API is better than a sprawling one that tries to cover everything but gets details wrong.

Frequently Asked Questions

What is agents.md?

agents.md is a Markdown file that provides AI agents with detailed instructions for interacting with your website or codebase. For websites, it describes available tools, authentication methods, rate limits, and usage patterns. For GitHub repositories, it guides coding AI agents like Claude Code and OpenAI Codex on project structure, code style, and development workflows.

What is the difference between website agents.md and GitHub AGENTS.md?

Website agents.md (placed at /agents.md or /public/agents.md) tells AI agents how to interact with your website — available API endpoints, authentication, tools, and usage policies. GitHub AGENTS.md (placed in the repository root) tells coding AI agents how to work on your codebase — code style, architecture, test commands, and forbidden patterns. Same format, different audiences.

How is agents.md different from llms.txt?

llms.txt is a brief summary (under 100 lines) that tells AI what your site is and offers. agents.md is a detailed instruction document (200-2,000+ lines) that tells AI agents how to interact with your site. llms.txt is the elevator pitch; agents.md is the instruction manual. You can use both — llms.txt for discovery, agents.md for interaction.

What is the difference between AGENTS.md and CLAUDE.md?

CLAUDE.md is specific to Claude Code — it's automatically loaded into every conversation. AGENTS.md is an emerging universal standard backed by Sourcegraph, with support from GitHub Copilot, OpenAI, and Google. The convention is moving toward AGENTS.md as the single file all coding agents read, with tool-specific files like CLAUDE.md as optional additions for vendor-specific instructions.

What should I include in a website agents.md?

Include: a summary of your site and its capabilities, available API endpoints and tools with their schemas, authentication requirements and methods, rate limits and usage policies, error handling guidance, example request/response pairs, and links to deeper documentation. Write it as instructions addressed to an AI agent.

What should I include in a GitHub AGENTS.md?

Include: project overview and architecture, setup commands (install, dev server, build, test), code style rules (formatting, naming conventions, patterns), directory structure explanation, key dependencies and their purpose, forbidden patterns and common pitfalls, and testing requirements. Write it as instructions for an AI coding assistant.

Do AI agents actually read agents.md?

For coding agents, yes. Claude Code reads CLAUDE.md (and is adding AGENTS.md support), GitHub Copilot reads .github/copilot-instructions.md and AGENTS.md, and OpenAI Codex reads AGENTS.md. For website agents, consumption depends on the agent — agentic tools and research agents increasingly look for agents.md alongside llms.txt and mcp.json.

Should I have both a website agents.md and a GitHub AGENTS.md?

If your project is both a website and an open-source codebase, yes. The website agents.md goes in your public directory and tells visiting AI agents about your site's capabilities. The GitHub AGENTS.md goes in your repo root and tells coding agents about your codebase. They serve different audiences and don't conflict.

Make Your Website Agent-Ready

We build the full agentic web stack — agents.md, llms.txt, MCP servers, A2A endpoints, and structured schema — so AI agents can discover, understand, and interact with your business.