POSTMAN

Guide

February 2026

The web is about to change. Again.

The web was built for humans. Then optimised for search bots. Now AI agents are the next visitor type - and they need something different from your website than either humans or crawlers ever did. Google just shipped the first browser-native standard for making that work.

Published February 18, 2026 · by Paul Gosnell

What actually happened on February 10, 2026

On February 10, 2026, the Google Chrome team announced WebMCP - a new browser API that gives websites a standardised way to expose structured tools to AI agents. It shipped as an early preview in Chrome 146 Canary, behind a flag called "WebMCP for testing" at chrome://flags.

The spec was co-authored by engineers from Google and Microsoft, and it formalised what had been a W3C Community Group deliverable since September 2025. VentureBeat described it as "turning every website into a structured tool for AI agents." That framing is accurate.

What WebMCP is not

WebMCP is not the same as Anthropic's Model Context Protocol (MCP). Anthropic's MCP is a backend protocol - it connects AI models to data sources and tools server-side. WebMCP is a client-side browser API that lets your website expose structured actions directly to AI agents running in the user's browser. Different layer, different purpose, complementary rather than competing.

Right now, AI agents can already browse the web - ChatGPT can open pages, Claude can read them, Gemini can use them. But the way they do it is ungainly. Most agents rely on taking screenshots of the page or scraping the raw DOM, then trying to figure out what they're looking at and what they can click. It works, roughly, in the same way a person who can't read the language can still fill in a form if they squint hard enough. It's slow, error-prone, and brittle.

WebMCP gives websites a native way to bypass all of that. Instead of an agent guessing that the blue button probably submits the form, your site can declare: here is a make_reservation tool, it takes a date, a party size, and a contact email, and it returns a confirmation number. The agent calls it directly. No screenshots. No DOM parsing. Structured JSON in, structured JSON out.

The API has two parts:

  • Declarative API - You define agent-accessible actions directly in your HTML markup. Simple to add, no JavaScript required. Good for static actions like "contact us" or "get a quote."
  • Imperative API - A JavaScript interface for more complex interactions that need dynamic data or conditional logic. Good for booking flows, search, checkout sequences.

Both surfaces communicate with agents via structured JSON, which means agents can reliably parse what your site offers, validate their inputs against your schema, and handle the response programmatically - rather than hoping the confirmation message appears somewhere on the page.

The Chrome Developer Blog post is at developer.chrome.com/blog/webmcp-epp if you want the technical detail.
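To make the imperative surface concrete, here is a sketch of what registering a booking tool could look like. The namespace (`navigator.modelContext`), the method name (`registerTool`), and the descriptor fields are assumptions based on the early preview - the spec is still evolving, so treat this as an illustrative shape, not the final API.

```javascript
// Illustrative sketch only: navigator.modelContext.registerTool and the
// descriptor fields below are assumptions about the evolving spec, not
// the final WebMCP API.
const makeReservationTool = {
  name: "make_reservation",
  description: "Book a table. Returns a confirmation number.",
  inputSchema: {
    type: "object",
    properties: {
      date: { type: "string", description: "ISO date, e.g. 2026-03-15" },
      party_size: { type: "integer", minimum: 1 },
      contact_email: { type: "string" },
    },
    required: ["date", "party_size", "contact_email"],
  },
  // The agent calls this with structured input and receives structured
  // JSON back - no screenshots, no DOM parsing.
  async execute({ date, party_size, contact_email }) {
    // Stand-in for your real booking backend.
    const booking = { id: `RES-${date}-${party_size}` };
    return { confirmation_id: booking.id, status: "confirmed" };
  },
};

// Feature-detect before registering - the API is behind a flag in Canary.
if (typeof navigator !== "undefined" && navigator.modelContext?.registerTool) {
  navigator.modelContext.registerTool(makeReservationTool);
}
```

The point of the shape, whatever the final method names turn out to be: the site declares a name, a typed input schema, and a handler, and the agent never has to guess what the page can do.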

The three layers of being agent-ready

WebMCP is the most technically significant piece of the puzzle, but it is not the only one. There are three distinct layers to making your site legible and useful to AI agents - and they build on each other.

1. llms.txt (Available now)

A plain text file placed at /llms.txt on your domain that tells large language models what your company does, what content you have, and how to interpret your site. Think of it as robots.txt for the AI era - but instead of telling crawlers what not to index, you're giving LLMs a curated summary of who you are and what matters.

POSTMAN already has one at p0stman.com/llm.txt. It takes about an hour to write and costs nothing to deploy. This is the lowest-hanging fruit for any business that wants to show up accurately in AI-generated answers.

2. agents.md (Emerging standard)

An emerging format that goes a step further than llms.txt. Where llms.txt describes what your business is, agents.md declares what actions an AI agent can take on your site - before WebMCP standardises the technical plumbing.

This is still evolving as a convention, not yet a formal standard. But forward-thinking sites are already using it to signal to agents: here is what you can do here, here is how to do it, here is what you will get back. It bridges the gap between "LLM knows about us" and "agent can act on our site."

3. WebMCP (Chrome 146 Canary)

The browser-native standard. Co-authored by Google and Microsoft engineers, formalised through the W3C Community Group. WebMCP gives developers two concrete APIs - declarative (in HTML) and imperative (in JavaScript) - so that agents can discover what actions your site supports and invoke them via structured JSON without screenshots or DOM scraping.

This is currently in early preview in Chrome Canary. It is not production-ready yet. But the spec is real, the W3C process is running, and the direction of travel is clear. The businesses that understand it now will not be scrambling to implement it when it ships in stable Chrome.

What agents can do on a WebMCP site vs. a legacy site

The practical difference between a WebMCP-ready site and a standard website is significant once AI shopping assistants and personal agents become mainstream. Here is what that looks like for two common scenarios.

Scenario: booking a table at a restaurant

Legacy site (today): the agent has to guess

The agent takes a screenshot of the page. It tries to identify which element is the booking form. It fills in fields one by one, hoping the date picker accepts the format it's using. It submits. It looks for a confirmation message on the page. If the site uses a modal or redirect, there is a good chance the agent loses track of what happened. The user may or may not get a booking.

WebMCP site: the agent calls a tool directly

The agent discovers your site's make_reservation tool via the WebMCP API. It calls it with structured parameters: { "date": "2026-03-15", "party_size": 4, "contact_email": "..." }. Your site validates the request, creates the booking, and returns a structured confirmation. The agent reads the confirmation ID and tells the user. Done.

Scenario: B2B service inquiry

Legacy site (today): the agent navigates blind

The agent tries to find a contact form. It may or may not find it, depending on how the page is structured. It fills in fields with whatever the user told it, without knowing whether those fields match what the form actually wants. It submits and hopes for a confirmation. Multi-step forms, CAPTCHA, or JavaScript-rendered fields can all make it fail silently.

WebMCP site: the agent submits a qualified lead

Your site exposes a request_consultation tool with a typed schema: company name, headcount, use case, budget range. The agent knows exactly what information to gather from the user before calling it. The tool returns a calendar link or confirmation. You receive a properly qualified lead with all the context you actually need.
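The value of a typed schema is that the agent can check what it has gathered before ever calling the tool. A minimal sketch of that check - the `request_consultation` schema here is hypothetical, mirroring the scenario above:

```javascript
// Hypothetical schema for the request_consultation tool described above.
const consultationSchema = {
  required: ["company_name", "headcount", "use_case", "budget_range"],
};

// Returns the required fields the agent still needs to ask the user for.
function missingFields(schema, input) {
  return schema.required.filter((field) => !(field in input));
}

const gathered = {
  company_name: "Acme Ltd",
  headcount: 120,
  use_case: "support automation",
};
console.log(missingFields(consultationSchema, gathered)); // → [ 'budget_range' ]
```

Instead of submitting a half-complete form and hoping, the agent sees it is missing the budget range, asks the user one more question, and then calls the tool with a complete payload.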

The analogy that captures this well: the difference between a restaurant with a paper menu that a delivery robot has to photograph and interpret, versus a restaurant with a digital ordering API. Both restaurants serve food. Only one of them is set up for how the ordering ecosystem is evolving.

Who needs to think about this now

Not every business needs to act immediately. WebMCP is still in Canary. But the businesses that will be caught flat-footed are the ones that assume this is a distant concern. It is not. The underlying shift - AI agents acting on behalf of users to complete tasks on websites - is already happening with current browser-use tools. WebMCP standardises and accelerates it.

Here is who should be paying attention right now:

Hotels, restaurants, and service businesses with bookable actions

If your business has a booking or reservation flow, this is directly relevant. AI assistants are already being used to find and book services. When a user asks their AI assistant to "book dinner somewhere good on Friday," you want your restaurant to be findable, understandable, and bookable by that agent. Right now, most booking flows are agent-hostile. WebMCP changes that.

E-commerce

Agents are increasingly being used for shopping research and comparison. "Find me a gift under $100 for a 40-year-old who likes hiking" is a request that an AI assistant can handle by browsing multiple e-commerce sites, comparing options, and surfacing recommendations. Sites that expose structured product data and a checkout tool will be preferred over sites that require screenshot-based browsing.

B2B services

When a prospect's AI assistant is researching vendors on their behalf, your site needs to be legible to that agent. What do you do? What does working with you involve? What is the first step? If an agent can discover and invoke a "book a discovery call" tool directly from your site, your conversion rate from AI-referred traffic will be materially higher than competitors who require a human to navigate the page.

SaaS products

As AI assistants become the layer through which people manage their software subscriptions, sign up for trials, and compare pricing, SaaS products need to be discoverable and actionable by agents. The businesses with clear llms.txt files, structured pricing pages, and eventually WebMCP-exposed trial flows will win more of this traffic.

Anyone whose business depends on web traffic

The broader point is that as AI assistants become the primary interface through which people make decisions online, "being findable" shifts from SEO to LLM-legibility. Getting your site to rank on Google remains important. But ensuring AI assistants describe your business accurately and can act on your site when asked is becoming equally important.

What POSTMAN is building for clients

We build AI products and web applications for businesses. We also build the AI agents themselves. That means we sit on both sides of this shift - we understand what agents need from websites, and we know how to build those things into sites from the start.

Here is what we are doing concretely:

  • Agent-ready foundations on every new build. Every website we build now includes llms.txt as a deliverable, structured data (schema.org) baked into the page architecture, and a review of the site's key actions from an agent's perspective.
  • Advisory on llms.txt and agents.md. For existing client sites, we are running audits and implementing these files as a quick-win engagement. If an AI assistant is describing your business incorrectly or incompletely, a well-written llms.txt can fix that.
  • Watching WebMCP closely. We are tracking the W3C process and Chrome's implementation. When WebMCP is stable enough to implement for production sites, we will be the studio that already knows how to do it correctly - because we have been following it since before it shipped.
  • Building the agents, not just the sites. We build AI agents that use tools and take actions on the web. Understanding the agent side of this interaction makes us better at designing the website side. We know what a well-structured tool response looks like because we write the code that parses it.

Read more about our approach to AI agents in our complete guide to AI agents for business or view our services.

Practical steps you can take today

You do not need to wait for WebMCP to ship in stable Chrome to start getting agent-ready. Here are four things you can act on now.

1. Add llms.txt to your site

Write a plain text file that describes your company, your products or services, your target customer, and links to your key pages. Place it at the root of your domain as /llms.txt. Keep it factual and accurate - this is not marketing copy, it is information for machines.

A basic structure that works:

# Company Name

A one-paragraph description of what you do and who you serve.

## Services
- Service 1: Brief description
- Service 2: Brief description

## Key Pages
- /services - Full services overview
- /case-studies - Client work
- /contact - Get in touch

## Contact
hello@yourcompany.com

2. Add structured data (schema.org)

Schema.org markup has been around for a decade and most sites still do not use it properly. It tells search engines - and increasingly, AI systems - what type of entity your page represents, what your business hours are, what your services cost, and how to contact you. This already helps AI assistants understand your content, and it lays the groundwork for WebMCP implementations that want to expose typed tools with validated parameters.
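Here is a minimal example of what that markup carries for a local business. The values are placeholders; on a real page this object is embedded in a `<script type="application/ld+json">` tag in the document head.

```javascript
// A minimal schema.org LocalBusiness object (placeholder values).
// In your HTML this is embedded as:
//   <script type="application/ld+json">{ ... }</script>
const jsonLd = {
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  name: "Your Company",
  url: "https://yourcompany.com",
  email: "hello@yourcompany.com",
  telephone: "+44 20 0000 0000",
  openingHours: "Mo-Fr 09:00-18:00",
};

console.log(JSON.stringify(jsonLd, null, 2));
```

Pick the most specific `@type` that fits your business (Restaurant, Hotel, ProfessionalService, and so on) - the more specific the type, the more an agent can infer without guessing.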

3. Audit your forms and booking flows

Walk through your key conversion points and ask: if an AI agent was trying to complete this, what would break? Common failure modes include multi-step flows that do not preserve state, date pickers that only work via JavaScript mouse interactions, CAPTCHA, and confirmation messages that appear in iframes or modals. Fixing these is good for accessibility and mobile usability too - agent-readiness and human usability are aligned, not in tension.

4. Document your site's "actions" in plain English

Before WebMCP arrives, prepare by writing down what actions your site has that an agent should be able to trigger. For each one: what information does the action need, what does a successful completion look like, and what should the agent tell the user when it is done? This work becomes your agents.md file and, later, the basis for your WebMCP tool definitions.
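As a starting point, one entry in that document might look like the sketch below. The format is illustrative - agents.md is still a convention, not a standard, so the structure matters less than answering the three questions for each action.

```markdown
## Action: request_consultation

- What it does: books a 30-minute discovery call
- Inputs needed: company name, headcount, use case, budget range
- On success: returns a calendar link and a reference number
- Tell the user: the call is booked, with the date and the reference number
```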

The bigger picture: build for humans and agents

The important thing to understand is that this is not about replacing your website. Nobody is suggesting that AI agents will stop humans from visiting your site directly. What is happening is the same shift that happened with mobile: a new type of visitor emerged, with different constraints and different needs, and the businesses that adapted their sites early captured more of that traffic.

In 2010, "mobile-first design" sounded optional. By 2015, it was table stakes. Google penalised sites that were not mobile-friendly in search rankings. Users abandoned sites that did not work on their phones. The businesses that had moved early were fine. The ones that waited scrambled.

The agentic web is following the same curve. Right now, AI agent traffic is a small fraction of your visits. The agents that browse your site now are imprecise and clumsy. But the infrastructure is being standardised - WebMCP is evidence of that - and the adoption of AI assistants as a primary interface for web-based tasks is accelerating. The businesses that add llms.txt today, structure their data thoughtfully, and think about their site's "actions" are laying groundwork that will compound.

The businesses that ignore this will eventually have to retrofit it under pressure, the way companies were retrofitting mobile support in 2013 while their competitors had moved on to the next thing.

The honest assessment

WebMCP is real, the direction is clear, and the underlying shift (AI agents as web users) is already happening. But WebMCP itself is months away from production stability. The right move is not to pause everything and wait for it - it is to start laying the groundwork now (llms.txt, schema.org, auditing your flows) so that WebMCP implementation is a natural next step rather than a rebuild.

We build agent-ready websites and the AI agents that use them

If you want to understand what agent-readiness looks like for your specific site and business model, that is exactly the kind of conversation we have with clients. No jargon, no hype - just what is relevant for you, now.