Most websites are invisible to AI agents. They can't be found, understood, or acted upon. We built a four-layer architecture that fixes this — and we built it for our own site first.
Everything you see on this page — the files, the API endpoints, the MCP server — is live and running at p0stman.com. You're looking at the proof of concept.
Single-page apps ship an empty <div id="root"> and render everything client-side. There is no content for a crawler to read. LLMs see nothing. Agents see nothing.
Even crawlable sites bury answers in prose. LLMs can't cite what they can't parse. No schema, no citations.
Humans click buttons. Agents need tools. Without an MCP server or WebMCP, agents have no programmatic way in.
Each layer builds on the last. All four are needed. All four are live on this site.
Before an agent can do anything, it needs to find and index the site. This layer is about discoverability.
Standard index file for LLMs — lists every page with descriptions and categories
Tells agents what they can do on this site, with full JSON-RPC examples
Complete company context in a single URL — shareable to any AI conversation
145 URLs indexed; submitted to Bing via IndexNow on every deploy
16 AI crawlers explicitly allowed: GPTBot, ClaudeBot, TavilyBot, PerplexityBot and more
Next.js App Router — every page renders real HTML. No <div id="root"> for crawlers to bounce off.
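The llms.txt referenced above is plain Markdown: a title, a one-line summary, and a linked page list. A minimal sketch of generating one at deploy time; the page entries here are hypothetical examples, not the actual p0stman.com index:

```typescript
// Illustrative sketch: rendering a minimal llms.txt index at build time.
// The site summary and page list below are placeholders, not the live file.
interface PageEntry {
  title: string;
  url: string;
  description: string;
}

function renderLlmsTxt(siteName: string, summary: string, pages: PageEntry[]): string {
  const lines = [`# ${siteName}`, "", `> ${summary}`, "", "## Pages", ""];
  for (const p of pages) {
    lines.push(`- [${p.title}](${p.url}): ${p.description}`);
  }
  return lines.join("\n") + "\n";
}

const example = renderLlmsTxt(
  "POSTMAN",
  "AI product studio: services, case studies, and agent endpoints.",
  [
    { title: "Services", url: "https://p0stman.com/services", description: "What we build and pricing" },
  ],
);
```

Regenerating the file on every deploy keeps the index in lockstep with the sitemap, so crawlers never see a stale page list.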
Being findable isn't enough. Agents need structured, machine-readable data — not prose they have to parse.
Full company data as structured JSON: services, case studies, pricing, contact methods
Every service with name, slug, description, price_from, currency, timeline, URL
All case studies with industry, summary, tech stack, timeline, URL
Organization, Service, Article, FAQPage, CaseStudy markup on every page. Crawlable by any agent.
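Schema markup of this kind is JSON-LD embedded in the page head. A hedged sketch of how an Organization block might be emitted; the field values are placeholders, not the live p0stman.com markup:

```typescript
// Sketch: JSON-LD Organization markup as emitted into a page <head>.
// Values are illustrative placeholders, not the actual site data.
const organizationLd = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "POSTMAN",
  url: "https://p0stman.com",
};

// Serialized into a script tag so any crawler can parse it without
// executing page JavaScript.
const scriptTag = `<script type="application/ld+json">${JSON.stringify(organizationLd)}</script>`;
```

The same pattern extends to Service, Article, FAQPage, and CaseStudy types: one serialized object per entity, rendered server-side into the static HTML.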
The direct answer is in the first 30% of every content page — the part LLMs cite 44% of the time.
The third layer is where the architecture becomes genuinely different. Agents can book calls, search the portfolio, and submit enquiries — without any human in the loop.
Full MCP server — JSON-RPC 2.0 protocol. Any MCP-compatible agent can call it.
Discovery manifest. Agents find the tool list here.
Schedule a 30-minute discovery call. No human needs to be involved.
Submit a project enquiry directly into the CRM.
Browse services and case studies as structured JSON.
WebMCP — all 5 tools registered for browsers with the flag enabled (Chrome 146+). Coming mainstream.
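Once an agent has the tool list, invoking a tool is a standard MCP tools/call request over JSON-RPC 2.0. A sketch of the envelope a client would build; the category argument shown is an assumption for illustration, not the server's published schema:

```typescript
// Sketch of the JSON-RPC 2.0 envelope an MCP client POSTs to call a tool.
// The argument fields passed here are illustrative assumptions.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
};

let nextId = 1;

function buildToolCall(name: string, args: Record<string, unknown>): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id: nextId++,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// POST this object as the JSON body to the MCP endpoint (e.g. /api/mcp).
const call = buildToolCall("get_services", { category: "ai" });
```

The same envelope works for every tool; only `name` and `arguments` change between calls.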
Layer 4 is the full handshake. Other AI agents — not just tools — can discover Zero, send tasks, and receive structured responses. Zero acts as an autonomous agent peer.
AgentCard — the machine-readable manifest any A2A-compatible agent discovers first. Name, skills, endpoint, capabilities.
A2A JSON-RPC task endpoint. POST a task, Zero calls Gemini and responds. Real inference with company context injected.
Three declared skills — any orchestrating agent knows what Zero can do before sending a task.
Public discovery endpoint. Any A2A-compatible agent can reach Zero — no API key required.
All A2A interactions logged to Supabase agent_sessions — agent UA, task, response, timestamp.
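A2A discovery starts from the AgentCard. A sketch of what such a manifest can look like; the field names follow the AgentCard shape, but the skill ids, names, and capability values here are illustrative, not Zero's actual declarations:

```typescript
// Sketch of an A2A AgentCard, the machine-readable discovery manifest.
// All values are illustrative placeholders, not Zero's real card.
const agentCard = {
  name: "Zero",
  description: "Autonomous agent peer for POSTMAN",
  url: "https://p0stman.com/api/agent",
  capabilities: { streaming: false },
  skills: [
    { id: "services-qa", name: "Answer questions about services" },
    { id: "portfolio-qa", name: "Answer questions about case studies" },
    { id: "enquiry-routing", name: "Route project enquiries" },
  ],
};
```

An orchestrating agent reads this card first, matches the declared skills against its task, and only then sends a tasks/send request to the listed endpoint.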
{
  "jsonrpc": "2.0",
  "method": "tools/list",
  "id": 1,
  "params": {}
}

This is calling the real /api/mcp endpoint right now, from your browser. Any AI agent with MCP support can do exactly this — no human needed.
Schedules a free 30-minute discovery call with Paul. Request is stored in the CRM immediately. No human action required to receive it.
Submits a detailed project enquiry. Equivalent to filling in the contact form — stored in Supabase, Paul is notified.
Returns all POSTMAN services with name, description, price_from, currency, timeline, and URL. Optionally filter by category.
Returns all case studies with title, industry, summary, tech stack, and URL. Optionally filter by industry (e.g. hospitality, fintech).
Searches across services and case studies by keyword. Returns matched service names and case study titles with URLs.
Returns the full list of available tools with their input schemas. Standard MCP discovery — any agent should call this first.
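Following that discovery rule, a client would fetch tools/list and check what is available before invoking anything. The response shape (result.tools[].name) follows the MCP specification; the sample list below mirrors the tools described above, not a captured server response:

```typescript
// Sketch: checking a tools/list result before invoking anything.
// The sample data mirrors the tool list described in the text.
interface ToolInfo {
  name: string;
  description?: string;
  inputSchema?: unknown;
}
interface ToolsListResult {
  tools: ToolInfo[];
}

function hasTool(result: ToolsListResult, name: string): boolean {
  return result.tools.some((t) => t.name === name);
}

const sample: ToolsListResult = {
  tools: [
    { name: "book_discovery_call" },
    { name: "submit_inquiry" },
    { name: "get_services" },
    { name: "get_portfolio" },
    { name: "search_content" },
  ],
};
// An agent calls tools/list first, then only invokes tools present here.
```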
This fires a real A2A tasks/send request to /api/agent. Zero (Gemini 2.0 Flash) processes it and responds. Not a mock — real inference, right now.
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tasks/send",
  "params": {
    "id": "demo-task",
    "message": {
      "role": "user",
      "parts": [
        { "text": "What AI services do you offer for fintech startups?" }
      ]
    }
  }
}

A clinic manager in Dubai asked ChatGPT about voice agents for appointment booking. ChatGPT cited p0stman.com/locations/dubai/ — specifically the answer capsule at the top of that page, which directly answered her query.
She clicked through, read the page, and submitted a project enquiry the same session. The enquiry converted to a discovery call. Zero ad spend. Zero cold outreach. No SEO campaign. The architecture did the work.
The answer capsule on that page said: "POSTMAN builds AI voice agents for Dubai clinics and medical centres — handling appointment booking, patient follow-ups, and multilingual reception in Arabic and English." That sentence matched her query closely enough for ChatGPT to surface it as the answer.
LLM-referred visitors convert 4.4× better than organic search (Adobe, 2025). Bounce rate is 45% lower — they arrive with high intent, having already had the question answered by an AI before clicking.
We track LLM source in our own Supabase analytics — UTM source, referrer domain, and traffic category. ChatGPT automatically appends ?utm_source=chatgpt.com to links it cites, so attribution is captured even when the referrer header is stripped.
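That attribution logic can be sketched as a small classifier: utm_source wins when present, since it survives stripped referrers, with the referrer hostname as a fallback. The domain list is illustrative, not our production configuration:

```typescript
// Sketch: classifying a visit as LLM-referred or not, per the logic in
// the text. The LLM domain list is an illustrative assumption.
const LLM_SOURCES = ["chatgpt.com", "perplexity.ai", "claude.ai", "gemini.google.com"];

function classifyTraffic(pageUrl: string, referrer: string): "llm" | "other" {
  // utm_source survives even when the referrer header is stripped.
  const utm = new URL(pageUrl).searchParams.get("utm_source") ?? "";
  if (LLM_SOURCES.includes(utm)) return "llm";
  try {
    const host = new URL(referrer).hostname;
    if (LLM_SOURCES.some((d) => host === d || host.endsWith("." + d))) return "llm";
  } catch {
    // Empty or malformed referrer: fall through to "other".
  }
  return "other";
}
```

ChatGPT's automatic `?utm_source=chatgpt.com` suffix is why the query-parameter check comes first: it is the one signal the browser cannot strip.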
Most sites don't track this at all. Most sites don't even render content that LLMs can read. That's the gap this architecture closes.
No human in the loop. No form-filling. Pure agent-to-server communication.
An agent representing a client can call book_discovery_call with name, email, and project description. The request lands directly in our CRM.
An agent can call submit_inquiry with a detailed brief. Same result as filling in the contact form — stored in Supabase, Paul gets notified.
get_services, get_portfolio, search_content — any agent doing research on AI studios can pull structured data without scraping HTML.
The architecture is proven. We built it on our own site. Every component — the MCP server, the AI endpoints, the schema markup, the llms.txt, the IndexNow pipeline — is a repeatable pattern we can apply to any existing website or new build.
AI assistants, agent search engines, and reasoning models are replacing the query box. If your content isn't structured for them — if it's not in their training data, not indexable by their crawlers, not callable by their agents — you're invisible to an audience that is already making purchasing decisions without a single click on Google.
This is not the future. It is already happening. ChatGPT cites sources in every response. Perplexity crawls and indexes in real time. Claude reads your llms.txt before every session. Grok mines X for brand signals. Gemini surfaces structured schema directly in Google results. The question is not whether these systems will find your business — it is whether they will find it accurately, trust it, and act on it. That is exactly what the four-layer architecture solves.