Most websites are invisible to AI agents. They can't be found, understood, or acted upon. We built a four-layer architecture that fixes this — and we built it for our own site first.
Everything you see on this page — the files, the API endpoints, the MCP server — is live and running at p0stman.com. You're looking at the proof of concept.
Single-page apps ship an empty <div id="root"> and render everything with client-side JavaScript. There is no content for a crawler to read. LLMs see nothing. Agents see nothing.
Even crawlable sites bury answers in prose. LLMs can't cite what they can't parse. No schema, no citations.
Humans click buttons. Agents need tools. Without an MCP server or WebMCP, agents have no programmatic way in.
A clinic manager in Dubai asked ChatGPT about voice agents for appointment booking. ChatGPT cited p0stman.com/locations/dubai/ — specifically the answer capsule at the top of that page, which directly answered her query.
She clicked through, read the page, and submitted a project enquiry the same session. The enquiry converted to a discovery call. Zero ad spend. Zero cold outreach. No SEO campaign. The architecture did the work.
The answer capsule on that page said: "p0stman builds AI voice agents for Dubai clinics and medical centres — handling appointment booking, patient follow-ups, and multilingual reception in Arabic and English." That sentence matched her query closely enough for ChatGPT to surface it as the answer.
LLM-referred visitors convert 4.4x better than organic search (Adobe, 2025). Bounce rate is 45% lower: these visitors arrive with high intent because an AI has already answered their question before they click.
We track LLM source in our own Supabase analytics — UTM source, referrer domain, and traffic category. ChatGPT automatically appends ?utm_source=chatgpt.com to links it cites, so attribution is captured even when the referrer header is stripped.
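For illustration, here is a minimal sketch of how that classification can work before a visit is written to Supabase. The table name (page_views), the column names, and the list of LLM domains are stand-ins rather than our actual schema.

```typescript
// Illustrative only: table, columns, and the source list are assumptions,
// not the production analytics schema.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

const LLM_SOURCES = ['chatgpt.com', 'perplexity.ai', 'claude.ai', 'gemini.google.com'];

// pageUrl is the full URL of the page being viewed; referrer may be null.
function classifyVisit(pageUrl: string, referrer: string | null) {
  const utmSource = new URL(pageUrl).searchParams.get('utm_source') ?? '';
  let referrerDomain = '';
  try {
    referrerDomain = referrer ? new URL(referrer).hostname : '';
  } catch {
    // Malformed referrer: treat as empty.
  }

  // ChatGPT appends ?utm_source=chatgpt.com to cited links, so the UTM
  // signal survives even when the Referer header is stripped.
  const llmSource =
    LLM_SOURCES.find((s) => utmSource.includes(s) || referrerDomain.endsWith(s)) ?? null;

  return {
    utm_source: utmSource || null,
    referrer_domain: referrerDomain || null,
    traffic_category: llmSource ? 'llm' : referrerDomain ? 'referral' : 'direct',
  };
}

export async function trackVisit(pageUrl: string, referrer: string | null) {
  const { error } = await supabase
    .from('page_views')
    .insert({ page_url: pageUrl, ...classifyVisit(pageUrl, referrer) });
  if (error) console.error('analytics insert failed', error);
}
```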
Most sites don't track this at all. Most sites don't even render content that LLMs can read. That's the gap this architecture closes.
Each layer builds on the last. All four are needed. All four are live on this site.
Before an agent can do anything, it needs to find and index the site. This layer is about discoverability.
Standard index file for LLMs — lists every page with descriptions and categories
Tells agents what they can do on this site, with full JSON-RPC examples
Complete company context in a single URL, shareable in any AI conversation
145 URLs indexed; submitted to Bing via IndexNow on every deploy (a submission sketch follows below)
16 AI crawlers explicitly allowed: GPTBot, ClaudeBot, TavilyBot, PerplexityBot and more
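The IndexNow submission itself is one HTTP call from the deploy pipeline. A minimal sketch is below; the verification key, its hosted .txt location, and the example URL are placeholders, and in practice the URL list comes straight from the sitemap.

```typescript
// Post-deploy sketch: tell Bing via IndexNow which URLs changed.
// The key and key file location are placeholders read from the environment.
const INDEXNOW_ENDPOINT = 'https://api.indexnow.org/indexnow';

async function pingIndexNow(urls: string[]) {
  const res = await fetch(INDEXNOW_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json; charset=utf-8' },
    body: JSON.stringify({
      host: 'p0stman.com',
      key: process.env.INDEXNOW_KEY!, // verification key
      keyLocation: `https://p0stman.com/${process.env.INDEXNOW_KEY}.txt`,
      urlList: urls, // in practice: every URL in the sitemap
    }),
  });
  if (!res.ok) throw new Error(`IndexNow ping failed: ${res.status}`);
}

// Example: called from a deploy hook.
pingIndexNow(['https://p0stman.com/locations/dubai/']).catch(console.error);
```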
Next.js App Router — every page renders real HTML. No <div id="root"> for crawlers to bounce off.
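For a sense of what that looks like in practice, here is an illustrative App Router page, loosely modelled on the Dubai location page. The file path, metadata, and copy are stand-ins, not the actual source; the point is that the headline and body text ship in the HTML response itself.

```tsx
// Hypothetical file: app/locations/dubai/page.tsx (illustrative, not the real source).
// A server component: the content below is present in the HTML response,
// so crawlers and LLMs read real text instead of an empty root div.
import type { Metadata } from 'next';

export const metadata: Metadata = {
  title: 'AI Voice Agents for Dubai Clinics | p0stman',
  description:
    'AI voice agents for Dubai clinics and medical centres: appointment booking, patient follow-ups, and multilingual reception.',
};

export default function DubaiPage() {
  return (
    <main>
      <h1>AI Voice Agents for Dubai Clinics and Medical Centres</h1>
      <p>
        Appointment booking, patient follow-ups, and multilingual reception in
        Arabic and English.
      </p>
    </main>
  );
}
```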
No human in the loop. No form-filling. Pure agent-to-server communication.
An agent representing a client can call book_discovery_call with name, email, and project description. The request lands directly in our CRM (a call sketch follows below).
An agent can call submit_inquiry with a detailed brief. Same result as filling in the contact form: stored in Supabase, and Paul gets notified.
get_services, get_portfolio, search_content — any agent doing research on AI studios can pull structured data without scraping HTML.
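To make the request shape concrete, here is a hedged sketch of an agent calling book_discovery_call through MCP's JSON-RPC envelope. The endpoint path, headers, and exact argument keys are assumptions; the authoritative request shapes are the JSON-RPC examples published on the site.

```typescript
// Hedged sketch of an agent-side call. The endpoint URL and argument keys
// are assumptions; the tools/call method and JSON-RPC envelope are standard MCP.
const res = await fetch('https://p0stman.com/mcp', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Accept: 'application/json, text/event-stream',
  },
  body: JSON.stringify({
    jsonrpc: '2.0',
    id: 1,
    method: 'tools/call',
    params: {
      name: 'book_discovery_call',
      arguments: {
        name: 'Dana Example', // hypothetical client details
        email: 'dana@example.com',
        project_description: 'Voice agent for clinic appointment booking',
      },
    },
  }),
});

console.log(await res.json()); // tool result: confirmation the enquiry reached the CRM
```

get_services, get_portfolio, and search_content use the same envelope; only the tool name and arguments change.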
The architecture is proven. We built it on our own site. Every component — the MCP server, the AI endpoints, the schema markup, the llms.txt, the IndexNow pipeline — is a repeatable pattern we can apply to any existing website or new build.
AI assistants, agent search engines, and reasoning models are replacing the query box. If your content isn't structured for them — if it's not in their training data, not indexable by their crawlers, not callable by their agents — you're invisible to an audience that is already making purchasing decisions without a single click on Google.
This is not the future. It is already happening. ChatGPT cites sources in every response. Perplexity crawls and indexes in real time. Claude reads your llms.txt before every session. Grok mines X for brand signals. Gemini surfaces structured schema directly in Google results. The question is not whether these systems will find your business — it is whether they will find it accurately, trust it, and act on it. That is exactly what the four-layer architecture solves.
Comprehensive guides on every layer of the agentic web stack.