How Agent-Ready Is Your Domain? 11 Signal Layers That AI Agents Check

April 5, 2026 · OracleNet · 7 min read

The internet is moving from pages to protocols. AI agents don't browse websites — they read machine-readable signals: Agent Cards, DIDs, OpenAPI specs, payment protocols, trust documents. Most domains send almost none of these signals. We built a scanner that shows exactly where you stand.

→ Scan your domain now (free, no registration)

The Problem: Invisible Domains

In 2026, AI agents are becoming autonomous actors. Google's A2A protocol, Anthropic's MCP, the IETF's Agent Discovery drafts — the infrastructure for machine-to-machine coordination is being built right now. But there's a gap: most domains are invisible to these agents.

An AI agent looking at a typical domain sees a TLS certificate and maybe a robots.txt. That's it. No identity. No capabilities. No pricing. No trust signals. No way to know if this domain is a potential partner or just another website.

The standards exist. The /.well-known/ URI prefix was standardized in RFC 5785 back in 2010 and updated by RFC 8615 in 2019. A2A Agent Cards, W3C DIDs, OpenAPI specs, security.txt (RFC 9116) — all of these are machine-readable signals that a domain could be sending. Most don't.

11 Signal Layers: What Agents Look For

We mapped the landscape of machine-readable signals into 11 layers. Each layer answers a specific question that an AI agent needs answered before it can coordinate with another system:

| Layer | Signal | Question | Standards |
|-------|--------|----------|-----------|
| S0 | Frequency | Same clock? | Cache-Control, ETag, custom epoch |
| S1 | Presence | Alive? | HTTP response, robots.txt, sitemap |
| S2 | Identity | Who? | W3C DID, JWKS, security.txt, TLS |
| S3 | Capability | What can you do? | Agent Card, OpenAPI, llms.txt, MCP |
| S4 | Intent | What do you need? | Skills with examples, discovery endpoints |
| S5 | Offer | What does it cost? | x402, pricing in Agent Card, offer catalog |
| S6 | Deal | Can we close? | x402 settlement, deal endpoints, AP2 |
| S7 | Execution | Delivered? | API endpoints, MCP tools, A2A tasks |
| S8 | Proof | Verifiable? | JWKS signing, content hashing, blockchain anchors |
| S9 | Reputation | Trustworthy? | Reputation endpoints, rating systems |
| S10 | Immune | Safe? | Rate limiting, threat detection, security policies |

This isn't a proprietary framework — it's a reading of what already exists across A2A, MCP, IETF discovery drafts, and established web standards. The scanner checks for public, standard-compliant signals at each layer.

What We Found When We Scanned

We ran the scanner across domains in the AI agent ecosystem. The results were striking:

| Domain | Layers | Coverage | Notable Signals |
|--------|--------|----------|-----------------|
| openai.com | 3/11 | 27% | Presence, partial Identity |
| stripe.com | 4/11 | 36% | Presence, Identity, partial Capability |
| coinbase.com | 4/11 | 36% | Presence, Identity, partial Capability |
| ripple.com | 3/11 | 27% | Presence only |
| langchain.com | 2/11 | 18% | Presence only |
| modelcontextprotocol.io | 6/11 | 55% | Presence, Identity, Capability, Execution |

Even the websites of companies building agent protocols score no better than 55%. The signals exist in the standards — they're just not deployed.

Note: These results are a snapshot of public signals found via standard HTTP probes. They don't reflect internal capabilities — only what's publicly machine-readable. A domain might have excellent APIs but no /.well-known/ files advertising them.

What Each Layer Means

S0 — Frequency: Synchronization

Are you on the same clock? Cache-Control headers, ETags, and epoch signals tell an agent how fresh your data is and when to come back. Without this, an agent doesn't know if your content is from today or last year.
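As a sketch of what an agent reads at this layer, here is a minimal parser for the freshness headers. The function names and the exact shape of the summary are ours, not the scanner's:

```python
def parse_cache_control(value: str) -> dict:
    """Split a Cache-Control header into a {directive: value} map."""
    out = {}
    for part in value.split(","):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            k, v = part.split("=", 1)
            out[k.lower()] = v.strip('"')
        else:
            out[part.lower()] = True
    return out

def freshness_signals(headers: dict) -> dict:
    """Summarize the S0 signals from response headers: how long the
    content may be cached, and whether it carries a version marker."""
    cc = parse_cache_control(headers.get("Cache-Control", ""))
    return {
        "max_age_seconds": int(cc["max-age"]) if "max-age" in cc else None,
        "versioned": "ETag" in headers or "Last-Modified" in headers,
        "no_store": "no-store" in cc,
    }

# A domain sending strong frequency signals:
print(freshness_signals({
    "Cache-Control": "public, max-age=3600",
    "ETag": '"abc123"',
}))
# → {'max_age_seconds': 3600, 'versioned': True, 'no_store': False}
```

A domain that returns neither header gives the agent nothing to schedule around, which is exactly the gap this layer measures.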

S1 — Presence: Basic Reachability

The simplest signal: the server responds. robots.txt (1994) and sitemap.xml (2005) are the web's oldest machine-readable presence signals. Almost everyone has this.

S2 — Identity: Cryptographic Proof

A W3C DID (/.well-known/did.json) proves who you are with cryptographic keys. JWKS (/.well-known/jwks.json) lets agents verify your signatures. Security.txt (RFC 9116) shows you take security seriously. Most domains only have TLS certificates here — the bare minimum.
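A sketch of the identity probe, kept offline here: the paths are the standard ones named above, and the structural check reflects the W3C DID Core requirement that a document carry a `did:`-prefixed `id` (the helper name is ours):

```python
import json

# The well-known identity documents an agent probes for (S2).
IDENTITY_PATHS = [
    "/.well-known/did.json",      # W3C DID document (did:web method)
    "/.well-known/jwks.json",     # JSON Web Key Set for signature checks
    "/.well-known/security.txt",  # RFC 9116 security contact
]

def looks_like_did_document(doc: dict) -> bool:
    """Minimal structural check: a DID document must carry an `id`
    starting with `did:`; real validation goes much further."""
    return isinstance(doc.get("id"), str) and doc["id"].startswith("did:")

# What a did:web document for example.com might minimally contain:
sample = json.loads("""{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:web:example.com",
  "verificationMethod": []
}""")
print(looks_like_did_document(sample))  # → True
```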

S3 — Capability: What You Offer

This is where Agent Cards (/.well-known/agent.json or agent-card.json), OpenAPI specs, and llms.txt come in. They tell an agent exactly what tools, skills, and APIs you provide. Without these, an agent has to guess from your HTML — which it's not designed to do.
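For a feel of what such a card contains, here is a minimal Agent-Card-shaped document. Field names follow the public A2A schema as we read it; verify against the current spec before publishing, and treat the service name and URL as placeholders:

```python
import json

# Hedged sketch of a minimal A2A-style Agent Card.
card = {
    "name": "Example Service",
    "description": "Demo agent exposing a single lookup skill.",
    "url": "https://example.com/a2a",
    "version": "0.1.0",
    "skills": [
        {
            "id": "lookup",
            "name": "Record lookup",
            "description": "Fetch a record by ID.",
            "tags": ["search", "records"],
            "examples": ["look up record 42"],
        }
    ],
}

# Serialize for /.well-known/agent.json
print(json.dumps(card, indent=2))
```

Even this skeleton answers the S3 question — an agent reading it knows there is one skill, what it does, and where to call it — which raw HTML never can.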

S4 — Intent: Understanding Needs

Agent Cards with skill examples and tags help agents match their needs to your capabilities. Discovery endpoints with structured routing go further. This is one of the weakest layers across the ecosystem.

S5/S6 — Offer and Deal: Commerce

The x402 payment protocol, pricing in Agent Cards, and deal endpoints enable agent-to-agent commerce. Google's AP2 (Agent Payments Protocol) addresses this from the authorization side. Almost no one has deployed these yet.
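An illustrative sketch of the x402 handshake from the buyer side. The field names (`accepts`, `scheme`, `maxAmountRequired`) are simplified from our reading of the spec and may differ from the real wire format; consult the x402 documentation before implementing:

```python
def handle_response(status: int, body: dict) -> str:
    """Decide the next step after calling a possibly-paid endpoint."""
    if status == 402:
        # Server declined and attached its payment requirements.
        reqs = body.get("accepts", [])
        if not reqs:
            return "no payment options offered"
        offer = reqs[0]
        # A real client would now construct a signed payment payload
        # and retry the request with it attached.
        return f"pay {offer['maxAmountRequired']} via {offer['scheme']}"
    if status == 200:
        return "delivered"
    return f"unexpected status {status}"

print(handle_response(402, {
    "accepts": [{"scheme": "exact", "maxAmountRequired": "1000"}]
}))
# → pay 1000 via exact
```

The design point is that HTTP 402 stops being a dead end: the refusal itself carries machine-readable terms, so an agent can decide to pay and retry without a human in the loop.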

S7 — Execution: Can You Deliver?

API endpoints, MCP servers, and A2A task endpoints prove you can actually do something, not just describe it.

S8 — Proof: Verifiable Results

JWKS keys for signature verification, content hashing, and blockchain anchoring prove that results are authentic and untampered. Critical for compliance and trust, rarely implemented.
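The content-hashing half of this layer is simple enough to show directly. The canonicalization scheme below (sorted keys, compact separators) is a common choice, not a mandated standard — both sides just have to agree on it:

```python
import hashlib
import json

def content_hash(payload: dict) -> str:
    """Canonicalize a result and hash it, so a counterparty can verify
    the bytes were not altered in transit or storage."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Key order must not change the digest:
a = content_hash({"task": "scan", "score": 6})
b = content_hash({"score": 6, "task": "scan"})
print(a == b)  # → True
```

Signing that digest with the key published in your JWKS is what turns a hash into a verifiable proof.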

S9 — Reputation: Track Record

Machine-readable reputation endpoints let agents check your track record before engaging. Almost completely absent from the current ecosystem.

S10 — Immune: Self-Protection

Rate limiting headers, security policies, and threat detection signals show that a domain can protect itself — and by extension, the agents that connect to it.

Try It Yourself

The Signal Scanner is free, requires no registration, and runs against any public domain:

Scan Your Domain

See your signal profile across all 11 layers. Takes about 10 seconds.

Open Signal Scanner → Use via MCP (quantum_scan)

The scanner reads only public signals — no authentication required, no data stored, no invasive probing. It checks the same /.well-known/ files and HTTP headers that any AI agent would check.

For Developers: Use It Programmatically

The same scanner is available as an MCP tool. Any AI agent can call it:

POST https://tooloracle.io/quantum/mcp/
Content-Type: application/json

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "quantum_scan",
    "arguments": {
      "domain": "your-domain.com"
    }
  }
}

Returns a complete signal profile with strength percentages, evidence details, identified gaps, and a structured signal_profile object for each of the 11 layers.
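The request above can be assembled with nothing but the standard library. The payload builder mirrors the JSON-RPC body shown; the actual send is sketched in comments since it needs network access:

```python
import json

def quantum_scan_payload(domain: str) -> bytes:
    """Build the JSON-RPC 2.0 body for the quantum_scan tool call."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "quantum_scan", "arguments": {"domain": domain}},
    }).encode()

# Sending it (endpoint from this article):
# from urllib import request
# req = request.Request(
#     "https://tooloracle.io/quantum/mcp/",
#     data=quantum_scan_payload("your-domain.com"),
#     headers={"Content-Type": "application/json"},
# )
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```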

How to Improve Your Score

The most impactful steps, roughly in order of effort:

  1. Add an Agent Card at /.well-known/agent.json with your capabilities, following the A2A spec. This alone jumps you from 2 layers to 4+.
  2. Add a DID document at /.well-known/did.json with verification methods. Gives you cryptographic identity.
  3. Add security.txt at /.well-known/security.txt per RFC 9116. Takes 30 seconds.
  4. Add llms.txt, a text file at the root describing your service for LLMs. Simple and effective.
  5. Publish OpenAPI: if you have APIs, make the spec discoverable at /.well-known/openapi.json.
  6. Add JWKS at /.well-known/jwks.json for signature verification. Enables the Proof layer.

Each addition makes your domain more visible and more useful to the growing ecosystem of AI agents.

The Bigger Picture

The internet is getting a new layer. Not a new protocol — a coordination layer built on signals that already exist but are rarely deployed together. Agent Cards, DIDs, payment protocols, trust documents — the standards are there. The deployment is lagging.

The domains that deploy these signals early will be the ones AI agents find first, trust first, and transact with first. The ones that don't will be as invisible to agents as a website without SEO is to Google.

This isn't about any single standard winning. It's about deploying the signals that make machine-to-machine coordination possible — regardless of which specific protocol an agent uses.

→ Scan your domain and see where you stand

OracleNet · Manifest · GitHub · Live Pulse
OracleNet Signal Theory maps existing web standards into a coherent agent-readiness framework.
© 2026 FeedOracle Technologies