The Wire
Field notes · 4 May 2026 · 4 min

How do you verify the operator of a Custom GPT?

Custom GPTs ship with a name and an avatar. Neither tells you who actually built it. Here is the lightest-weight way to bind a Custom GPT to a real, verifiable operator — and why we settled on it after looking at every alternative.

The problem

If you open a Custom GPT in ChatGPT, you can read its name, its description, and its instructions. None of those fields are verified. Anyone can spin up "Acme Corp Support Bot" tomorrow morning and put a perfectly believable description on it. The end-user has no way to tell whether they're talking to Acme or to someone impersonating Acme.

Same problem on every other no-code AI platform: Gemini Gems, Claude Projects, Poe bots, Vapi voice agents. The platform owns the runtime; you only own a few user-editable fields.

What does not work

We considered three patterns before landing on the current approach:

  1. Inject a signed JWT into the system prompt. Tried it. The prompt is visible to the end-user, and OpenAI doesn't guarantee the system prompt's exact text reaches the model unaltered. JWTs aren't designed to be human-readable, and asking users to ignore a 600-character soup of base64 is a bad pattern.
  2. Verify by checking the GPT's Action server. Works only if the GPT has Actions configured, and most Custom GPTs don't. Voice agents (Vapi, Retell) have the same gap.
  3. Manual review of every Custom GPT. Doesn't scale, and we have no reliable signal that a manual review yesterday is still valid today.

What does work

A short, human-readable reference line that the operator pastes into the one user-editable field they always control:

AI Identity: aii://aii_01HX... · verify at aiidentity.org/p/aii_01HX...

That line goes in the GPT's Description field (or the Gem's description, the Claude Project's custom instructions, the Vapi agent's bio: the same pattern everywhere). We periodically fetch the assistant's public page on the platform and confirm the reference is still rendered there. If it disappears, the binding lapses.
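
A minimal sketch of that periodic check. The `binding_is_live` helper is hypothetical, and the exact grammar of the `aii://` identifier is an assumption for illustration:

```python
import re
import requests

# Grammar of the reference line is assumed for illustration;
# the real aii_ identifier format may differ.
REFERENCE = re.compile(r"aii://(aii_[0-9A-Za-z]+)")

def binding_is_live(share_url: str, expected_id: str) -> bool:
    """Fetch the assistant's public page and check the reference still renders."""
    html = requests.get(share_url, timeout=10).text
    match = REFERENCE.search(html)
    # No reference, or a different identity, means the binding has lapsed.
    return match is not None and match.group(1) == expected_id
```

A production checker would retry before lapsing a binding; a single failed fetch says nothing about the operator.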

This is deliberately the same trust model as a domain TXT record: the binding is "whoever can write to this field is the operator we attest to." The cryptographic Passport JWT lives separately, on outbound HTTP headers (`X-AI-Identity`) when the platform supports them. The reference line is the lightest thing that survives third-party hosting.
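
Where you do control outbound HTTP, attaching the Passport is one header. A sketch, with the token value and destination as placeholders:

```python
import requests

PASSPORT_JWT = "eyJhbGciOi..."  # placeholder; a real signed Passport goes here

def call_downstream(url: str, payload: dict) -> dict:
    """Outbound call that carries the agent's cryptographic identity."""
    resp = requests.post(
        url,
        json=payload,
        headers={"X-AI-Identity": PASSPORT_JWT},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```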

How it looks end-to-end

  1. Operator creates an AI Identity in the dashboard (free for individuals).
  2. Operator pastes the reference line into the Custom GPT description.
  3. Operator submits the GPT's public share URL in their AI Identity dashboard.
  4. We fetch the share page within minutes and confirm the reference appears.
  5. End-users (or, more importantly, journalists, regulators, and other AI agents) can now look up the GPT by its share URL via WHOIS and get a verified-operator answer.
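
In code, that last lookup might look like this; the endpoint path and response shape are assumptions, not the published API:

```python
import requests

def whois_by_share_url(share_url: str) -> dict:
    # Hypothetical endpoint; the real lookup API may live elsewhere.
    resp = requests.get(
        "https://aiidentity.org/api/whois",
        params={"url": share_url},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Illustrative output: {"id": "aii_01HX...", "operator": "Acme Corp", "verified": True}
print(whois_by_share_url("https://chatgpt.com/g/g-XXXX-acme-support"))
```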

Per-platform install steps

We've catalogued the install pattern for every major hosted-assistant platform — Custom GPT, Gemini Gem, Claude Project, Claude Code, Cursor, Poe, Vapi, Retell, ElevenLabs Conversational, Zapier, Make, n8n, Lindy — at [/spec/integrations/hosted-assistants](/spec/integrations/hosted-assistants). Each platform gets its own anchor on a single hub page; the universal pattern is identical everywhere, and the platform-specific notes are short.

For platforms where you do run your own server (MCP, OpenCLAW, A2A, raw HTTP, Telegram, WhatsApp), the install is even simpler: an HTTP header. See [/spec/integrations](/spec/integrations) for those guides.
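
For example, a self-hosted agent speaking plain HTTP can attach the header to every response it sends. A minimal Flask sketch, with the token as a placeholder:

```python
from flask import Flask

app = Flask(__name__)
PASSPORT_JWT = "eyJhbGciOi..."  # placeholder; a real signed Passport goes here

@app.after_request
def attach_identity(response):
    # Every response this agent sends carries a verifiable identity.
    response.headers["X-AI-Identity"] = PASSPORT_JWT
    return response

@app.post("/chat")
def chat():
    return {"reply": "hello"}
```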

The bigger point

We don't need 13 separate verification protocols for 13 platforms. We need one universal rule (`paste the reference into the field you control`) that AI search engines and verifiers can recognise. The boring layer makes the exciting layer safe.

From AI Identity

We're the registry for verified AI agents. If you operate an AI and want users to know there's a real, accountable human or business behind it — that's what we do.