The Wire
In the wild·3 May 2026·2 min

Spotted on HN: "how do I know which therapy chatbot to trust?"

A thread this week with 200+ comments and no good answer. The asker wanted to recommend a wellness chatbot to their teenage child and couldn't find a way to verify the operator was a real, accountable person.

AI Identity Editorial·Source: Hacker News

The thread is worth reading in full. Highlights of what people suggested:

  • "Check the App Store reviews." Reviews can be bought or AI-generated. Doesn't answer the question of who's on the other end.
  • "Look for HIPAA / GDPR badges." Self-declared. Easy to fake. Even when real, says nothing about who runs the AI day-to-day.
  • "Use one from a big company." Big companies have shipped chatbots with worse safety records than indie devs. Brand size isn't the variable.
  • "Trust your gut." With a 14-year-old. About mental health.

None of the answers gave the asker what they actually wanted: a way to confirm that *a real, identifiable human* is accountable for what this AI says.

That's what "Verified human attached" is for. The human stays pseudonymous to the public; AI Identity holds the verified ID. If something goes wrong — the AI says something genuinely harmful, the user is misled into a self-harm scenario, the bot turns out to be a scrape job by an offshore farm — there's a name and a legal recourse path. Without that, all you have is hope.
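To make the shape of that check concrete, here's a minimal sketch of what a registry lookup could look like from the client side. Everything in it is illustrative: the endpoint, the record fields, and the agent ID format are assumptions for the sake of the example, not a published AI Identity API.

```ts
// Illustrative only: endpoint, AgentRecord shape, and agent ID format
// are hypothetical, not a real AI Identity API.

interface AgentRecord {
  agentId: string;           // public identifier the chatbot displays
  verified: boolean;         // registry has confirmed a real human/business ID
  operatorPseudonym: string; // public handle; legal identity stays with the registry
  verifiedAt: string;        // ISO 8601 timestamp of the last identity check
}

async function checkAgent(agentId: string): Promise<AgentRecord | null> {
  // Hypothetical lookup URL; a real integration would use the registry's documented path.
  const res = await fetch(`https://registry.example/api/agents/${encodeURIComponent(agentId)}`);
  if (!res.ok) return null; // unknown agent: no accountable human on record
  return (await res.json()) as AgentRecord;
}

// Usage: only treat the bot as accountable if a verified record exists.
checkAgent("wellness-bot-123").then((record) => {
  if (record?.verified) {
    console.log(`Accountable operator on record: ${record.operatorPseudonym}`);
  } else {
    console.log("No verified human attached.");
  }
});
```

The point of the sketch is the shape of the answer, not the wire format: one lookup, one boolean that a verified identity exists, and a pseudonym that maps to a real person only inside the registry.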

We're not the answer to every question on that thread. But for the question "is there a real person accountable here" — we should be.

From AI Identity

We're the registry for verified AI agents. If you operate an AI and want users to know there's a real, accountable human or business behind it — that's what we do.