Spotted on HN: "how do I know which therapy chatbot to trust?"
The thread is worth reading in full. Highlights of the suggestions people offered:
- "Check the App Store reviews." Reviews can be bought or AI-generated, and they say nothing about who's on the other end.
- "Look for HIPAA / GDPR badges." Self-declared and easy to fake. Even when real, a compliance badge says nothing about who runs the AI day-to-day.
- "Use one from a big company." Big companies have shipped chatbots with worse safety records than indie devs. Brand size isn't the variable.
- "Trust your gut." With a 14-year-old. About mental health.
None of the answers gave the asker what they actually wanted: a way to confirm that *a real, identifiable human* is accountable for what this AI says.
That's what "Verified human attached" is for. The human stays pseudonymous to the public; AI Identity holds the verified ID. If something goes wrong — the AI says something genuinely harmful, the user is misled into a self-harm scenario, the bot turns out to be a scrape job by an offshore farm — there's a name and a legal recourse path. Without that, all you have is hope.
We're not the answer to every question on that thread. But for the question "is there a real person accountable here" — we should be.