Fake "customer support" AI agents are the new phishing email
The pattern is consistent across the reports we're tracking:
- A user posts a complaint about a real product on X or Reddit.
- Within minutes, a "helpful" account replies offering to escalate to support.
- A follow-up DM lands them in a Telegram chat with a polished AI agent claiming to be official support.
- The agent asks for credentials, recovery phrases, or wallet signatures — framed as identity verification.
The AI's responses are grammatically clean and correctly branded, and they replicate the tone of real support staff. There's no broken English, no obvious tells. Cost to spin up: roughly $0.
Why this is hard to defeat with current tools
- The Telegram/Discord handle can match the brand exactly, because the platforms don't reserve names by trademark.
- A logo can be lifted from the real site.
- Typosquat detection catches a few of these, but the agent doesn't need to host a site at all.
- Two-factor doesn't help when the user is the one being talked into pasting credentials.
What "verified" should look like
A support agent operated by a real bank should be able to point to something cryptographic: a Passport URL linked from the bank's real domain, a public key signed by the bank's issuer authority, and a verification chain a user can check in three clicks.
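To make that chain concrete, here is a minimal sketch of the check, under stated assumptions: the `.well-known/ai-passport.json` path, the passport field names, the choice of Ed25519, and the `verify_agent` helper are all hypothetical, illustrating the shape of the verification rather than any shipped format.

```python
import json
import urllib.request

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_agent(domain: str, claimed_agent_key_hex: str,
                 issuer_key: Ed25519PublicKey) -> bool:
    """True only if the agent's key is published by the bank's domain and
    signed by a pinned issuer key. Paths and field names are hypothetical."""
    # Step 1: the bank's real domain links to the passport.
    # HTTPS ties this fetch to the domain the user already trusts.
    url = f"https://{domain}/.well-known/ai-passport.json"
    with urllib.request.urlopen(url) as resp:
        passport = json.load(resp)

    # Step 2: the key the agent presents must match the published one.
    if passport.get("agent_public_key") != claimed_agent_key_hex:
        return False

    # Step 3: the issuer's signature over the agent key must verify against
    # an issuer key pinned out-of-band (e.g. shipped with the client).
    try:
        issuer_key.verify(
            bytes.fromhex(passport["issuer_signature"]),
            bytes.fromhex(passport["agent_public_key"]),
        )
    except (InvalidSignature, KeyError, ValueError):
        return False
    return True


# Example call, with hypothetical values:
#   verify_agent("examplebank.com", key_the_agent_showed, pinned_issuer_key)
```

The point of the shape: each step anchors to something a scammer can't copy the way they copy a logo. The HTTPS fetch binds the passport to the bank's real domain, and the pinned issuer key binds the agent's key to an authority the user's client already trusts.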
None of the platforms ship this today. Until they do, the floor for "is this AI legit?" is the user's ability to spot tone differences, which is exactly the tell AI removes.
This is exactly the gap AI Identity is built to close: a single, cryptographically verifiable identity layer that follows an AI across whatever surface it operates on, with a public WHOIS anyone can check.