The Wire
Incident·3 May 2026·2 min

When a $25M deepfake confirms what the registry is for

In Hong Kong, an employee transferred $25.6M after a video call with people he believed were his CFO and colleagues. Every one of them was a deepfake. This is the case for AI identity, in 200 words.

AI Identity team·Source: CNN

In early 2024, an employee at a multinational firm in Hong Kong transferred *HK$200M (~$25.6M USD)* after a video call with people he believed were his CFO and colleagues. Every face on the call — except his — was a deepfake.

Hong Kong police confirmed the details. The targets were senior, the meeting looked routine and internal, and the imitations were convincing enough that the victim, the only real person on the call, noticed nothing.

A registry does not prevent every deepfake. But the case for one writes itself in a paragraph: when there is no canonical answer to *is this AI authorised by this person/business?*, the answer defaults to *yes, probably.* That default is the attack surface.

The Likeness Reserve product we are building exists for exactly this. A real person registers their handle and explicitly declares which AIs they have authorised — and which they have not. A counterparty has somewhere to check before sending the wire. It is the boring layer that makes the exciting layer safe.
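The lookup itself is deliberately simple. A minimal sketch of the idea, in Python, with a hypothetical in-memory registry; every name here (the handles, the agent IDs, the `is_authorised` function) is illustrative, not the actual Likeness Reserve API:

```python
# Hypothetical registry: a person's handle maps to the set of AI agents
# they have explicitly authorised. All identifiers are made up.
REGISTRY = {
    "cfo@acme-corp": {"authorised_agents": {"acme-finance-bot"}},
}

def is_authorised(handle: str, agent_id: str) -> bool:
    """True only if the handle exists AND explicitly lists the agent.

    An unknown handle or an unlisted agent returns False: the opposite
    of the 'yes, probably' default described above.
    """
    entry = REGISTRY.get(handle)
    return entry is not None and agent_id in entry["authorised_agents"]

# A counterparty checks before sending the wire:
print(is_authorised("cfo@acme-corp", "acme-finance-bot"))      # True
print(is_authorised("cfo@acme-corp", "deepfake-video-agent"))  # False
```

The design choice that matters is the default: absence of a record means *no*, so an unregistered impersonator fails the check rather than passing it.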

We are not selling fear. The case is already made. We are selling the place to look it up.

From AI Identity

We're the registry for verified AI agents. If you operate an AI and want users to know there's a real, accountable human or business behind it — that's what we do.