A finance worker at a Hong Kong-based multinational reportedly lost HK$200 million ($25.6 million) of company money after fraudsters used deepfake AI technology to impersonate the firm’s chief financial officer in a video conference call.
Police said the employee received a message last month from someone claiming to be the company’s London-based CFO. The person, who turned out to be a scammer, asked the worker to join an “encrypted” video call with four to six other staffers.
The employee was initially hesitant but became convinced during the video call because the others in attendance looked and sounded like people he knew from work, the Hong Kong Standard reported.
Swindling a finance professional
The fake CFO wasted no time, swiftly making urgent appeals for money transfers. Believing everyone else on the video call was real, the victim complied with the instructions, eventually making 15 transfers into five local bank accounts.
In total, the worker agreed to transfer HK$200 million—about US$25.6 million at the time. The employee found out he had been cheated a week later after checking with the company’s head office. He reported the matter to the police.
“(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” said Hong Kong police senior superintendent Baron Chan Shun-ching. Chan did not reveal the names of the company or the worker, or any further details.
According to Chan, the deepfake videos were generated with AI from recordings of past genuine online conferences. To add depth and credibility to the scam, the fraudsters also used WhatsApp, email, and one-to-one video calls with Hong Kong staff members.
“I believe the fraudster downloaded videos in advance and then used artificial intelligence to add fake voices to use in the video conference,” Chan said, and in another report, added:
“They used deepfake technology to imitate the voice of their targets reading from a script.”
Police said cases in which AI deepfake technology was used to cheat people out of their money have been on the rise in Hong Kong. Between July and September 2023, eight stolen local ID cards were used to make 90 loan applications and 54 bank account registrations, Chan said, as reported by CNN.
According to Hong Kong police, scammers had used AI deepfakes at least 20 times to trick facial recognition software “by imitating the people pictured on the identity cards.” Police have arrested six people in connection with such scams.
AI deepfakes worry global leaders
As AI becomes more advanced, experts say it will become increasingly difficult to distinguish between real and fake identities, undermining the security and privacy of digital identities.
For example, the technology can be used to create deepfakes: realistic but fabricated images, videos, or voices used to impersonate someone else, as the Hong Kong case highlights.
Regulators around the world have started to pay more attention to the dangers of AI since OpenAI launched its viral chatbot ChatGPT in November 2022.
In the U.S., senators introduced a bipartisan bill late last month that would allow victims portrayed in non-consensual AI-generated pornographic deepfakes to sue the creators of the videos.
The decision came after sexually explicit AI-generated images of Taylor Swift went viral on social media, including on X, where tens of millions of people viewed them before the platform formerly known as Twitter blocked searches for the pop singer.
In China, the country’s Cyberspace Administration last year issued new regulations that ban the use of AI-generated content to spread “fake news.” The regulations also require providers of deepfake technology to clearly label their products as synthetic.
In India, IT minister Rajeev Chandrasekhar recently warned that social media companies would be held accountable for AI deepfakes posted by users on their platforms. The warning came after a semi-nude AI-created video of Indian actor Rashmika Mandanna appeared online in November.