A Chinese businessman lost 4.3 million yuan [~$612,000] after a fraudster used deepfake AI technology to impersonate his friend, according to local media reports.
The victim, identified only by his surname Guo, received a video call via WeChat last month from someone who both looked and sounded like his close friend.
The caller, who turned out to be a scammer, asked Guo to help him bid on a public project in another city – and Guo promptly wired him the funds.
What really happened?
Guo is the legal representative of a technology company based in Fuzhou, Fujian province. On April 20, the scammer video-called him pretending to be a friend. The fake friend said he wanted to use Guo’s company’s account to submit a 4.3 million yuan bid on a public tender.
As is often the case with such scams, the perpetrator promised to pay Guo back immediately, and persuaded him to make the transfer by sending a screenshot of a fake payment record. Without checking that the money had actually arrived, Guo dispatched two payments from his company account totalling 4.3 million yuan.
Only after completing the transfers did Guo call his real friend, who denied having made the video call or asked for any money. Guo promptly reported the matter to the police.
“The person on the other side didn’t ask me to lend him money during the chat. He said that he would transfer the money first, and then what I needed to do was transfer his money to his company’s account,” Guo said.
“He chatted with me via video call, and I also confirmed his face and voice in the video. That’s why we let our guard down,” he added.
Police in Fuzhou have recovered 3.4 million yuan of the stolen money so far. Once Guo reported the matter, they instructed a bank in another city to halt the transfers. Police are working to recover the rest of the money but have yet to identify the culprit.
AI security risks
As AI grows more advanced, it will become increasingly difficult to distinguish real identities from fake ones, according to experts. The technology has the potential to undermine the security and privacy of digital identities.
For example, it can be used to create deepfakes – realistic but fabricated images, videos, or voice recordings that impersonate a real person. Deepfakes can be deployed to forge digital identities, which cybercriminals can then leverage to commit fraud.
Guo’s case is a reminder that deepfake technology is a powerful tool that can be deployed for good or evil. Regulators have paid closer attention to the dangers of AI since OpenAI launched its popular chatbot ChatGPT in November 2022, sparking a global AI arms race.
As MetaNews previously reported, experts suggest developing new technologies that can detect and prevent the use of fake identities. This could include the use of biometric data, such as facial recognition or fingerprint scanning, to verify the identity of users.
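One simple, non-biometric way to verify a caller’s identity – useful precisely because a deepfake can clone a face and voice but not a secret – is a challenge–response check against a pre-shared key. The sketch below is purely illustrative (the key, names, and flow are assumptions, not anything described in the article), using only the Python standard library:

```python
import hmac
import hashlib
import secrets

def issue_challenge() -> str:
    """The verifier sends a fresh random nonce to the claimed identity."""
    return secrets.token_hex(16)

def sign_challenge(shared_key: bytes, challenge: str) -> str:
    """Only someone holding the pre-shared key can compute this tag."""
    return hmac.new(shared_key, challenge.encode(), hashlib.sha256).hexdigest()

def verify_response(shared_key: bytes, challenge: str, response: str) -> bool:
    """Constant-time comparison of the expected tag against the reply."""
    expected = sign_challenge(shared_key, challenge)
    return hmac.compare_digest(expected, response)

# Illustrative scenario: the real friend holds the key; a deepfake caller does not.
key = b"secret agreed in person beforehand"
challenge = issue_challenge()

genuine = verify_response(key, challenge, sign_challenge(key, challenge))
impostor = verify_response(key, challenge, sign_challenge(b"wrong key", challenge))
print(genuine, impostor)  # True False
```

The same idea works without any code: agreeing on a question only the real person can answer serves as a low-tech challenge that a video-call deepfake cannot pass.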
In China, the government is taking the issue of deepfakes seriously. In January 2023, the country’s Cyberspace Administration brought into force new regulations that ban the use of AI-generated content to spread “fake news.” The regulations also require providers of deepfake technology to clearly label their products as synthetic.