British researchers have developed a new AI-powered hacking technique that can deduce what a person is typing — including passwords and bank account numbers — from the sound of their keystrokes, with up to 95% accuracy.
According to a preprint of the study posted on arXiv, the AI can learn the distinct sound each key makes when pressed. It then uses this knowledge to predict what people are typing based on the sounds it hears.
Even when you are typing quickly or quietly, the AI can still achieve a high degree of accuracy, researchers say. Hackers could leverage the model to steal sensitive information, such as passwords, using just a microphone.
‘Hacking is leveling up’
The research came from Durham University’s Joshua Harrison, the University of Surrey’s Ehsan Toreini, and Maryam Mehrnezhad of Royal Holloway, University of London. The study is titled: ‘A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards.’
Scientists used a smartphone microphone to record keystrokes on an Apple MacBook Pro and were able to identify the exact keys pressed with 95% accuracy. An iPhone 13 mini, placed 17cm from the keyboard, was used for this test.
Researchers typed on the MacBook Pro to produce the training data, pressing each of 36 individual keys 25 times so that the AI program could learn the acoustic signature of each key.
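The data-collection step described above can be sketched in miniature. The toy example below is not the study’s code: it synthesizes fake “keystroke” clips (real ones would come from a microphone), builds a labeled set of 36 keys x 25 presses, fingerprints each clip with an FFT magnitude spectrum, and classifies held-out clips by nearest centroid — a crude stand-in for the paper’s deep model.

```python
import numpy as np

rng = np.random.default_rng(0)
KEYS = [chr(c) for c in range(ord("a"), ord("z") + 1)] + list("0123456789")  # 36 keys
N_PRESSES = 25  # presses per key, as in the study

def synth_keystroke(key_idx, n=256):
    """Toy stand-in for a recorded keystroke: each key gets a distinct
    dominant frequency plus noise. Real data would be mic recordings."""
    t = np.arange(n)
    freq = 0.05 + 0.01 * key_idx
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

def features(clip):
    """FFT magnitude spectrum as a crude acoustic fingerprint
    (the paper uses spectrogram images instead)."""
    return np.abs(np.fft.rfft(clip))

# Build the training set: 36 keys x 25 presses each.
X = np.array([features(synth_keystroke(k))
              for k in range(len(KEYS)) for _ in range(N_PRESSES)])
y = np.repeat(np.arange(len(KEYS)), N_PRESSES)

# Nearest-centroid classifier: one average fingerprint per key.
centroids = np.array([X[y == k].mean(axis=0) for k in range(len(KEYS))])

def predict(clip):
    d = np.linalg.norm(centroids - features(clip), axis=1)
    return KEYS[int(np.argmin(d))]

# Evaluate on fresh synthetic presses.
trials = [(KEYS[k], predict(synth_keystroke(k)))
          for k in range(len(KEYS)) for _ in range(10)]
accuracy = sum(t == p for t, p in trials) / len(trials)
print(f"toy accuracy: {accuracy:.0%}")
```

On this clean synthetic data the classifier recovers nearly every key; the point is only to show the shape of the pipeline — collect labeled presses, extract acoustic features, match new sounds against them.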
The scientists also wanted to see how well the AI could do during a real-world scenario. So, they recorded the keystrokes from a laptop during a Zoom call, using the MacBook’s built-in microphone. The model reproduced the keystrokes with 93% accuracy. In Skype, it was 91.7% accurate.
“When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95%, the highest accuracy seen without the use of a language model,” the study said.
The scientists said the new AI hacking technique is made possible by the increasing number of microphones that are now within acoustic range of keyboards, like the ones found in laptops, smartphones, and smart speakers.
Avoiding AI cyberattacks
For the study, researchers trained an image classifier from Google called “CoAtNet.” The keystroke recordings were first converted into spectrogram images, which the classifier then learned to distinguish. The team experimented with different learning-rate, data-split, and epoch parameters to improve prediction accuracy.
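The audio-to-image conversion that feeds a classifier like CoAtNet can be illustrated with a plain short-time Fourier transform. This is a simplified sketch, not the paper’s exact preprocessing (the study uses mel-scaled spectrograms):

```python
import numpy as np

def spectrogram(clip, win=64, hop=16):
    """Slide a Hann window over a 1-D recording and take FFT magnitudes
    per frame, yielding a 2-D time-frequency 'image' an image
    classifier can consume."""
    window = np.hanning(win)
    frames = [clip[i:i + win] * window
              for i in range(0, len(clip) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)

# A fake 1024-sample "keystroke": a tone burst plus noise.
rng = np.random.default_rng(1)
clip = np.sin(2 * np.pi * 0.1 * np.arange(1024)) + 0.1 * rng.standard_normal(1024)

img = spectrogram(clip)
print(img.shape)  # 33 frequency bins x 61 time frames
```

Each keystroke becomes one such image, so the recognition problem reduces to image classification — which is why an off-the-shelf vision architecture works.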
According to the researchers, AI, microphones, and video calls together “present a greater threat to keyboards than ever.”
Hacking is leveling up.
— Nagato (@NagatoDharma) August 5, 2023
However, the scientists say that the AI program does not work equally well for all keyboards. They say the model must be trained with additional references, such as a list of keyboard layouts, to help it understand the character to which each keystroke corresponds.
Users of keyboards that produce audible clicks when typing face the greatest risk. Quieter membrane-based keyboards aren’t safe either. But touch typing reduces the AI model’s accuracy by between 40% and 64%, the study said.
Researchers suggested people use randomly generated passwords that include a variety of uppercase and lowercase letters, numbers, and symbols. People can also use software that reproduces or filters keystroke sounds or play white noise to muddle the AI’s accuracy.
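The first countermeasure — randomly generated passwords mixing character classes — is straightforward with Python’s cryptographically secure `secrets` module. A minimal sketch (the function name and length are illustrative choices, not from the study):

```python
import secrets
import string

def random_password(length=16):
    """Draw characters uniformly from letters, digits, and symbols
    using the CSPRNG-backed `secrets` module, retrying until every
    character class appears at least once."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(random_password())
```

Mixing cases matters here specifically because the attack struggles with the extra shift-key sound, and random strings deny the model any language-level guessing.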