North Korean Cyber Threat Escalates with the Adoption of Generative AI

North Korean hackers are integrating generative AI into their cyberattacks, using it for phishing and social engineering and posing new challenges for defenders.

Notably, North Korean hackers are using artificial intelligence (AI) as part of their strategy to steal technology and money for the nation’s covert nuclear weapons programme.

The hackers are known for operations such as the Bangladesh central bank heist and the 2017 WannaCry ransomware attack that hit the UK’s National Health Service. They have previously targeted personnel at international defence, cybersecurity, and cryptocurrency companies.

OpenAI and Microsoft reveal how threat actors use AI

OpenAI and Microsoft confirmed that their AI services are used by hackers from North Korea, China, Russia, and Iran for malicious cyber activities. However, a new challenge has surfaced as South Korea has identified North Korean hackers targeting security officials with generative AI.

While North Korean hackers were previously limited by their ability to converse in English or Korean, generative AI now allows them to create convincing profiles on platforms like LinkedIn, improving their phishing and social engineering operations.

Microsoft said it worked with OpenAI to identify and neutralize many threats that used or attempted to exploit the AI technology they had developed.

In a blog post, Microsoft said that although the techniques were in their early stages and neither particularly novel nor unique, it was vital to expose them publicly, as US adversaries are using large language models to expand their capacity for network breaches and influence operations.

Defensive cybersecurity companies have long used machine learning, principally to identify unusual network activity. However, offensive hackers and criminals use it too, and the cat-and-mouse game has intensified with the arrival of large language models led by OpenAI’s ChatGPT.

Using Generative AI

Using generative AI, North Korean hackers can pose as recruiters, trick targets into completing technical exercises, and ultimately install spyware on their machines. They operate on platforms such as LinkedIn, Facebook, WhatsApp, and Discord.

ChatGPT and other AI services may help North Korean hackers create more advanced malware or other malicious software. Although safeguards exist to prevent misuse, attackers have found ways around them. North Korea has invested in enhancing its cyber capabilities, using proceeds from illicit cyber operations to finance its nuclear and ballistic missile programmes. The country also has access to Chinese artificial intelligence services.

AI programme in North Korea

South Korea’s National Intelligence Service issued a warning in 2024, stating that North Korea’s AI capabilities could result in more severe and targeted attacks.

According to a study, North Korea has a well-developed AI ecosystem, with both government and private entities demonstrating advanced machine learning capabilities.

During the COVID-19 pandemic, North Korea used AI tools to monitor mask compliance and detect symptoms. Its agencies have also applied pattern optimization to nuclear safety and wargaming simulations.

Private companies in North Korea claim to have incorporated deep neural network technology into security surveillance systems with intelligent IP cameras, and to offer fingerprint, voice, facial, and text recognition on mobile phones.

The study’s author, Kim Hyuk, said North Korea has demonstrated a comprehensive approach to developing its AI and machine learning capabilities across the government, academic, and commercial sectors.

Image credits: Shutterstock, CC images, Midjourney, Unsplash.
