Technology

State-Sponsored Threat Actors Incorporate AI into Cyber Operations

Published February 14, 2024

In cybersecurity, defenders are not the only ones taking advantage of technological advances like artificial intelligence (AI). Recent findings from Microsoft and OpenAI reveal that state-sponsored threat actors are also adopting large language models (LLMs) to augment their cyber-attack strategies.

Exploitation of AI by Cyber Adversaries

These nation-state-backed threat groups are employing LLMs to streamline a variety of tasks: gathering intelligence, refining their scripting, and sharpening social engineering lures. Much like defenders, attackers are using LLMs to increase operational efficiency and to probe what these technologies can do.

Diverse Approaches Across State-Affiliated Groups

Reports have identified distinct ways in which several known state-sponsored groups have utilized LLMs in their operations:

  • Russian military intelligence actor Forest Blizzard (STRONTIUM) – researching satellite and military radar technologies related to the conflict in Ukraine, as well as honing scripting techniques.
  • North Korean threat actor Emerald Sleet (THALLIUM) – gathering intelligence on think tanks, drafting spear-phishing content, researching known vulnerabilities, troubleshooting technical issues, and learning web technologies.
  • Iranian threat actor Crimson Sandstorm (CURIUM) – seeking support with social engineering, troubleshooting errors, .NET development, and writing detection-evasion code.
  • Chinese state-affiliated threat actor Charcoal Typhoon (CHROMIUM) – crafting tools, refining scripts, researching technologies and vulnerabilities, and generating social engineering content.
  • Chinese state-affiliated threat actor Salmon Typhoon (SODIUM) – fixing coding bugs, translating technical documents, and gathering information on sensitive topics and geopolitical affairs.

Although Microsoft and OpenAI have not observed any novel AI-enabled attack techniques arising from this usage, they continue to monitor closely for significant threats and are proactively sharing their findings with the cybersecurity community to help prevent and counter such activity.

Preventive Measures and Recommendations

As part of their commitment to oversight and remediation, both organizations have disabled accounts linked to these adversarial groups. Microsoft further advocates updating frameworks such as MITRE ATT&CK to incorporate AI-related tactics, pledges to notify other AI service providers when their tools are misused, and is working to strengthen collaboration among stakeholders to build a robust defense against these threats. Finally, both organizations stress fundamental security measures such as multifactor authentication (MFA) and zero-trust strategies, given AI's potential to sharpen attacks that rely on social engineering and on exploiting vulnerable devices and accounts.

cybersecurity, AI, threats