Microsoft Accuses Hackers from China, Russia, and Iran of Exploiting Its AI

Microsoft recently published a report accusing several hacking groups with alleged ties to Russian military intelligence, Iran, China, and North Korea of using its large language models (LLMs) to refine their cyberattacks. The announcement coincided with the tech giant's blanket ban barring state-sponsored hacking groups from using its AI technology.

“Regardless of whether any laws are broken or any terms of service are violated, we simply don’t want entities that we have identified, tracked, and know to be threats to have access to this technology,” said Tom Burt, Microsoft’s Vice President for Customer Security, in an interview with Reuters ahead of the report’s publication.

“This is one of the first cases, if not the first, of an AI company publicly discussing how cyber threat actors are using AI technology,” noted Bob Rotsted, Director of Threat Analysis at OpenAI.

Both OpenAI and Microsoft noted that the hackers’ use of AI tools remains at an early stage, with no groundbreaking outcomes observed. “They are merely using this technology like any other user,” stated Burt.

Despite these similarities in usage, Microsoft’s report highlights distinct objectives among the different hacking groups. Groups reportedly linked to the GRU (Russian military intelligence) are using LLMs to research satellite and radar technologies potentially relevant to ongoing military operations in Ukraine. North Korean hackers have employed LLMs to generate content likely intended for targeted phishing campaigns against regional experts. The Iranian contingent used the models to craft more convincing emails to potential victims, while Chinese hackers are experimenting with LLMs to ask about rival intelligence services, cybersecurity issues, and “prominent personalities.”