Exploiting advanced AI tools for hacking and cyber espionage: Microsoft and OpenAI report on US adversaries, their tactics, and their targets.

Russia, China, and other US rivals are using the newest wave of artificial intelligence (AI) tools to advance their hacking capabilities and identify new targets for online espionage, according to a report released by Microsoft and OpenAI.

The report, released on Wednesday, sheds light on how top-tier government hacking teams are incorporating large language models (LLMs) into their malicious operations. It also describes the countermeasures being developed to combat these threats, adding to the ongoing debate about the risks of rapidly advancing AI technology and the global efforts to regulate its use.

The report highlights AI use by two Chinese government-affiliated hacking groups and by groups linked to Russia, Iran, and North Korea. These countries, regarded as primary concerns for Western cyber defenders, are actively leveraging AI for cyber espionage, presenting a growing threat to cybersecurity.

“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft stated in its findings.

The integration of AI has raised the sophistication and scope of cyber-attacks, compounding the challenges facing cybersecurity professionals and global defense efforts.

In response to the escalating threat of AI-enhanced cyber espionage, concerted efforts are underway to develop robust countermeasures that mitigate the risks these tools pose. The need to stay ahead of the evolving threat landscape is driving intensive research and development of AI-powered defense mechanisms.

According to its statement, Microsoft has not observed any major AI-powered attacks, but it has seen attackers using AI to gather information on security flaws, defenses, and potential targets.

Microsoft’s director of threat intelligence strategy, Sherrod DeGrippo, acknowledged that the company would not necessarily see all of that research, and that blocking some accounts would not prevent attackers from creating new ones.

“Microsoft does not want to facilitate threat actors perpetrating campaigns against anyone,” she said. “That’s our role, to hit them as they evolve.”

State-sponsored hacking groups named in the report include a top Russian team associated with the GRU military intelligence agency, which used AI to research satellite and radar technologies that might be relevant to conventional warfare in Ukraine.

AVIXA. (February 15, 2024). Retrieved from https://xchange.avixa.org/posts/microsoft-and-openai-us-adversaries-employ-ai-to-amplify-cyberattacks?channel_id=command-and-control