In a new briefing issued this week, software giant Microsoft claims that US rivals such as Iran, Russia and North Korea are preparing to step up their cyberwar efforts using modern generative AI. The problem is aggravated, it adds, by a chronic shortage of skilled cybersecurity personnel. The briefing quotes a 2023 ISC2 Cybersecurity Workforce Study which estimates that roughly 4 million additional cybersecurity professionals will be needed to cope with the coming onslaught. Microsoft’s own 2023 research highlighted a huge rise in password attacks over two years, from 579 per second to more than 4,000 per second.
The company’s response has been the roll-out of Copilot for Security, an AI tool designed to track, identify and block these threats faster and more effectively than human analysts can alone. In one recent test, generative AI helped security analysts, regardless of expertise level, to work 44% more accurately and 26% faster across all types of threats. Eighty-six percent also said that AI made them more productive and reduced the effort needed to complete their tasks.
Unfortunately, as the company acknowledges, the use of AI is not restricted to the good guys. The technology’s explosive rise is fuelling an arms race, as threat actors look to leverage the new tools to do as much damage as they can, hence the release of this threat briefing to warn of the coming escalation. The briefing confirms that OpenAI and Microsoft are partnering to detect and tackle these bad actors and their tactics as they emerge in force.
The impact generative AI has had on cyberattacks is already widespread. Darktrace researchers found a 135% increase in email-based, so-called ‘novel cyber attacks’ between January and February 2023, coinciding with the widespread adoption of ChatGPT. They also discovered a rise in phishing attacks that were linguistically complex, using more words, longer sentences and more punctuation. This contributed to a 52% increase in email account takeover attempts, with attackers convincingly posing as the IT team in victims’ organizations.
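The linguistic markers Darktrace describes (word count, sentence length, punctuation use) are simple enough to measure directly. The sketch below is purely illustrative and not taken from the briefing or from any vendor's tooling; the function name and the sample messages are our own inventions, meant only to show how such surface features might be extracted for comparison.

```python
import re
import string

def linguistic_features(text: str) -> dict:
    """Measure the simple surface markers of 'LLM-style' phishing text:
    word count, average sentence length, and punctuation density."""
    # Split on sentence-ending punctuation; drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    punct = sum(1 for ch in text if ch in string.punctuation)
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "punctuation_density": punct / max(len(words), 1),
    }

# Illustrative comparison: a terse legacy-style phish versus a longer,
# more fluent message of the kind generative AI makes easy to produce.
terse = "Click here now. Verify account."
verbose = ("Dear colleague, as part of our scheduled infrastructure review, "
           "we kindly ask that you confirm your credentials via the portal; "
           "failure to do so may, regrettably, interrupt your access.")
```

In practice a mail filter would feed features like these, alongside many others, into a trained classifier rather than using them in isolation.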
The report outlines three main areas where threat actors are likely to apply increasing amounts of AI in the near future: improved reconnaissance of targets and weaknesses, enhanced malware development using sophisticated AI coding assistants, and help with learning and planning. The huge compute resources required mean that the early adopters of the technology will almost certainly be nation states.
Several such cyberthreat entities are specifically mentioned. Strontium (also tracked as APT28) is a highly active cyber-espionage group which has been operating out of Russia under a number of labels for the past twenty years, and it is expected to dramatically increase its use of advanced AI tools as they become available.
North Korea also has a huge cyber-espionage presence. Some reports say that over 7,000 personnel have been running continual threat programs against the West for decades, with activity up 300% since 2017. One such group, known as Velvet Chollima or Emerald Sleet, primarily targets academic institutions and NGOs. Here, AI is increasingly being used to improve phishing campaigns and probe for vulnerabilities.
The briefing highlights two other major players in the global cyberwar arena: Iran and China. These two countries have also been increasing their use of large language models (LLMs), primarily to research opportunities and gain insight into possible areas of future attack. Beyond these geopolitical attacks, the Microsoft briefing outlines increased use of AI in more conventional criminal activities, such as ransomware, fraud (especially via voice cloning), email phishing and general identity manipulation.
As the war heats up, we can expect to see Microsoft, and partners like OpenAI, develop an increasingly sophisticated set of tools to provide threat detection, behavioral analytics and other methods of detecting attacks quickly and decisively.
The report concludes: “Microsoft anticipates that AI will evolve social engineering tactics, creating more sophisticated attacks including deepfakes and voice cloning…prevention is key to combating all cyberthreats, whether traditional or AI-enabled.”