Russian cyber criminals have been spotted trying to bypass ChatGPT’s restrictions and use the advanced AI-powered chatbot for their nefarious purposes.
Check Point Research (CPR) said it discovered several discussions on underground forums where hackers shared various methods, including using stolen payment cards to pay for upgraded OpenAI accounts, bypassing geofencing restrictions, and using a “semi-legal Russian online SMS service” to register for ChatGPT.
ChatGPT is a new artificial intelligence (AI) chatbot that has been making big headlines for its versatility and ease of use. Cybersecurity researchers have already seen hackers use the tool to generate credible phishing emails as well as code for malicious, macro-laden Office files.
Paper roadblocks
However, it is not that easy to abuse the tool, as OpenAI imposes a number of limitations. Russian hackers face even more obstacles because of the invasion of Ukraine.
For Sergey Shykevich, Threat Intelligence Group Manager at Check Point Software Technologies, the roadblocks aren’t good enough:
“It is not very difficult to circumvent OpenAI’s restrictive measures for accessing ChatGPT for specific countries. At the moment we see how Russian hackers are already discussing and checking how to bypass geofencing to use ChatGPT for their malicious purposes.
“We believe that these hackers are most likely trying to implement and test ChatGPT in their daily criminal operations. Cybercriminals are becoming more interested in ChatGPT because the AI technology behind it can make a hacker more cost-effective,” Shykevich said.
But hackers aren’t just trying to use ChatGPT; they’re also trying to capitalize on the tool’s rising popularity to spread all kinds of malware and steal money. For example, Apple’s mobile app repository, the App Store, hosted an app pretending to be the chatbot, with a monthly subscription that cost around $10. Other apps (some of which were also found on Google Play) charged up to $15 for the “service”.