OpenAI, the company behind the famed ChatGPT generative Artificial Intelligence (AI) solution, says it has recently blocked multiple malicious campaigns abusing its services.
In a report, the company said it blocked more than 20 operations and deceptive networks around the world in 2024 so far.
These operations varied in nature, size, and targets. Sometimes the crooks would use the company's services to debug malware, and sometimes to generate content: website articles, fake biographies for social media accounts, fake profile pictures, and the like.
Disrupting the disruptors
While this sounds sinister and dangerous, OpenAI says the threat actors failed to gain any significant traction with these campaigns:
“Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” it said.
But 2024 is an election year – not just in the States, but elsewhere around the world – and OpenAI has seen ChatGPT abused by threat actors trying to influence pre-election campaigns. It mentioned multiple groups, including one called “Zero Zeno.” This Israel-based commercial company “briefly” generated social media comments about elections in India – a campaign that was disrupted “less than 24 hours after it began.”
The company added that in June 2024, just before the elections for the European Parliament, it disrupted an operation dubbed “A2Z”, which focused on Azerbaijan and its neighbors. Other notable mentions included generated comments about the European Parliament elections in France, and about politics in Italy, Poland, Germany, and the US.
Luckily, none of these campaigns gained any significant traction, and once OpenAI banned them, they stopped entirely:
“The majority of social media posts that we identified as being generated from our models received few or no likes, shares, or comments, although we identified some occasions when real people replied to its posts,” OpenAI concluded. “After we blocked its access to our models, this operation’s social media accounts that we had identified stopped posting throughout the election periods in the EU, UK and France.”