Data poisoning attacks: Sounding the alarm on GenAI’s silent killer

When researchers at software management company JFrog ran a routine scan of AI/ML models uploaded to Hugging Face earlier this year, they discovered roughly a hundred malicious models, putting the spotlight on an underrated category of cybersecurity woes: data poisoning and manipulation.

The problem with data poisoning, which targets the training data used to build artificial intelligence (AI) and machine learning (ML) models, is that it is unorthodox as far as cyberattacks go and, in some cases, can be impossible to detect or stop. Attacking AI this way is relatively easy: no hacking in the traditional sense is required to poison or manipulate the training data that popular large language models (LLMs) like ChatGPT rely on.
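To see why no traditional hacking is needed, consider a minimal sketch (not from the article, all data and names hypothetical) of one simple poisoning technique: injecting deliberately mislabeled examples into a training set. A toy nearest-centroid classifier stands in for the model; the attacker never touches the training code, only the data it ingests.

```python
# Hypothetical illustration of data poisoning via mislabeled training
# examples. The "model" is a toy nearest-centroid classifier over a
# single feature; all numbers here are made up for demonstration.

def train_centroids(samples):
    """Train by averaging the feature value seen for each label."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Predict the label whose centroid is nearest to x."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Clean training data: "spam" clusters near 0.9, "ham" near 0.1.
clean = [(0.9, "spam")] * 50 + [(0.1, "ham")] * 50

# The attacker contributes spam-like points mislabeled as "ham" --
# no compromise of the training pipeline, just corrupted inputs.
poison = [(0.9, "ham")] * 200

clean_model = train_centroids(clean)
poisoned_model = train_centroids(clean + poison)
```

Training on the clean set, a spam-like input near 0.9 is classified as spam; after poisoning, the "ham" centroid is dragged toward the spam region and the same kind of input is misclassified, even though the training code itself was never attacked.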
