Chatbot vs chatbot – researchers train AI chatbots to hack each other, and they can even do it automatically

0
10

Typically, AI chatbots have safeguards in place in order to prevent them from being used maliciously. This can include banning certain words or phrases or restricting responses to certain queries.

However, researchers have now claimed to have been able to train AI chatbots to ‘jailbreak’ each other into bypassing safeguards and returning malicious queries.

LEAVE A REPLY

Please enter your comment!
Please enter your name here