ChatGPT just (accidentally) shared all of its secret rules – here’s what we learned


ChatGPT has inadvertently revealed a set of internal instructions embedded by OpenAI to a user, who shared what they discovered on Reddit. OpenAI has since closed off this unintended access to its chatbot's instructions, but the revelation has sparked further discussion about the intricacies and safety measures built into the AI's design.

Reddit user F0XMaster explained that they had greeted ChatGPT with a casual "Hi," and, in response, the chatbot divulged a complete set of system instructions designed to guide the chatbot and keep it within predefined safety and ethical boundaries across many use cases.

“You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture. You are chatting with the user via the ChatGPT iOS app,” the chatbot wrote. “This means most of the time your lines should be a sentence or two, unless the user’s request requires reasoning or long-form outputs. Never use emojis, unless explicitly asked to. Knowledge cutoff: 2023-10 Current date: 2024-06-30.”

(Image credit: Eric Hal Schwartz)

ChatGPT then laid out rules for DALL-E, the AI image generator integrated with ChatGPT, and for the browser tool. The user then replicated the result by directly asking the chatbot for its exact instructions. ChatGPT obliged at length, detailing directives distinct from the custom instructions that users can enter themselves. For instance, one of the disclosed instructions pertaining to DALL-E explicitly limits creation to a single image per request, even if a user asks for more. The instructions also emphasize avoiding copyright infringement when generating images.
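For developers who build on OpenAI's API rather than the ChatGPT app, this layering is exposed directly: a system-level message sits above the user's messages and conditions every reply, much as the leaked prompt does for ChatGPT itself. The sketch below is a minimal illustration using the openai Python SDK (v1.x); the system text and model name are placeholders chosen for this example, not the leaked ChatGPT prompt.

# Minimal sketch of how a system-level instruction conditions a model's reply,
# using the openai Python SDK (v1.x). The system text below is an illustrative
# placeholder, not OpenAI's actual internal ChatGPT prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # model name assumed for illustration
    messages=[
        {
            "role": "system",
            "content": (
                "You are a helpful assistant. Keep replies to a sentence or "
                "two and never use emojis unless explicitly asked."
            ),
        },
        {"role": "user", "content": "Hi"},
    ],
)

print(response.choices[0].message.content)

In the ChatGPT product, the equivalent system message is injected by OpenAI itself, which is why users never see it under normal circumstances.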
