It now seems entirely possible that ChatGPT parent company OpenAI has made a breakthrough on the path to ‘superintelligence’, and is now grappling with the implications for humanity.
In the aftermath of OpenAI’s firing and rehiring of its co-founder and CEO Sam Altman, revelations about what sparked the move keep coming. A new report in The Information pins at least some of the internal disruption on a significant generative AI breakthrough that could lead to the development of something called ‘superintelligence’ within this decade.
Superintelligence is, as you might have guessed, intelligence that outstrips humanity, and the development of AI that’s capable of such intelligence without proper safeguards is, naturally, a major red flag.
According to The Information, the breakthrough was spearheaded by OpenAI Chief Scientist (and full-of-regrets board member) Ilya Sutskever.
It allows AI to use cleaner, computer-generated data to solve problems it has never seen before. This means the AI is trained not on many different versions of the same problem, but on information not directly related to the problem. Solving problems in this way – usually math or science problems – requires reasoning: something we do, and AIs, so far, do not.
OpenAI’s primary consumer-facing product, ChatGPT (powered by the GPT large language model [LLM]), may seem so smart that it must be using reason to craft its responses. Spend enough time with ChatGPT, however, and you soon realize it’s just regurgitating what it’s learned from the vast swaths of data it’s been fed, making mostly accurate guesses about how to craft sentences that make sense and apply to your query. There is no reasoning involved.
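To make that distinction concrete, here’s a minimal, invented sketch of the pattern-matching at the heart of an LLM: given the words so far, pick a statistically likely next word. The probability table and function names below are purely hypothetical toys; real models learn these distributions from billions of examples, but the principle is the same.

```python
import random

# Toy "learned" distribution: for a given two-word context, the
# probabilities of each candidate next word. Invented numbers,
# purely for illustration.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "pondered": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def generate(context, steps=4):
    """Repeatedly sample a likely next word given the last two words.

    No reasoning happens here: the 'model' just replays statistical
    patterns it absorbed during training.
    """
    words = list(context)
    for _ in range(steps):
        probs = NEXT_WORD_PROBS.get(tuple(words[-2:]))
        if probs is None:
            break  # context never seen in training; the toy model is stuck
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["the", "cat"]))  # e.g. "the cat sat on the mat"
```

However fluent ChatGPT’s output looks, it is produced by this kind of next-most-likely-word guessing scaled up enormously, which is why a system that could genuinely reason about unfamiliar problems would be such a leap.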
The Information claims, though, that this breakthrough sent shockwaves throughout OpenAI. Altman may have alluded to it in a recent conference appearance, saying: “on a personal note, just in the last couple of weeks, I have gotten to be in the room, when we sort of like push the sort of the veil of ignorance back and the frontier of discovery forward.”
Managing the threat
While there’s no sign of superintelligence in ChatGPT right now, OpenAI is surely working to integrate some of this power into at least some of its premium products, like GPT-4 Turbo and its GPTs chatbot agents (and future ‘intelligent agents’).
Connecting superintelligence to the board’s recent actions, which Sutskever initially supported, might be a stretch. The breakthrough reportedly came months ago, and prompted Sutskever and another OpenAI scientist, Jan Leike, to form a new OpenAI research group called Superalignment, with the goal of developing superintelligence safeguards.
Yes, you heard that right. The company working on developing superintelligence is simultaneously building tools to protect us from superintelligence. Imagine Doctor Frankenstein equipping the villagers with flamethrowers, and you get the idea.
What’s not clear from the report is whether internal concerns about the rapid development of superintelligence actually triggered the Altman firing. Perhaps it doesn’t matter.
As of this writing, Altman is on his way back to OpenAI, the board has been refashioned, and the work to build superintelligence – and to protect us from it – will continue.
If all of this is confusing, I suggest you ask ChatGPT to explain it to you.