OpenAI, the company behind ChatGPT, appears to be on the brink of a monumental achievement: a breakthrough toward 'superintelligence'. This potential breakthrough, however, carries profound implications for the future of humanity.
The abrupt dismissal and subsequent reappointment of OpenAI’s co-founder and CEO Sam Altman has been a topic of much discussion in the tech world. Fresh insights into the catalyst for this move continue to emerge, with a report by The Information attributing the internal turmoil to a significant advancement in Generative AI. This development could potentially pave the way for the emergence of ‘superintelligence’ within this decade or even sooner.
‘Superintelligence’, as the term suggests, refers to an intelligence that surpasses human capabilities. Naturally, the creation of an AI system possessing such intelligence without adequate safeguards raises serious concerns.
OpenAI Chief Scientist Ilya Sutskever, who also serves on the board, is said to have led this groundbreaking initiative, which enables AI to tackle unprecedented problems using cleaner, computer-generated data. Rather than being trained on many iterations of the same problem, the AI learns from information that isn't directly related to the problem at hand. Solving problems in this manner, typically mathematical or scientific ones, demands reasoning, a capability associated with humans rather than AI.
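The report gives no technical detail about the method, so the following is only a toy sketch of the general idea of 'cleaner, computer-generated data': training examples that are produced programmatically, so every answer is correct by construction rather than scraped from the messy web. All names here are hypothetical illustrations, not anything OpenAI has described.

```python
import random

def make_problem():
    """Generate one synthetic arithmetic problem with a known-correct answer."""
    a, b = random.randint(1, 99), random.randint(1, 99)
    op = random.choice(["+", "-", "*"])
    question = f"{a} {op} {b}"
    answer = eval(question)  # safe here: inputs are generated, never user-supplied
    return question, answer

# Build a small synthetic training set of question/answer pairs
dataset = [make_problem() for _ in range(5)]
for q, a in dataset:
    print(f"{q} = {a}")
```

Because the generator knows the right answer for every example it emits, a dataset like this is 'clean' in a way that human-written web text rarely is.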
ChatGPT, OpenAI's flagship product powered by the GPT large language model (LLM), can seem intelligent enough to give the impression that it reasons its way to its responses. Closer interaction, however, reveals that it parrots patterns learned from the enormous quantities of data it has processed, making mostly accurate predictions about how to construct coherent sentences relevant to your query. Reasoning doesn't come into play.
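The distinction between prediction and reasoning can be made concrete with a deliberately crude sketch: a bigram model that counts which word tends to follow which in a corpus, then always 'predicts' the most frequent successor. Real LLMs are vastly more sophisticated, but the underlying move, pick a likely next token from observed statistics, is the same, and no step of it involves reasoning.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a crude stand-in for what an LLM learns at scale
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word: pure pattern matching."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat': it follows 'the' more often than any other word
```

The model produces plausible continuations without any notion of what a cat is, which is the article's point about fluency not implying reasoning.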
The Information suggests that this breakthrough has sent shockwaves through OpenAI. Altman may have hinted at it at a recent conference, stating: “on a personal note, just in the last couple of weeks, I have gotten to be in the room, when we sort of like push the sort of the veil of ignorance back and the frontier of discovery forward.”
Handling the Threat
While ChatGPT currently shows no signs of superintelligence, it’s likely that OpenAI is making strides to incorporate some of this power into its premium products, such as GPT-4 Turbo and future ‘intelligent agents’.
Drawing a connection between the recent board actions, initially supported by Sutskever, and the superintelligence breakthrough might seem far-fetched. The reported breakthrough occurred months ago and led Sutskever and another OpenAI scientist, Jan Leike, to establish a new OpenAI research group named ‘Superalignment’, tasked with devising safeguards for superintelligence.
In an interesting twist, the very company pioneering the development of superintelligence is simultaneously constructing mechanisms to shield us from it, reminiscent of Doctor Frankenstein arming villagers with flamethrowers.
What remains ambiguous from the report is whether internal apprehensions surrounding the swift progression towards superintelligence directly led to Altman’s dismissal. Perhaps, in the grand scheme of things, it’s irrelevant.
As it stands, Altman is en route to resume his position at OpenAI, the board has been restructured, and the quest to develop – and guard against – superintelligence persists.
For those finding these developments baffling, engaging with ChatGPT for an explanation could prove enlightening.