OpenAI Pioneers AI Risk Mitigation with New ‘Preparedness’ Division

Oct 28, 2023

OpenAI, the organization behind ChatGPT, is stepping up its commitment to artificial intelligence (AI) safety. The firm recently unveiled a new initiative dedicated to proactively assessing and mitigating a broad range of potential risks associated with AI.

Dubbed Preparedness, the new division was announced on October 25 and marks a significant advance in OpenAI’s ongoing efforts to ensure the safe deployment of AI. The team will concentrate on identifying and mitigating potential AI threats spanning chemical, biological, radiological, and nuclear domains, as well as risks related to individualized persuasion, cybersecurity, and autonomous replication and adaptation.

The Preparedness team, under the leadership of Aleksander Madry, will grapple with critical questions such as the potential misuse of frontier AI systems and the threat posed by malicious actors deploying stolen AI model weights.

OpenAI acknowledges the double-edged nature of AI technology. “Frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” the firm writes. However, it also concedes that these models carry “increasingly severe risks.”

To navigate this complex landscape, OpenAI is committed to developing a robust approach to catastrophic risk preparedness. This commitment extends to its recruitment strategy, with OpenAI actively seeking individuals with diverse technical skills to join the Preparedness team. The company has also launched an AI Preparedness Challenge aimed at preventing catastrophic misuse, offering $25,000 in API credits to the top 10 submissions.

The announcement of the Preparedness team follows OpenAI’s July 2023 statement about its plans to establish a team dedicated to tackling potential AI threats.

As AI continues to evolve, concerns around its potential risks have grown. Some fear that AI could surpass human intelligence, leading to unforeseen consequences. Despite these concerns, companies like OpenAI continue to push the field forward, sparking further debate.

In May 2023, the nonprofit organization Center for AI Safety issued an open letter on AI risk, urging the global community to prioritize mitigating the risks of extinction from AI alongside other societal-scale risks such as pandemics and nuclear war.