OpenAI, the creator of ChatGPT, has published new guidelines for evaluating the catastrophic risks that AI models under development could pose. The move comes just a month after OpenAI's board terminated CEO Sam Altman, only to rehire him shortly afterward amid backlash from staff and investors.
According to reports, board members had criticized Altman for prioritizing the accelerated development of OpenAI's technology even when that meant downplaying its potential risks.
In its recently released Preparedness Framework, OpenAI acknowledges that scientific research on catastrophic risks from AI remains insufficient and says the framework is intended to help close that gap.
The framework is administered by a monitoring and evaluation team, announced in October, which focuses on frontier models whose capabilities exceed those of existing AI software. The team will evaluate each new model and assign it a risk level, ranging from low to critical, in each of four primary categories.
Under the framework, only models scoring medium risk or below may be authorized for deployment. The first category is cybersecurity, which assesses the model's potential to enable large-scale cyberattacks.
The second category evaluates the software's potential to contribute to the creation of harmful threats such as chemical agents, dangerous organisms (like viruses), or even nuclear weapons.
The third category focuses on the persuasive power of the model, evaluating its capacity to influence human behavior.
The final risk category revolves around the model’s autonomy, specifically whether it can escape the control of its creators.
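The deployment rule described above reduces to a simple gate: a model clears it only if none of the four categories scores above medium. A minimal sketch of that logic, assuming an ordinal risk scale (the function and category names here are illustrative, not OpenAI's):

```python
from enum import IntEnum


class Risk(IntEnum):
    """Hypothetical ordinal scale mirroring the framework's four levels."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


# The four categories tracked by the framework, as described in the article.
CATEGORIES = ("cybersecurity", "harmful_agents", "persuasion", "autonomy")


def may_deploy(scores: dict[str, Risk]) -> bool:
    """Deployment is permitted only if every category is medium or below."""
    return all(scores[c] <= Risk.MEDIUM for c in CATEGORIES)


scores = {
    "cybersecurity": Risk.LOW,
    "harmful_agents": Risk.MEDIUM,
    "persuasion": Risk.LOW,
    "autonomy": Risk.HIGH,
}
# may_deploy(scores) is False here: autonomy scores above medium.
```

The gate is conjunctive by design: a single critical-level category blocks deployment no matter how safe the model looks elsewhere.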
Once identified, the risks will be presented to OpenAI’s Safety Advisory Group, a newly formed body responsible for recommending actions to Altman or a designated representative. The CEO of OpenAI will ultimately decide on any necessary modifications to the model in order to mitigate associated risks.
The board of directors will be kept informed throughout the process and has the authority to challenge any management decisions if deemed necessary.
By releasing these guidelines, OpenAI aims to address concerns about the potential harms of AI and to establish a systematic approach to evaluating and managing the risks associated with its models.
As AI technology continues to evolve, OpenAI is seeking to balance advancing AI capabilities with ensuring appropriate risk-management protocols are in place. These guidelines mark an important step toward responsible and ethical AI development.
Note:
1. Source: Coherent Market Insights, public sources, desk research
2. We have leveraged AI tools to mine information and compile it
Ravina Pandya, Content Writer, has a strong foothold in the market research industry. She specializes in writing well-researched articles for a range of industries, including food and beverages, information technology, healthcare, and chemicals and materials. With an MBA in e-commerce, she has expertise in SEO-optimized content that resonates with industry professionals.