OpenAI Unveils AI Safety Framework

Published December 18, 2023

In response to growing concerns over artificial intelligence, OpenAI has announced a new safety framework aimed at mitigating risks from its most advanced AI models. The plan gives the company's board the power to overrule safety-related decisions made by executives.

Commitment to Safe Deployment

Backed by Microsoft, OpenAI says it will deploy its latest technology only after it has been deemed safe in sensitive areas such as cybersecurity and nuclear threats. To bolster this commitment, the company is establishing a dedicated advisory group that will review safety assessments and report its findings to both executives and the board.

Addressing AI's Dual Nature

The dual nature of AI, a technology that can captivate users with creative output yet also spread false information and manipulate human behavior, has been in the spotlight since ChatGPT's debut a year ago. These concerns resonate with AI experts and the general public alike.

Calls to Pause AI Development

In April, leading figures in the AI industry and prominent experts publicly called for a six-month pause in developing AI systems more powerful than OpenAI's GPT-4, citing the potential risks such advancements could pose to society. Underscoring that wariness, a Reuters/Ipsos poll conducted in May found that more than two-thirds of Americans are concerned about the possible negative effects of AI, and 61% believe the technology could one day threaten civilization.