OpenAI's Safety Concerns Questioned by Former Researcher

Published March 7, 2025

A former researcher has publicly criticized OpenAI's approach to safety. Miles Brundage, previously a policy researcher at the company, challenged the safety and alignment document OpenAI released this week.

The document outlines OpenAI's ambition to develop artificial general intelligence (AGI) through incremental steps rather than a single dramatic breakthrough. It advocates an iterative deployment approach, intended to surface safety issues and potential misuse earlier in the development process.

Many experts in the AI industry, however, remain cautious about the reliability of technologies like ChatGPT. Concerns have arisen over chatbots' tendency to provide misleading information, particularly on critical subjects such as health and safety. These issues recall earlier missteps, such as Google's AI search feature suggesting dangerous actions like eating rocks. There are also fears that AI could be exploited for political manipulation, spreading misinformation, or running scams.

OpenAI has also faced scrutiny for a lack of transparency about how it develops its AI systems. Critics note that models can inadvertently retain sensitive personal data, which raises serious ethical concerns for AI development. The newly released document appears intended to address these mounting safety concerns.

Notably, the OpenAI document reinterprets the history of its GPT-2 release, portraying it as an initiative that was initially halted over worries about malicious use. Brundage argues that this narrative glosses over important details and misrepresents the history of AI development at OpenAI. He states, "OpenAI’s release of GPT-2, which I was involved in, was 100% consistent + foreshadowed OpenAI’s current philosophy of iterative deployment. The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution."

Brundage also raises alarm about the stance on risk outlined in the document. He argues that it effectively sets a standard under which precautions are warranted only when there is overwhelming evidence of immediate danger, warning, "That is a very dangerous mentality for advanced AI systems." This stance has drawn increasing scrutiny as OpenAI faces accusations of prioritizing appealing new products over safety protocols.

As OpenAI navigates its path forward, the conversation surrounding its safety philosophy and history will continue to be closely watched by experts and the public alike.