Technology

AI Safety Recommendations from Fei-Fei Li's Policy Group

Published March 19, 2025

A recent report from a California policy group co-led by AI expert Fei-Fei Li argues that regulatory frameworks should anticipate future AI risks, not only harms that have already been observed. The 41-page interim report was released on Tuesday by the Joint California Policy Working Group on Frontier AI Models, which was established after Governor Gavin Newsom vetoed California's AI safety bill, SB 1047.

Although Newsom saw flaws in SB 1047, he acknowledged that lawmakers needed a more thorough evaluation of AI risks to inform future legislation. In the report, Fei-Fei Li and her co-authors, including UC Berkeley College of Computing Dean Jennifer Chayes and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar, call for laws that would increase transparency into the activities of leading AI laboratories such as OpenAI.

The authors suggest that the novel risks posed by AI technologies may require laws compelling developers to publicly disclose their safety testing, data-acquisition practices, and security protocols. They also advocate stronger standards for third-party evaluations of these practices, alongside expanded protections for whistleblowers within the AI industry.

The report discusses the uncertainties around AI's potential to facilitate cyberattacks, create biological weapons, or lead to other extreme threats. It stresses that AI regulations should not just deal with current dangers but also anticipate future challenges that might emerge without proper precautions.

For instance, the report points out that we don't need to see a nuclear weapon explode to understand the potential destruction it could cause. The authors highlight the significant risks of inaction regarding frontier AI, stating that, "If those who speculate about the most extreme risks are right—and we are uncertain if they will be—then the stakes and costs for inaction at this current moment are extremely high."

One of the report's main recommendations is a dual approach of fostering trust while ensuring verification: giving AI developers and their employees avenues to publicly report safety concerns, while requiring them to have their testing claims independently verified.

While the report does not endorse specific legislative measures, it has been generally well received by experts on both sides of the AI regulation debate. Dean Ball, a George Mason University research fellow who had been critical of SB 1047, called the report a promising step for California's AI safety regulation. California State Senator Scott Wiener, who introduced SB 1047, said in a press release that the report builds on urgent conversations around AI governance that began in the legislature in 2024.

The interim report echoes elements of both SB 1047 and Senator Wiener's follow-up bill, SB 53, which would require AI developers to share the results of their safety tests. It could be seen as an important development for AI safety advocates, whose priorities have lost ground over the past year.

AI, safety, regulation