Protests Against AI Development Demand Global Safety Measures
This weekend, protesters are set to take to the streets in Australia to voice their concerns over the rapid development of artificial intelligence (AI). They aim to convince the government to pause advancements in what many are calling potentially 'the most dangerous technology ever created.'
A global protest initiative known as PauseAI is organizing rallies in cities such as Melbourne, ahead of the upcoming Artificial Intelligence Action Summit in Paris. Protesters argue that the summit does not adequately address the pressing issue of AI safety.
Countries like China and the United States are currently in a fierce competition to advance their AI technologies. Millions of Australians are using the Chinese AI application DeepSeek, while the US is investing approximately $500 billion through a collaborative project called Stargate to enhance its own AI capabilities.
DeepSeek, an open-source AI platform from China, has been developed at a fraction of the cost and time required by major US technology firms. Credit: Bloomberg
Joep Meindertsma, the founder of PauseAI, expressed concerns that companies like OpenAI and DeepSeek are not implementing enough safety precautions for their AI systems before they are released. Protesters are calling for an international treaty to pause the development of AI systems that exceed the capabilities of GPT-4, demanding the establishment of frameworks to ensure these systems are developed safely and inclusively.
Meindertsma stated, "It’s not a secret anymore that AI could be the most dangerous technology ever created. We need our leaders to act on the small chance that things can go very wrong, very soon." He emphasized how invisible and abstract threats often fail to alarm the public.
The urgency of the protest stems from the belief that AI technology is evolving faster than society can manage, especially since experts still struggle to understand the workings of AI systems like ChatGPT. In a related move, Google recently updated its AI ethics guidelines and no longer prohibits military or surveillance applications.
Warnings from leading AI researchers such as Geoffrey Hinton and Yoshua Bengio about the potential for AI to lead to human extinction have lent the push for safety even greater significance.
Federal Minister for Industry and Science Ed Husic announced a National AI Capability Plan in December, which is expected to be finalized by the end of 2025. However, Meindertsma believes international cooperation is crucial, arguing that relying solely on individual nations is insufficient to ensure safety. He stated that meaningful international regulations can only be achieved if the summit addresses safety concerns seriously.
Next week’s summit in Paris follows previous conferences held in Bletchley, England, and Seoul, South Korea, both of which aimed to create frameworks for AI safety. While Minister Husic will not attend, senior officials from the Australian government will represent the country at the summit.
A protest is set to take place at the State Library in Melbourne at 2 PM on Saturday. Early reports indicate a dozen supporters have registered, with hopes for greater attendance on the day of the event.
Supporter Michael Huang explained the goal of the protest: “We want the Australian government to engage more in these international negotiations, and we hope to bring this topic into mainstream policies rather than leaving it to tech companies to shape the future.”
He also warned, "AI systems could be used to develop new pharmaceuticals or even biological weapons. It’s essential we establish global regulations and explore whether we can make technology safe and, if not, impose a global moratorium."
Co-organizer Mark Brown highlighted the significance of having a plan in place: “If you develop a system that becomes smarter than humanity and you lack a strategy, it’s a serious issue."