Technology

DeepSeek Ban Raises Questions on AI Security

Published February 5, 2025

The Australian government has decided to ban DeepSeek from all government devices, citing concerns about data security. However, the broader implications of this ban are sparking a debate among experts regarding whether the Chinese AI tool is riskier than US-made large language models like ChatGPT or Gemini.

Public servants have been instructed to remove DeepSeek from their work devices immediately; however, politicians and certain government-owned organizations, such as Australia Post and NBN Co, are exempt from this mandate.

Home Affairs Minister Tony Burke emphasized that the ban reflects heightened concern over data-collection risks tied to China. Burke noted, however, that the approach is “country agnostic,” focusing solely on the risks posed to the Australian government.

Despite these government concerns, many AI experts are less worried about the security threats posed by DeepSeek. In a survey conducted by Scimex, most of the 20 AI experts polled said they were more interested in the potential of lower-resource models like DeepSeek to democratize access to AI than in its security risks.

Much of DeepSeek’s appeal lies in its lower hardware costs and the public availability of its source code. This contrasts with the enormous models produced by companies such as OpenAI, which are proprietary and considered “unsafe” to share.

Jason Pallant, a senior lecturer in Marketing Technology at RMIT, commented on the implications of DeepSeek’s open-source components. He stated that these aspects present opportunities for companies to develop their own AI systems in ways that were previously unavailable. "DeepSeek is significantly cheaper to interact with, making it an attractive alternative to existing commercial models," he explained.

Growing concern has been raised over how sensitive user data may be handled by the Chinese government. However, the End User License Agreements (EULAs) of DeepSeek and OpenAI appear to reflect similar data-collection practices.

Government ministers have been encouraging the Australian Public Service (APS) to use AI tools based on OpenAI models. Around 25% of respondents to The Mandarin's Frank and Fearless survey indicated they were already utilizing tools like ChatGPT or Copilot.

Saeed Rehman, a cybersecurity lecturer at Flinders University, stated that, from a privacy perspective, DeepSeek poses risks similar to those of other AI providers. He pointed out that user inputs may be collected and used for training, and that DeepSeek gathers extensive user data, including device identifiers and location information.

Concerns extend beyond usage and security to the viewpoints these models may promote. DeepSeek does not accurately address politically sensitive topics in China, such as Taiwan or the Tiananmen Square incident. Conversely, prominent US models have faced criticism for adopting a Western-centric perspective and, at times, producing inaccurate information.

Samantha Newell, a psychology lecturer at the University of Adelaide, said that no AI model is truly neutral. The training data used in generative models significantly shapes the narratives they produce, often reflecting biases present in society.

In light of these events, New South Wales has also opted to prohibit the use of DeepSeek within the public service this week. Additionally, there are ongoing investigations regarding DeepSeek's data management practices in the United States, South Korea, and various European countries.

DeepSeek, AI, Ban, Australia, Data