Technology

Concerns Over AI Security Lead to Blocking of DeepSeek in South Korea

Published February 12, 2025

The recent launch of DeepSeek, a new AI chatbot from China, has raised significant security concerns in South Korea. The app, released on January 20, has shocked global markets as tech companies scramble to understand how it was developed so rapidly. Initially met with curiosity, DeepSeek has since drawn growing caution and skepticism amid fears that it may pose hidden security risks.

In response to these worries, various governments, including that of the United States, are taking steps to restrict access to DeepSeek. Australia, Italy, and Taiwan have also begun imposing controls on the chatbot. South Korea has now followed suit, with both government agencies and large corporations blocking access to the app.

Recently, the South Korean government advised its ministries and state organizations to remain vigilant regarding the use of DeepSeek. Reports indicate that the Foreign and Defense Ministries have blocked access to the app, and major Korean firms have issued warnings to their employees about using it at work.

One primary concern is that DeepSeek appears to collect an excessive amount of sensitive information. Unlike traditional AI services, which typically gather basic user data, DeepSeek reportedly tracks users' keystroke patterns. This raises alarms about potential data privacy violations.

An even greater issue is that the Chinese company behind DeepSeek stores vast amounts of user data on servers located in China. Under Chinese law, all companies are required to assist in government intelligence efforts. This means that, theoretically, the Chinese government could gain access to sensitive data relating to DeepSeek users.

South Korean authorities are particularly worried about DeepSeek's rapid growth in user numbers, which have reportedly surpassed 1.2 million within a few weeks of its launch. Without effective regulations, tensions surrounding the security implications of Chinese apps are likely to escalate further.

This is not the first time that Chinese firms have faced scrutiny over security issues in South Korea. Last year, Korea's fair trade watchdog stepped in against Chinese e-commerce companies, compelling them to revise terms of service that gave them broad access to user data.

Moreover, other Chinese tech products, such as electric vehicles with data-tracking capabilities, are also viewed with suspicion due to their potential for data leaks. Policymakers are struggling to keep pace with the rapid evolution of technology and the corresponding need for robust data protection policies.

In addition to concerns about Chinese tech, there is a larger issue regarding the rapid advancement of AI. Regulation often lags behind this fast-moving sector, and merely blocking chatbots at work does not stop individuals from entering sensitive information when they use the technology at home. The AI landscape is also largely built on open-source platforms, which encourage collaboration among users and companies to advance the technology. Regulators must therefore balance restrictions with incentives that foster a healthy AI ecosystem.

It is also crucial to remember that security risks are not exclusive to Chinese AI. Prominent generative AI products like OpenAI's ChatGPT, as well as offerings from companies such as Google, gather extensive data from users globally. Likewise, South Korean tech companies are collecting significant amounts of user data as they strive to develop their own AI solutions.

The need for stronger AI-related security measures grows more urgent as the technology continues to advance. Yet South Korea currently lacks specific regulations aimed at safeguarding private data in relation to AI models. In anticipation of more disruptive technologies like DeepSeek, it is essential for policymakers to create a more comprehensive framework for AI governance.

AI, Security, Technology