Since many people in the West have suddenly panicked about the possibility that the DeepSeek app is sending user information somewhere, I would like to offer a brief explanation. In 2023, China enacted the Provisional Measures on the Management of Generative Artificial Intelligence Services (生成式人工智能服务管理暂行办法), which serves as the regulatory guidance for the generative AI services industry.
[See: https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm]

This law contains various commendable provisions, such as: respecting the intellectual property rights of third parties and refraining from infringing them; handling personal information in accordance with individuals’ requirements; prohibiting discrimination based on ethnicity or ideology; improving transparency (including for training data); increasing the availability of high-quality training data; and promoting collaboration and sharing to build a sound AI ecosystem. In my view, it is very characteristic of China that much of the content focuses on scientific rationality.
However, there are naturally some problematic aspects as well.
Article 4 states (Chinese text in brackets, followed by an English translation):
「第四条 提供和使用生成式人工智能服务,应当遵守法律、行政法规,尊重社会公德和伦理道德,遵守以下规定:
(一)坚持社会主义核心价值观,不得生成煽动颠覆国家政权、推翻社会主义制度,危害国家安全和利益、损害国家形象,煽动分裂国家、破坏国家统一和社会稳定,宣扬恐怖主义、极端主义,宣扬民族仇恨、民族歧视,暴力、淫秽色情,以及虚假有害信息等法律、行政法规禁止的内容;」
English rendering:
“Article 4: When providing or using generative AI services, one must comply with laws and administrative regulations, respect social morality and ethics, and abide by the following provisions:
(1) Adhere to the core socialist values. It is prohibited to generate content that incites subversion of state power or the overthrow of the socialist system; endangers national security and interests or damages the country’s image; incites division of the country or undermines national unity and social stability; promotes terrorism or extremism; propagates ethnic hatred or ethnic discrimination; depicts violence, obscenity, or pornography; or contains false and harmful information or other content prohibited by laws and administrative regulations.”
The term “core socialist values” refers to the twelve values that the state, society, and individuals are expected to uphold. Within China’s political framework, any content deemed inconsistent with those values is eliminated as a matter of principle, and this is the legal rationale for building such biases into AI models.
Article 11 and Article 14 state:
第十一条 提供者对使用者的输入信息和使用记录应当依法履行保护义务,不得收集非必要个人信息,不得非法留存能够识别使用者身份的输入信息和使用记录,不得非法向他人提供使用者的输入信息和使用记录。
第十四条 提供者发现违法内容的,应当及时采取停止生成、停止传输、消除等处置措施,采取模型优化训练等措施进行整改,并向有关主管部门报告。
English rendering:
“Article 11: Providers shall, in accordance with the law, fulfill their obligations to protect users’ input information and usage records. They shall not collect unnecessary personal information, shall not illegally retain input information and usage records that could identify users, and shall not unlawfully provide users’ input information and usage records to others.
Article 14: If providers discover illegal content, they shall promptly take measures such as halting its generation, stopping its transmission, or deleting it. They shall also adopt measures such as model optimization training to remedy the issue, and report to the relevant supervising authorities.”
Under this legal framework, output must conform to the “core socialist values” required by Article 4; Article 11 presupposes that providers retain users’ input information and usage records (it regulates how those records must be protected, not whether they exist); and Article 14 obliges providers to monitor for illegal content, halt it, and report it to the relevant authorities. Taken together, these provisions effectively legalize the monitoring of user inputs and outputs in generative AI applications and web services. Put another way, if the DeepSeek app is sending various information back to China, it is simply complying with Chinese domestic law. The same applies to every other AI service operating within China.
Article 6 states:
第六条 鼓励生成式人工智能算法、框架、芯片及配套软件平台等基础技术的自主创新,平等互利开展国际交流与合作,参与生成式人工智能相关国际规则制定。
English rendering:
“Article 6: Encourage independent innovation in fundamental technologies such as generative AI algorithms, frameworks, chips, and supporting software platforms; engage in international exchange and cooperation on the basis of equality and mutual benefit; and participate in the formulation of international rules related to generative AI.”
There is no doubt that one must exercise caution when dealing with Chinese AI services. At the same time, it is notable that these censorship requirements apply only to generative AI services and applications. For the development of AI models themselves, Article 6 explicitly stipulates that such work should be promoted through international exchange and cooperation, including participation in shaping international rules. This posture is what has led to the proliferation of Chinese AI models under open-source licenses; distributing models under licenses such as MIT or Apache 2.0 aligns with China’s national strategy.
As you can see, China’s AI regulations have two seemingly contradictory aspects: on the one hand, they establish a strong censorship system, and on the other hand, they emphasize international cooperation for technological development. This reflects China’s strategic approach to striking a balance between technological development and security.