This DigiChina Forum was originally published August 23, 2023, at https://digichina.stanford.edu/work/forum-analyzing-an-expert-proposal-for-chinas-artificial-intelligence-law/. Please use the stanford.edu link when citing and sharing.
This DigiChina Forum includes contributions from Jason Zhou, Mingli Shi, Hunter Dorwart, and Johanna Costigan. New contributions may be forthcoming at the link above. It references a guest translation of the draft at hand, available here.
A few months after the introduction of OpenAI's ChatGPT captured imaginations around the world, China's State Council quietly announced that it would work toward drafting an Artificial Intelligence Law. The government had already acted relatively quickly, drafting, significantly revising, and finally implementing, on August 15, rules on generative AI that build on existing laws. Still, broader questions about AI's role in society remain, and the May announcement signaled that more holistic legislative thinking was on the horizon.
A team from the Chinese Academy of Social Sciences (CASS) this month released a scholars' draft of an AI Law for China. When the Chinese government announces that it will draft a law, the future of the effort is uncertain. In some cases, one or more groups of scholars draft a proposal. These sometimes feed directly into legislative work, and their influence can be seen in an official National People's Congress draft released for public comment. Sometimes—as, for example, with an early 2000s effort toward a Personal Information Protection Law (PIPL)—the process falters. In the case of the PIPL, CASS scholar Zhou Hanhua described in a DigiChina interview how a team was asked by a government office to work on the issue, producing a draft in 2005, before the PIPL drafting effort stalled for years. By the late 2010s, when the effort was picked up again, technology and law had changed so much that the 2005 draft was no longer fit for purpose, and little of its content is visible in the law that went into effect in 2021.
In the case of this scholars' draft of an AI Law, the accompanying explanation notes that it is to serve as a reference for legislative work and is expected to be revised in a 2.0 version. Although the connection between this text and any eventual Chinese AI Law is uncertain, its publication by a team led by Zhou Hui, deputy director of the CASS Cyber and Information Law Research Office and chair of a research project on AI ethics and regulation, makes it an early indication of how some influential policy thinkers are approaching the State Council-announced AI Law effort.
We invited DigiChina community members to share their analysis of the scholars' draft, a translation of which was led by Concordia AI and is published here. Their responses are below. –Graham Webster
JASON ZHOU
Senior Research Analyst, Concordia AI
This draft model law is the first step of a likely lengthy and complicated process for China to develop an Artificial Intelligence Law. Given the early stage (and non-governmental nature) of this draft, it is perhaps most useful to focus on the law’s structure and the key risks it tackles. Overall, the draft model law attempts a flexible and pragmatic approach, while also including provisions to mitigate frontier risks from increasingly advanced AI models.
The key regulatory tool in the draft model law is a “negative list” for AI research, development, and provision. Items on the list would require a government-issued permit before being developed or released. As the drafters note in their explanation, this mechanism seeks to balance safety and development while keeping safety as the bottom line. To that end, the drafters adopt a differentiated approach based on risk, which bears some resemblance to the approach taken in the EU AI Act. Whether the model law strikes the right balance between fostering innovation and ensuring safety will depend on the exact composition of the negative list—a challenging task for regulators.
What AI capabilities, then, are considered higher risk? The draft model law is hazy on this point, declining to specify how the negative list would be developed or how it would relate to the Ministry of Science and Technology's S&T ethics reviews. From the principles listed in Chapter 1 of the model law, it is clear that adherence to socialist core values, privacy, anti-discrimination, transparency, data security, and human control are all major concerns. Meanwhile, other provisions focus on frontier AI capabilities. Article 43 sets out more stringent oversight of “foundation models,” which, though not defined in the draft, are understood to be models trained on large amounts of data that can be used for many different applications and are the subject of a United Kingdom expert task force. Foundation models would require a yearly “social responsibility report” by an independent body composed primarily of external members. Articles 25 and 50 also call for ensuring human control of AI through both human oversight and technical means, including when AI operates autonomously. As my colleagues at Concordia AI are documenting in forthcoming work, several influential Chinese experts have already expressed concerns about frontier AI risks, and this model law's provisions on foundation models and human control offer further evidence of a degree of convergence between Chinese and international expert views. However, how such safeguards would be implemented in technology and policy remains to be seen.
China's existing system of an algorithm registry and security reviews for certain generative AI products already possesses strong policy levers, which the negative list system proposed by the model law could augment if effectively implemented. However, no matter how complete the legal framework, the rapid pace of AI development demands agility, and China will likely adjust regulations and their execution as different concerns are prioritized. Thus, understanding Chinese perceptions of AI risks is a critical complement to analyzing laws and regulations, and foundation models merit particular attention given their outsized role in AI development.
MINGLI SHI
The draft model law suggests the formation of a new agency, the National AI Office, to coordinate and supervise the administration of AI technology. The aim is to prevent the potential regulatory chaos, often likened to the mythical notion of “九龙治水” (jiǔlóngzhìshuǐ, nine dragons governing water), where multiple regulatory bodies assert administrative authority over the same subject. A similar concern was taken into consideration during the development of China’s privacy legislation, with respected scholars like Professor Zhou Hanhua advocating for a new agency focused on personal data protection.
Despite this awareness and later endeavors, as well as the fact that the Personal Information Protection Law authorizes the Cyberspace Administration of China to coordinate relevant regulatory work, the supervision of China's privacy landscape remains fragmented across various government bodies, including the State Administration for Market Regulation, the Ministry of Industry and Information Technology, the Ministry of Public Security, and other sector-specific regulators. Therefore, to achieve the drafters' purpose, the new authority, the National AI Office, would need to be explicitly granted concrete powers of investigation, regulation, and enforcement. At the same time, it is also crucial to limit the authority of existing government bodies that might have skin in the AI game. Without these measures, the nine-dragons-governing-water problem would persist, and the new agency might also struggle to effectively facilitate inter-agency collaboration and coordination.
Additionally, the draft model law introduces the idea of a “negative list” approach to prevent regulation from stifling the progress of AI technology. Activities on the list would require specific administrative approval, while those not included would only need to undergo regulatory filing. The success of this approach clearly hinges on what the list would look like, which could tip the balance in either direction. If the list becomes extensive, broad, or vague—causing confusion or uncertainty for affected entities—it might still hold back technological progress. Right now, the draft model law lacks details or criteria on this question.
HUNTER DORWART
Associate, Bird & Bird
This expert draft AI Law provides an important window into the approach Chinese policymakers may take with respect to future AI rules and the extent to which these rules converge and diverge with other global regulatory approaches. While much can be said about the scope, application, and requirements of the draft law itself, one important question is how the separate legal framework for AI will relate to existing regulations (including those that cover similar digital technologies) both within and outside of China.
On the one hand, the expert draft law defers in many places to technical standards and regulations already in force—indicating that organizations will be expected to implement these where appropriate. These instruments generally apply to a narrower group of services or technical components. For instance, Chinese authorities have recently issued guidance on machine learning, algorithm ethics, and labeling and annotating training data, while promulgating regulations on recommender systems, deep synthesis, and generative AI. With the draft AI law, this sector-specific approach would be complemented by a risk-based system for AI products. Although there is no list of prohibited applications, some services will need to obtain a license and submit to heightened screening before deploying to market.
On the other hand, the expert draft law introduces provisions that overlap with Chinese data protection law, which could raise new challenges for organizations. Some of these obligations can be folded into pre-existing compliance programs. Disclosing transparency information in privacy policies; conducting multi-purpose impact assessments; maintaining processing records and training logs; appointing personnel responsible for data protection and AI ethics; implementing security measures; and providing information to regulatory authorities all come to mind. Additionally, the principles-based approach set forth in the expert draft law resonates with existing standards. Covered entities—including AI researchers, developers, providers, and users (a typology also found in the EU AI Act)—must generally abide by varying requirements, such as explaining how the AI products work and ensuring fairness in input data for algorithm training.
However, there are other areas where the provisions will be challenging for organizations to reconcile with their existing compliance obligations. Allocating responsibilities with entrusted processors under the PIPL and the different roles in the expert draft law may raise novel questions for apportioning liability in commercial agreements, while the potential overlap of different filing requirements and security assessments will make global compliance programs more difficult to operationalize. As an example, one of the stipulated goals of AI governance in China is to put safeguards on the use of automated decision-making (ADM) that negatively impacts individuals. Already the PIPL contains provisions to address this—but to my knowledge, these have not been enforced.
Overall, it remains to be seen how Chinese policymakers will fashion AI rules that balance the goals of technological development, national security, and the protection of individuals' rights and interests in a way that complements China's larger framework for data governance. It will likewise be critical to gauge how enforcement oversight will be divided and co-managed by different bodies, including the newly formed National Data Bureau and the Cyberspace Administration of China (CAC). We can expect greater clarity in the near future.
JOHANNA COSTIGAN
Writer and editor
Like China's actual AI regulations, the “Model Law” from a team at the Chinese Academy of Social Sciences (CASS) shows that Chinese AI researchers share, unsurprisingly, many of the same concerns as their international counterparts. Everyone wants to innovate (and generate profit) while maintaining safety (or appearing to).
The CASS authors propose walking this line by creating an adjustable “negative list” of AI firms and services that are off-limits, thereby clarifying that everything else is not. Wang Jun, one of the authors, noted that the AI industry's scalability is currently limited by consumers' distrust of it. He explicitly linked transparent regulation with industrial development. “Clear and specified obligations can significantly improve the business environment,” he said.
That sentiment stands in stark contrast to the “pause” and “risk mitigation” letters that Western AI business leaders, among others, have endorsed. Signed by Elon Musk and Sam Altman, respectively, these letters treat AI like a faraway meteor heading toward Earth rather than a product that companies have actively developed and deployed.
As researchers like Emily Bender have pointed out, the problem with these letters lies in their support for “longtermist” approaches to controlling AI. That logic enables an erasure of known, current problems and quandaries introduced by artificial intelligence in service of humoring the sometimes boyish, other times valid visions of worst-case scenarios. Even as it tries to both stimulate and regulate AI, the CASS Model Law usefully lacks comparable hypocrisy—probably because it is a fully conceived draft law written by researchers rather than a one-line doomsday prediction.
The Model Law proposes that China establish a National Artificial Intelligence Office (a worldwide first), stipulates separate requirements for developers and providers, and creates a negative list. These changes would make AI development more transparent—if not to the people, then at least to the party-state.
The amount of good that could do is debatable. But it is surely better than the United States' zero-regulation approach, which has helped dangerous narratives with conspicuous industry support gain traction. These range from Altman's warning that regulation might “slow down American industry in such a way that China or somebody else makes faster progress” to gestures toward a possible apocalypse that distract from reality. It will be worth watching how many of the CASS proposals end up in the official draft AI law once it is introduced to the NPC. Hopefully, the final version maintains this draft's emphasis on transparency—and immediacy.
Either way, though it is by no means unfiltered, the CASS draft shows that at least parts of China's AI community are turning over the right stones. That's also the case in the U.S. The difference is that the Americans making good points do not yet have lawmakers' full attention. Maybe China's eventual AI law will change that by adding a regulatory layer to the U.S.-China AI competition. For political-logistical reasons, any lessons learned from China's regulatory experiments would have to remain unsaid.
About DigiChina
Housed within the Program on Geopolitics, Technology and Governance at Stanford University, DigiChina is a cross-organization, collaborative project to provide translations, factual context, and analysis on Chinese technology policy. More at digichina.stanford.edu.