Improving generative AI regulation

By Zhou Weiming / 08-17-2023 / Chinese Social Sciences Today

The 2023 World Artificial Intelligence Conference was held from July 6 to 8 in Shanghai. Photo: Chen Yuyu/CNSphoto


Recently, there has been growing recognition of the social risks posed by generative artificial intelligence, with concerns raised over issues such as data privacy and algorithm security. In response, the European Commission proposed the Artificial Intelligence Act in April 2021, and China issued the Interim Measures for the Administration of Generative Artificial Intelligence Services in July 2023. However, current legislation is not fully prepared for the emergence of generative AI, and the regulations promulgated by different countries share several inadequacies that remain to be remedied.


Inadequacies 

While it is widely acknowledged that generative AI poses certain risks, such as data breaches, misinformation, and infringement of intellectual property rights, no consensus has been reached on who should be held liable for which risks, and in what ways. Given the chain-like nature of generative AI, from content production to delivery, it is necessary to build a unified liability framework that allows affected parties to file claims at any point along that chain. Although there have been some legal attempts to define liability, they tend to be sporadic and unsystematic because legislative concepts have not been updated in a timely manner.


Despite the heated debate over AI risks, legislative action has been slow to materialize. Moreover, existing discussions, laws, and regulations focus on salient topical issues such as algorithmic discrimination, privacy, and liability, without adequately addressing potential long-term problems. It is important to recognize that our understanding of generative AI's capabilities is limited to what we are actively testing and exploring; its actual capabilities may far exceed our imagination and carry unknown risks.


Present legislation emphasizes intervention at the pre-training stage to ensure that AI-generated output complies with regulatory requirements. However, research suggests that focusing on applications, particularly high-risk ones, could be more effective. Because generative AI models are highly variable and versatile, developers may not be capable of anticipating and comprehensively evaluating potential risks.


Since it is still technically impossible to preclude high-risk applications, generative AI is likely to be classified as a high-risk system. This means developers are at constant risk of being held accountable under the current regulatory framework, which places too heavy a burden on them and is not conducive to the healthy development of generative AI. Nonetheless, two issues should be addressed at the pre-training stage: one involves algorithmic non-discrimination, and the other involves data security and personal information protection.


Remedies 

Generative AI liability should be defined in a single, unified piece of legislation to prevent conflicts between different laws and to spare victims the difficulty of presenting evidence across fragmented regimes. A key legislative issue is balancing technological innovation with the protection of rights. In this regard, two aspects deserve attention.


The first concerns the reasonable application of the principles of imputation, so that the scope and extent of rights protection can be determined appropriately. The second concerns preserving sufficient public space for sustainable technological innovation. It is necessary to give full weight to the needs of industrial development and technological innovation, as well as to the innovative enthusiasm of generative AI developers, achieving a dynamic balance between developer protection and victim protection based on the principle of proportionality.


Legislators should continue to monitor the evolution of generative AI and evaluate its potential risks so that they can learn through regulation. The evolving nature of information about emerging technologies and their applications should be taken into account in designing a more adaptive and forward-looking regulatory framework.


It is unreasonable to demand that legislators anticipate all potential risks, and revising legislation promptly to accommodate new circumstances is also difficult given the complexity of legislative processes. Therefore, we should consider building a flexible regulatory framework, utilizing blank norms and general terms to leave room for future regulation. At the same time, it is essential to strike a balance between the clarity and the open-endedness of such clauses.


It is unrealistic to require generative AI developers to build an all-encompassing risk management system capable of preventing every future risk. A better approach would involve collaboration between developers (providers) and authorized users of high-risk applications, such as the generation of sensitive content or advice on socio-economic decisions. Under this arrangement, necessary information is shared among all parties, developers (providers) take responsibility for verifying authorization, and users are liable for misuse.


Zhou Weiming is a research associate at the China Institute of Applied Jurisprudence.


Edited by WANG YOURAN