Reconstructing safeguard mechanisms for information security

By Wang Xiaolu / 12-28-2023 / Chinese Social Sciences Today

Ensuring information security Photo: TUCHONG


Ensuring information security helps reduce volatility within human society. However, with the rapid development of artificial intelligence technology, traditional information security faces new challenges.


First, an imbalance between information production and information acquisition may arise when humans gather massive amounts of information with the help of AI, posing risks to the security and stability of society and to the privacy of citizens. Second, AI with supercomputing capabilities can control volumes of information that humans are unable to understand or process, potentially raising new ethical issues or exacerbating existing ones, and ultimately threatening public order. Third, if AI intrudes into human information systems to manipulate relevant information or implant false data, human judgments or decisions based on that data could be compromised.


AI technology has elevated the risk of information leakage and rendered information confrontation increasingly intelligent and covert. Building a forward-looking information security system will enable the timely identification of information security risks and the mitigation of their adverse effects on national security and economic development. The author advocates drawing on ecological principles to build safeguard mechanisms for information security, integrating the various elements of information into an ecologically balanced whole.


Future AI ecosystems are likely to evolve into complex systems in which multiple types of intelligence coexist and interact, which calls for holistic, ecological approaches to information governance. The development and application of new technology should prioritize both security and scalability. Ecological reconstruction of safeguard mechanisms for information security can be achieved through process reengineering and system optimization, with AI technology serving as an important technical support.


Technology should be utilized rationally, on the basis of reflection. Reflecting on technocentrism corrects the misconception that technology can resolve all problems, while reflecting on anthropocentrism counters humanity’s blind arrogance, contempt for nature, and denial of objective laws and technology.


Given the unpredictable consequences of information leakage, information security assurance should focus on prevention. In the era of AI, information dissemination is characterized by high speed, covert channels, and difficulty in identifying risks. The construction of safeguard mechanisms for information security should therefore strengthen the capability to address weak-signal risks, which involves capturing risk signals, predicting how risks will develop, assessing their consequences, and managing risk behavior.


These four stages of risk management all necessitate the use of new technology. It is thus necessary to take the current level of AI technology into account when developing information security risk identification technologies tailored to weak-signal environments.
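To make the four stages concrete, the following is a minimal illustrative sketch of a weak-signal risk pipeline. It is not drawn from the article; all names, thresholds, and the linear projection used here are hypothetical simplifications of the capture, prediction, assessment, and management steps described above.

```python
# Illustrative sketch only: a minimal four-stage weak-signal risk pipeline.
# All names, thresholds, and the projection model are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class RiskSignal:
    source: str          # where the signal was observed
    strength: float      # 0.0-1.0, how faint or strong the signal currently is
    growth_rate: float   # estimated change in strength per observation period


def capture(signals, noise_floor=0.05):
    """Stage 1: keep only signals above an assumed noise floor."""
    return [s for s in signals if s.strength >= noise_floor]


def predict(signal, periods=3):
    """Stage 2: naive linear projection of signal strength over a few periods."""
    return min(1.0, signal.strength + signal.growth_rate * periods)


def assess(projected_strength):
    """Stage 3: map projected strength to a coarse consequence level."""
    if projected_strength >= 0.7:
        return "severe"
    if projected_strength >= 0.3:
        return "moderate"
    return "minor"


def manage(signal, level):
    """Stage 4: choose a response appropriate to the assessed level."""
    actions = {"severe": "escalate and contain", "moderate": "monitor closely", "minor": "log"}
    return f"{signal.source}: {actions[level]}"


if __name__ == "__main__":
    observed = [RiskSignal("forum post", 0.08, 0.15), RiskSignal("sensor log", 0.02, 0.01)]
    for s in capture(observed):
        print(manage(s, assess(predict(s))))
```

In practice, the prediction and assessment stages would rely on far richer models than the linear projection shown here; the sketch is only meant to show how the four stages hand results to one another.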


It is essential to build information security risk intervention mechanisms that align with human values, a shared vision, and shared ethics. The world could face catastrophic consequences if AI systems are hacked or escape human control. For this reason, information infrastructure and critical research infrastructure cannot rely entirely on AI technology. It is necessary to build information disaster recovery mechanisms that incorporate public security and emergency management information, crime hotspots, and medical test results. 


It is also necessary to establish legal and ethical rules for AI products, so that their developers can be held legally responsible and hidden information security risks in their products can be prevented. An information supervision mechanism with the government as the primary responsible party should be established to strengthen government oversight. The extension of AI application scenarios must be grounded in market demand so as to form an integrated ecosystem of technology development and application, information security, and resource allocation.


The government should formulate and adopt uniform technical standards to ensure that information security prevention and control technologies are well targeted, and should make full use of AI and natural language recognition technology to prevent and mitigate information security risks in a timely manner.
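As a rough illustration of the kind of automated screening such technology could support, the sketch below applies simple rule-based checks to outbound text. The patterns are hypothetical examples standing in for the natural-language screening the article envisions; a real system would use trained language models and policy-defined rules rather than a handful of regular expressions.

```python
# Illustrative sketch only: rule-based screening of outbound text for
# potentially sensitive content. The patterns below are hypothetical examples.
import re

SENSITIVE_PATTERNS = {
    "id_number": re.compile(r"\b\d{17}[\dXx]\b"),        # e.g. an 18-character ID-like string
    "phone": re.compile(r"\b1\d{10}\b"),                 # e.g. an 11-digit phone-like string
    "keyword": re.compile(r"(confidential|internal only)", re.IGNORECASE),
}


def screen(text):
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]


if __name__ == "__main__":
    sample = "Internal only: contact 13800138000 before release."
    hits = screen(sample)
    if hits:
        print("Flag for review:", ", ".join(hits))   # would trigger manual or automated review
    else:
        print("No sensitive patterns detected.")
```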


Wang Xiaolu is an associate professor in the School of Computer Science at Nanjing University of Posts and Telecommunications. 





Edited by WANG YOURAN