Examining fake news from technical, economic perspectives

BY DONG SHUHUA and WANG SIWEN | 12-22-2022
Chinese Social Sciences Today

Web-shaped communication networks based on algorithmic recommendation and social media make the eradication of false news extremely difficult. Photo: CFP


False news refers to information that is deliberately fabricated, with no verifiable facts, sources, or quotes. It is disseminated by those who know it to be false, with the intent of distorting the truth and misleading its audience. Many factors are responsible for the proliferation of false news. This article explores the issue mainly from the perspective of intelligent media technologies and the economic logic of distribution platforms, and proposes potential solutions.


‘Web’ of fake news

Artificial intelligence, big data, and social media have made fake news harder to identify. Artificial intelligence subverts the traditional model of news production and dissemination: technologies such as automated algorithms and social bots can quickly produce vast amounts of content, which can then be embedded in people’s social networks and spread rapidly in a peer-to-peer manner.


In the past, fake news took the form of fabricated texts and pictures; its form was relatively simple, and people could easily identify it through basic logical reasoning. The development of artificial intelligence, however, has transformed news from a single form into a multi-modal one, in which non-verbal elements such as expressions, movements, and music have become important components of news content. While multi-modal content produced by smart media can be enjoyable, it also makes fake news much harder to identify. In the production of short news videos, for example, “deepfake” and artificial synthesis technologies can blend multi-modal inputs such as audio, movement, and facial expressions to enhance communication, but they can also be used to produce fake video content that is almost indistinguishable from the real thing.


Web-shaped communication networks based on algorithmic recommendations and social media make the eradication of false news extremely difficult. In the era of artificial intelligence, the spread of false news more closely resembles weaving a “web” than creating a “node.” For example, in order to discredit someone, fake news propagators will create a series of malicious records about him or her, which the media will then report. Those who grow suspicious may venture to check the facts, only to be inundated with corroborating fake reports, websites, screenshots, or other fake “evidence” prepared in advance. To make matters worse, armies of “bots” can be instantly engaged to push fake news items to a greater number of potential users via “likes” and “forwards,” forming a trending topic and intensifying users’ “algorithmic echo chambers” or “information silos.” Fake news and fake evidence are self-reinforcing, spinning a web that captures users, who cannot help but believe the authenticity of what they see. As technology advances, such webs will grow not only in size but also in sophistication.
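A simplified sketch of this amplification step is given below. It is a hypothetical illustration of the mechanism, not a description of any real platform: the trending threshold, the bot counts, and the amplify helper are all invented for the example.

```python
# Hypothetical sketch: coordinated bot accounts "like" and "forward" a planted
# item until it crosses an invented trending threshold, after which ordinary
# users encounter it as a seemingly popular topic. All numbers are illustrative.

TRENDING_THRESHOLD = 10_000   # interactions needed to enter the trending list

def amplify(interactions: int, bots: int, actions_per_bot: int) -> int:
    """Each bot adds a fixed number of likes/forwards to the item."""
    return interactions + bots * actions_per_bot

interactions = 120                                        # genuine engagement
interactions = amplify(interactions, bots=500, actions_per_bot=25)

if interactions >= TRENDING_THRESHOLD:
    print(f"{interactions} interactions: the planted item enters trending "
          "and is now served to ordinary users as a 'popular' topic")
```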


At the same time, news items on social media may contain elements of truth while critical details are false. This so-called “misleading information” is characterized by manipulative, offensive, and political content attributes. During the 2016 US presidential election, such misleading information accounted for the majority of fake news.


Moreover, the way smart media algorithms recommend personalized content can trap users in highly homogeneous “information echo chambers.” A lack of reflection on different views and of a clear understanding of reality, coupled with the interlocking web of false and factual news, leaves people less able to identify and debunk false news.
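A minimal sketch can make this feedback loop concrete. In the hypothetical Python example below, a recommender boosts items whose topic the user has already clicked; those items are then clicked again and boosted further, so the feed gradually narrows. The topic labels, the weight, and the assumption that the user clicks everything shown are simplifications for illustration, not a description of any real platform’s algorithm.

```python
# Hypothetical sketch of an "information echo chamber": ranking items by past
# engagement with their topic makes the feed increasingly homogeneous.
import random
from collections import Counter

random.seed(0)

TOPICS = ["politics", "sports", "science", "entertainment", "finance"]
catalog = [{"id": i, "topic": random.choice(TOPICS)} for i in range(500)]

engagement = Counter({"politics": 1})  # one initial click seeds the loop
ENGAGEMENT_WEIGHT = 0.1                # how strongly past clicks bias ranking

def recommend(catalog, engagement, k=10):
    """Score each item as random 'relevance' plus an engagement bonus."""
    def score(item):
        return random.random() + ENGAGEMENT_WEIGHT * engagement[item["topic"]]
    return sorted(catalog, key=score, reverse=True)[:k]

for round_no in range(8):
    feed = recommend(catalog, engagement)
    for item in feed:                  # assume the user clicks every item shown
        engagement[item["topic"]] += 1
    mix = Counter(item["topic"] for item in feed)
    print(f"round {round_no}: {dict(mix)}")
# The topic that received the first click is boosted, clicked more, and boosted
# further: the self-reinforcing narrowing described in the paragraph above.
```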


Traffic is king

Nowadays, the main channels through which people get news and opinions have shifted from traditional media to social platforms such as Weibo, WeChat, Douyin, and Kuaishou in China. These and other social media platforms operate on “traffic economy” models, in which revenue is generated not by user fees but by the vast amounts of accumulated user data and attention. Traffic and daily active users are what network platforms profit from: more traffic and more daily active users attract more advertising revenue, and the big data generated by users on social platforms is also of great commercial value. Translating users and traffic into advertising revenue has become the most rational and common way for social media outlets to make money today.


In the post-truth era, emotions sometimes carry far more power than authentic news, and some social media platforms have begun to exploit this phenomenon to create wealth. In the name of spreading positive energy, they deliberately fabricate fake news and release information that incites netizens’ emotions, stirring up controversial topics such as gender antagonism, regional discrimination, the wealth gap, and family ethics. In doing so, they take advantage of netizens’ sense of sympathy, justice, and curiosity for illegal gains.


One such case was described in the “2021 Research Report on Media Ethics” published by The News Reporter magazine. To attract attention, a Douyin user posted that his parking space had been occupied by a BMW for no reason; in his anger, he blocked the BMW with his Land Rover and claimed to have placed a vase worth millions in his car. The words “BMW,” “Land Rover,” and “vase worth millions” quickly aroused netizens’ curiosity about the luxurious lives of the rich, and, coupled with the ups and downs of the plot, the user soon gained more than 700,000 followers. Since traffic is paramount for these platforms, they lack sufficient incentives to limit, delete, and punish such sensationalism, and thus acquiesce to its existence and encourage its spread.


When the internet in China was in its early stages of development, most web surfers had secondary or higher education, and rational discourse on public forums was commonplace. Over the past 20 years, however, the demographics of internet usage have shifted drastically as many users with lower education levels have come online. The proportion of forum participants with secondary and higher education has therefore declined significantly, and the implications for civil discourse are evident.


The advantages of intelligent technology have broken down the restrictions of centralized traditional media, transferring the power to produce news and define truth to the public. Under the traffic economy model, some independent media producers take advantage of controversial subjects and social conflict to fabricate, intentionally exaggerate, highlight, or omit facts so as to attract traffic and gain profit. This inevitably leads to the proliferation of fake news and aggravates social stratification.


Multi-party participation 

The proliferation of fake news has been widely blamed for increasingly serious social crises, exemplified by a lack of rationality and constant conflict in the public sphere. The governance of fake news is therefore an urgent task. With the rapid development of intelligent media and the expansion of globalization, traditional governing authorities have been side-stepped, and the diversification of stakeholders requires the joint participation of multiple parties in the governance of fake news.


To start with, fact-checking and debunking efforts should be strengthened. Traditional media outlets have already established fact-checking platforms such as the “China Internet Affairs” of Xinhua News Agency, the “Rumor-refuting Broadcast” of CCTV, the “Truth-seeking” of People’s Daily, and the “Rumor-shredder” of China Economic Net. Chinese tech giant Tencent’s “Truth” platform is another important source for fact-checking, and in 2018 Tencent also launched the mini program “Truth Dispeller” for WeChat. “Truth” adopts an open verification method, inviting professionals, institutions, and media from all fields to participate in fact-checking and attracting users to take part through various interactive games. Verification results are marked to the right of the title of each verified article with one of three labels: true, doubtful, or false. Similarly, the WeChat official account “Well-documented” not only clarifies false news, but also strives to introduce professional journalistic standards to the public.


Despite these efforts, fact-checking should be implemented more rigorously on mass media and social media platforms. The qualifications of inspection institutions, the scope of inspection, and the rights of those inspected have not been clearly defined, and relevant regulations need to be improved to protect citizens’ right to know, right to express, and right to privacy.


Second, the government should urge platforms and media producers to assume greater social accountability, and severely punish those responsible for causing harm. Platforms should be encouraged to adopt more diversified algorithmic recommendation mechanisms, adjust their evaluation systems for suggested content, and recalibrate their value orientation from maximizing traffic to prioritizing public value. A transparent supervision system for algorithmic content recommendation should be put in place so that relevant laws and regulations can be effectively enforced.
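To make this recalibration concrete, the hypothetical sketch below contrasts a purely traffic-maximizing ranking with one that blends predicted engagement with public-value signals such as source credibility and topical diversity. The field names and weights are illustrative assumptions, not any platform’s actual formula.

```python
# Hypothetical sketch: re-ranking content by a blend of engagement and
# public-value signals instead of engagement alone. All fields and weights
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_clicks: float    # engagement forecast, normalized to [0, 1]
    source_credibility: float  # e.g. informed by fact-checking history, [0, 1]
    topic_novelty: float       # how different from the user's recent feed, [0, 1]

def traffic_score(item: Item) -> float:
    """Pure traffic-maximizing ranking."""
    return item.predicted_clicks

def public_value_score(item: Item, w_clicks=0.4, w_cred=0.4, w_novel=0.2) -> float:
    """Blended ranking that trades some engagement for credibility and diversity."""
    return (w_clicks * item.predicted_clicks
            + w_cred * item.source_credibility
            + w_novel * item.topic_novelty)

items = [
    Item("Outrage bait about a staged parking dispute", 0.95, 0.10, 0.05),
    Item("Verified report with named sources",          0.55, 0.90, 0.60),
]

print(sorted(items, key=traffic_score, reverse=True)[0].title)       # outrage bait wins
print(sorted(items, key=public_value_score, reverse=True)[0].title)  # verified report wins
```

Under the blended score, the verified report outranks the outrage bait even though it is predicted to attract fewer clicks, which is the shift in value orientation the paragraph above describes.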


Third, the public’s intelligent media literacy should be on the agenda. Schools and communities can carry out media literacy education to help the public better understand the techniques and motivations of fake news producers, distinguish the authenticity and types of information, and maintain a healthy online public opinion environment. More could be done to cultivate the public’s understanding of content recommendation algorithms and help them avoid the “filter bubbles” those algorithms create.


Finally, close cooperation between international agencies, governments, and NGOs should be brought into play to build a transnational network for tackling the issue of fake news. The role of relevant UN agencies can be leveraged to smooth communication channels for transnational policy making, share the governance experience of various countries, debate global governance models, and encourage various global actors to participate in decision-making. Only through a concerted effort by all countries can this cross-border issue be resolved for the sake of humanity.


Dong Shuhua is from the Shi Liangcai School of Journalism and Communication at Zhejiang Sci-tech University; Wang Siwen is from the School of Journalism and Communication at the Communication University of Zhejiang.




Edited by YANG XUE