TikTok shows less ‘anti-China’ content than rivals, study finds

Analysts created 24 new accounts across ByteDance Ltd.-owned TikTok, Meta Platforms Inc.’s Instagram and Alphabet Inc.’s YouTube to replicate the experience of American teenagers signing up for social media.
Bloomberg

TikTok, a social media and video sharing application.

Credit: Reuters File Photo

By Alicia Clanton and Aisha Counts


Videos condemning or negatively depicting China’s human rights abuses are more difficult to find on TikTok than on rival networks, a new study finds, suggesting that US users may be getting an incomplete picture of the country’s history when searching for key terms or phrases.

US TikTok users who search for terms like “Tiananmen,” “Tibet,” and “Uyghur” — words commonly used in Chinese Communist Party propaganda — see less “anti-China” content than those same searches produce on Instagram and YouTube, according to a new study from the Network Contagion Research Institute at Rutgers University.

Analysts created 24 new accounts across ByteDance Ltd.-owned TikTok, Meta Platforms Inc.’s Instagram and Alphabet Inc.’s YouTube to replicate the experience of American teenagers signing up for social media. When the analysts searched for keywords often related to the country’s human rights abuses, TikTok’s algorithm displayed a higher percentage of positive, neutral or irrelevant content than both Instagram and YouTube, the study found.

“What sets TikTok apart is that the accurate information about China’s human rights abuses is most successfully crowded out on the platform,” said Joel Finkelstein, director and chief science officer of the NCRI. In a survey conducted alongside the study, people who used TikTok for three hours or more daily were significantly more positive about China’s human rights record than non-users.

A TikTok spokesperson pushed back on the NCRI’s findings, saying that creating new accounts and searching for these keywords does not reflect a real user’s experience on the app. He also pointed out that TikTok is a newer service than its rivals and that some of the incidents happened long before TikTok existed.

“This non-peer reviewed, flawed experiment was clearly engineered to reach a false, predetermined conclusion,” a TikTok spokesperson said in a statement. “Creating fake accounts that interact with the app in a prescribed manner does not reflect real users’ experience, just as this so-called study does not reflect facts or reality.”

TikTok, owned by a Beijing-based company, has faced intense scrutiny from US lawmakers and regulators concerned about the Chinese government’s influence over the social media app and its potential threat to national security. Earlier this year, President Joe Biden signed a law forcing ByteDance to sell the app by Jan. 19 or face a ban in the US.

The idea that TikTok could be used to spread pro-China messaging to American citizens, especially young people, has been a key factor in Congress’s efforts to ban the app. During congressional testimony in March, FBI Director Christopher Wray warned of China’s ability to “conduct influence operations” on TikTok, saying those efforts would be “extraordinarily difficult” to detect. The NCRI researchers acknowledged that their study does not offer “definitive proof” that the Chinese government or TikTok employees have intentionally manipulated the algorithm, noting that hashtags are added to content by users.

The analysis builds on NCRI’s previous findings that TikTok amplifies or demotes content based on whether it aligns with the interests of the Chinese government. That report was cited heavily by US politicians who see the app as a threat to national security. TikTok Chief Executive Officer Shou Zi Chew called the earlier report misleading when questioned about its findings during a Senate hearing earlier this year. TikTok pointed Bloomberg to a critique of that research published by the Cato Institute, a libertarian, free-market think tank. (One of the Cato Institute’s key donors and former board members, Jeffrey S. Yass, is also a significant shareholder in TikTok’s parent company, ByteDance.)

ByteDance and TikTok executives have repeatedly denied allegations that the Chinese government uses the social media app to disseminate propaganda, but those arguments have failed to placate US government officials. TikTok has since sued the US government to overturn that law, arguing that Congress has not substantiated its claims of the app being a national security threat.

The NCRI is an independent non-profit organisation composed of political scientists, security experts and research analysts. The group receives funding from Rutgers University, the British government, and “private philanthropic families,” Finkelstein said.

To conduct the study, researchers collected more than 3,400 videos related to the keywords ‘Uyghur’, ‘Xinjiang’, ‘Tibet’ and ‘Tiananmen’, terms they consider important to the Chinese government’s messaging. They searched for each keyword on TikTok, Instagram and YouTube and viewed the first 300 or so videos displayed. Each video was then classified as pro-China, anti-China, neutral or irrelevant by up to three human reviewers. The researchers pointed out that their classification of content as pro-China or anti-China involved “subjective judgment,” and cautioned that “although efforts were made to minimize bias, the potential for interpretative differences remains.”
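As an illustration of the tally described above, the short Python sketch below classifies a handful of hypothetical search results by majority vote across up to three reviewer labels and then computes the share of each class. The video names, labels and the majority-vote aggregation rule are assumptions made for illustration; the article does not specify how reviewer disagreements were resolved.

```python
from collections import Counter

# Hypothetical reviewer labels for videos returned by one keyword search on
# one platform, using the study's four classes.
videos = {
    "video_1": ["pro-China", "pro-China", "neutral"],
    "video_2": ["anti-China", "anti-China"],
    "video_3": ["irrelevant"],
    "video_4": ["neutral", "pro-China", "pro-China"],
}

def majority_label(reviews):
    """Pick the label most reviewers agreed on (ties broken arbitrarily)."""
    return Counter(reviews).most_common(1)[0][0]

# Final label per video, then the share of each class among the results.
labels = [majority_label(r) for r in videos.values()]
shares = {label: count / len(labels) for label, count in Counter(labels).items()}
print(shares)  # e.g. {'pro-China': 0.5, 'anti-China': 0.25, 'irrelevant': 0.25}
```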

Videos that highlighted Uyghurs’ plight in China, mentioned Tibetan liberation or contained imagery based on the massacre at Tiananmen Square were classified as anti-China content by reviewers. Official CCP promotional messages, messages promoting the narrative that Tibet has been liberated and patriotic images of Tiananmen Square with no mention of the massacre were considered pro-China content.

The analysis found that TikTok contained the highest proportion of pro-China content across all three platforms for searches of the words ‘Tibet’ and ‘Tiananmen’.

More than 25 per cent of search results for ‘Tiananmen’, for example, were considered pro-China, which researchers defined as patriotic songs, travel promotions or scenic representations that make no mention of the 1989 massacre there. In comparison, only about 16 per cent of search results on Instagram were pro-China, and just about 8 per cent on YouTube. A spokesperson for Instagram declined to comment. YouTube representatives didn’t immediately respond to a request for comment.

In some cases, Instagram and YouTube showed higher rates of pro-China content than TikTok. For ‘Uyghur’ and ‘Xinjiang’, about 50 per cent of searches on YouTube returned positive content, compared to less than 25 per cent on TikTok. Researchers attributed the results to a handful of influential accounts created by, or affiliated with, state actors.

(Published 10 August 2024, 15:34 IST)