Hours after the Israel-Hamas conflict erupted on Oct 7, Bharat Nayak, a fact-checker in the east Indian state of Jharkhand, noticed a surge of disinformation and hate speech directed at Muslims on his dashboard of WhatsApp messages.
The viral messages from hundreds of public WhatsApp groups in India contained graphic images and videos, including many from Syria and Afghanistan falsely labelled as being from Israel, with captions in Hindi that called Muslims evil.
"They are using the crisis to spread misinformation against Muslims, saying they will attack Hindus in a similar way, and to falsely accuse opposition parties and others of supporting Hamas, and calling for their elimination," Nayak said.
"The content is very graphic, the messaging is extreme, and it gets forwarded many times, as there is no content moderation on WhatsApp," he told the Thomson Reuters Foundation.
The conflict, which has killed over 1,400 people in Israel and more than 8,000 in the Gaza Strip, has triggered a surge in disinformation and hate speech against Muslims and Jews across social media platforms from India to China to the United States.
Meta and X, formerly known as Twitter, said they have removed tens of thousands of posts, but the volume of disinformation and hate speech underlines the failure of social media platforms to boost content moderation, particularly in languages other than English, say digital rights experts.
"We've tirelessly drawn their attention to these issues over the years, but social media platforms continue to fall short when it comes to combating hate speech, incitement and disinformation," said Mona Shtaya, a nonresident fellow at The Tahrir Institute for Middle East Policy, a non-profit.
"The recent layoffs in trust and safety teams across platforms underscore this deficiency," she said.
"Additionally, their resource allocation - based on market size, rather than assessed risks - exacerbates the challenges faced by marginalised communities including Palestinians and others."
In a blog post, Meta - which owns Facebook, Instagram and WhatsApp - said it had "quickly established a special operations centre staffed with experts, including fluent Hebrew and Arabic speakers," and that it is working with third-party fact-checkers in the region "to debunk false claims".
X did not respond to a request for comment.
Real-world harms
Failures of content moderation are not limited to the decades-long Israel-Palestine conflict.
UN human rights investigators said in 2018 that the use of Facebook had played a key role in spreading hate speech that fuelled violence against the ethnic Rohingya community in Myanmar in 2017.
Rohingya refugees in 2021 sued Meta for $150 billion over allegations that the company's failures to police content, and its platform's design contributed to the real-world violence.
Meta has acknowledged being "too slow" to act in Myanmar.
Last year, a lawsuit against Meta filed in Kenya accused the platform of allowing violent and hateful posts from Ethiopia on Facebook, and its recommendation systems of amplifying violent posts that inflamed the Ethiopian civil war.
The company has faced similar accusations related to violence in Sri Lanka, India, Indonesia and Cambodia.
The surge in disinformation during the current Israel-Hamas conflict underscores that "platforms do not have the right systems in place," said Sabhanaz Rashid Diya, former head of policy at Meta for Bangladesh.
"The historical under-investment in specific parts of the world and specific languages is now being tested in this crisis," said Diya, founding board director of Tech Global Institute, a thinktank.
"Some of the challenges we're seeing around the information ecosystem are consequences of not building capacity; these are consequences of automated systems, staffing issues; not having sufficient fact-checkers in these markets; not having policies that are contextualised for local regions," she added.
Whack-a-mole
The Arab Centre for Social Media Advancement, or 7amleh, has documented more than half a million instances of hate speech and incitement to violence in Hebrew against Palestinians and their supporters.
There has also been a more than 50-fold increase in the absolute volume of anti-Semitic comments on YouTube videos, the Institute for Strategic Dialogue in London said in a report this week.
State-affiliated accounts from Iran, Russia and China are also spreading disinformation and hate speech on Facebook and X, it said, adding that this could contribute to "polarisation and deepening mistrust towards democratic institutions and the media."
Reports of anti-Semitic and Islamophobic incidents have surged worldwide, including assaults, vandalism and the fatal stabbing of a 6-year-old Palestinian boy in the United States.
Such incidents are a result of hate speech online, said Marc Owen Jones, who researches disinformation in the Middle East.
"Much of the disinformation is violent, graphic and highly emotive - designed to provoke polarisation and turn people against each other," said Jones, an associate professor at Hamad bin Khalifa University in Qatar.
It is "driving a sense of righteousness and tribalism that contributes to violence, as we've seen as far away as Dagestan and Illinois. The upshot is dire," said Jones.
Yet despite heated conversations around the need for better content moderation, trust and safety is "resource-intensive, meaning that tackling the issue is a challenge for any platform," said Yu-Lan Scholliers, head of product at Checkstep, a UK-based content moderation services firm.
With easy access to AI, "it's now much easier to generate real-looking but fake content - requiring more advanced detection mechanisms," said Scholliers, who previously worked in Meta's product data science team.
But even if platforms invested heavily in their trust and safety teams, the main challenge "is and will be adversarial behaviour - users always find more and more creative ways to avoid detection," she said.
"It is a whack-a-mole that can never be fully solved."