New Delhi: Meta, the parent company of Facebook, WhatsApp and Instagram, is facing criticism for approving several AI-manipulated political ads designed to spread hate during the ongoing Lok Sabha elections.
The investigation, carried out by Ekō, a corporate accountability organisation, and India Civil Watch International, found that Meta allowed AI-manipulated political ads that spread disinformation and incited religious violence, particularly targeting Muslims.
According to the two organisations’ report, Meta’s systems failed to block inflammatory ads designed to mimic real-life scenarios.
To test Meta’s mechanisms to block inflammatory content during the Lok Sabha elections, both organisations created and submitted 22 advertisements to Meta’s ad library, the database of all advertisements on Facebook and Instagram.
They said that all the advertisements, created through artificial intelligence, violated Meta’s policies on hate speech and misinformation.
Fourteen of the 22 advertisements, submitted in English, Hindi, Bengali, Gujarati and Kannada, were approved by Meta within 24 hours, the organisations said.
The investigation was carried out from May 8 to May 13, between the third and fourth phases of the Lok Sabha election.
"All of the approved ads broke Meta’s policies on hate speech, bullying and harassment, misinformation, and violence and incitement," the organisations claimed.
The advertisements targeted Muslims with inflammatory references such as "Let’s burn this vermin" and "Hindu blood is spilling, these invaders must be burned".
Only five ads were rejected for violating Meta’s community standards on hate speech and violence.
The organisations claimed that despite ample evidence of systemic failures and tangible harms documented over the years, Meta has failed to implement substantial corrective measures, and that ads containing highly inflammatory hate speech, violent rhetoric and disinformation continue to pass through its approval system.
Despite Meta’s claims that it prioritises the detection, labelling and removal of violative AI-generated content, the investigation indicated otherwise. Every approved ad featured AI-manipulated content without any corresponding label, reinforcing concerns from independent experts that social media companies are not equipped to deal with the risks of generative AI spreading disinformation, the organisations claimed.
Meta had earlier claimed it would take all steps to crack down on misinformation and the misuse of AI-generated content during the general elections in India.
In response to the report, a Meta spokesperson told The Guardian that people who wanted to run ads about elections or politics “must go through the authorisation process required on our platforms and are responsible for complying with all applicable laws”.
The Guardian, which first published the report, quoted the US-based social media giant as saying: “When we find content, including ads, that violates our community standards or community guidelines, we remove it, regardless of its creation mechanism. AI-generated content is also eligible to be reviewed and rated by our network of independent fact-checkers – once content is labelled as ‘altered’ we reduce the content’s distribution. We also require advertisers globally to disclose when they use AI or digital methods to create or alter a political or social issue ad in certain cases.”