By Kurt Wagner and Jillian Deutsch
Meta Platforms Inc will begin labeling more posts that were created using artificial intelligence tools as part of a broader effort to prevent misinformation and deception from spreading on Facebook, Instagram and Threads during a critical election year.
Meta is working with other tech companies to create common technical standards for identifying these kinds of AI-generated posts, including adding invisible watermarking and metadata to images when they are created, the company said on Tuesday. Meta is then building software systems to detect these invisible markers so that it can label AI-created content even if it was made by a competing service.
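The detection side of this approach boils down to scanning an image file for provenance metadata left behind at creation time. The sketch below is a simplified illustration, not Meta's actual system: it stores and reads an IPTC-style `DigitalSourceType` value (`trainedAlgorithmicMedia` is the real IPTC term for AI-generated media) in a PNG `tEXt` chunk, whereas real implementations embed richer, signed metadata (e.g. C2PA manifests in XMP) and invisible pixel-level watermarks.

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC over type+data."""
    crc = zlib.crc32(ctype + body)
    return struct.pack(">I", len(body)) + ctype + body + struct.pack(">I", crc)

def png_text_chunks(data: bytes) -> dict:
    """Extract keyword/value pairs from a PNG file's tEXt chunks."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    found = {}
    pos = len(PNG_SIGNATURE)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt layout: keyword, NUL separator, value (Latin-1)
            keyword, _, value = body.partition(b"\x00")
            found[keyword.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
        if ctype == b"IEND":
            break
    return found

def looks_ai_generated(data: bytes) -> bool:
    """Check for the IPTC digital-source-type value marking AI media."""
    text = png_text_chunks(data)
    return text.get("DigitalSourceType") == "trainedAlgorithmicMedia"
```

A generator following the standard would write the marker at creation time, and a platform would check for it on upload; the hard cases the article describes are exactly the files where this metadata was never written or has been stripped.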
Nick Clegg, Meta’s president of global affairs, said that in the next several months he expects Meta will be able to detect and label images that were created using tools from several other AI-focused companies, including Alphabet Inc’s Google, OpenAI, Microsoft Corp, Adobe Inc, Midjourney, and Shutterstock Inc.
2024 is a busy year for elections, with voters in dozens of countries including the US, India, South Africa and Indonesia heading to the polls. While disinformation has been a challenge for voters and candidates for years, it has been turbocharged by the rise of generative AI tools that can create convincing fake images, text and audio.
“I do not want to claim for one moment that this will cover all our bases or cross all the T’s and dot all the I’s,” Clegg told Bloomberg in an interview. “But a flawed approach should not be an alibi for inaction.”
Meta’s system will initially be able to detect only AI-generated images created with other companies’ tools, not audio or video. Images generated by companies that don’t follow the industry standards, or images whose markers have been stripped out, will also be missed, although Meta is working on a separate way to detect those automatically.
Improving the detection of AI deepfakes is a top priority for Clegg as Meta prepares for elections, including in the US. Last month at the World Economic Forum in Davos, Switzerland, Clegg said that creating an industry standard around watermarking was “the most urgent task facing us today.”
Last month, a doctored audio message of US President Joe Biden alarmed disinformation experts, with many warning that AI-generated content could play a pivotal role in the upcoming election if it’s not labeled or removed quickly. Clegg said he’s optimistic that won’t happen given the level of focus on the issue.
The presidential candidates’ teams will be on the lookout for deepfakes and “they’re going to shout about it in the press and they’re going to pick up the phone to us,” he said. He added that while Meta does not fact-check original posts created by politicians, it will label AI-generated posts no matter who shares them.
On Monday, Meta’s Oversight Board published a critique of Meta’s manipulated media policy, arguing that it was too narrow and that the company needs to do a better job of labeling posts created by AI instead of trying to remove them. Clegg said he largely agreed with the board’s analysis, and believes the watermarking updates are a step in the right direction.
Eventually, as the web becomes flooded with AI-generated material, the industry will need to tackle the problem from the other side — that is, by labeling legitimate media as well, Clegg said.
“We’ll need to have a society-wide, or certainly industrywide, debate about the other end of the telescope, which is how do you flag for users the veracity or authenticity of non-synthetic content,” he said.