<p><em><strong>By Emily Birnbaum and Laura Davison</strong></em></p>.<p>It’s a jarring political advertisement: Images of a Chinese attack on Taiwan lead into scenes of looted banks and armed soldiers enforcing martial law in San Francisco. A narrator insinuates that it’s all happening under President Joe Biden’s watch.</p>.<p>Those visuals in the Republican National Committee’s ad aren’t real, and the scenarios are pretty obviously fictional. But thanks to the handiwork of artificial intelligence, the images look like real life. Within days of the ad appearing online in April, Representative Yvette Clarke, a New York Democrat, introduced legislation to require disclosure of AI-produced content in political advertisements.</p>.<p>“This is going too far,” she said in an interview. Tiny type in the RNC ad reads, “Built entirely with AI imagery.” Clarke’s bill is going nowhere in a House controlled by Republicans, but it illustrates the degree to which the rapid advance of artificial intelligence has put Washington on its back foot.</p>.<p>Voters in the US and around the world are already inundated by AI-generated political content. Political consultants say that if you click on an email asking for donations, for example, you may be reading a message drafted by a so-called large language model — the technology behind ChatGPT, the wildly popular chatbot from startup OpenAI. Politicians also increasingly use AI to speed up mundane but critical tasks like analyzing voter rolls, assembling mailing lists and even writing speeches.</p>.<p>As in many industries, AI is poised to increase political workers’ productivity — and probably eliminate more than a few of their jobs. It’s hard to say how many, but the business of politics is full of the sorts of roles that researchers believe are most vulnerable to disruption by generative AI, such as legal professionals and administrative workers.</p>.<p>But even more ominously, AI holds the potential to supercharge the dissemination of misinformation in political campaigns. The technology can quickly create so-called “deepfakes,” fake pictures and videos that some political operatives predict will soon be indistinguishable from real ones, enabling miscreants to literally put words in their opponents’ mouths.</p>.<p>Deepfakes have plagued politics for years, but with AI, savvy editing skills are no longer required to create them.</p>.<p>Put to its best use, AI could improve political communications. For instance, upstart campaigns with little cash could use the technology to inexpensively produce campaign materials with fewer staff. Some political consultants who traditionally work only with presidential and Senate campaigns are making plans to use AI to serve smaller campaigns, offering more services at a lower price point.</p>.<p>And the tech industry is trying to combat deepfakes. Companies including Microsoft Corp. have pledged to embed digital watermarks in images created with their AI tools in order to identify them as AI-generated.</p>.<p><strong>‘Knife fight’</strong></p>.<p>In June, Florida Governor Ron DeSantis’s presidential campaign posted an online ad featuring AI-generated images of former President Donald Trump hugging and kissing Anthony Fauci.
The former director of the National Institute of Allergy and Infectious Diseases is a pariah among Republicans because of his public-health recommendations during the pandemic.</p>.<p>A fact-checking note was appended to the DeSantis campaign’s tweet saying that the images, mixed among real pictures and videos of Trump, were AI-created. DeSantis’s campaign didn’t initially identify them as fake.</p>.<p>In Germany, a far-right party recently distributed AI-generated images of angry immigrants without telling viewers that they weren’t actual photographs. That one was flagged on Twitter as well, but the incident shows how quickly the technology is being adopted for political messaging, and the risks that come with it, said Juri Schnöller, the managing director of Cosmonauts & Kings, a German political communication firm.</p>.<p>“AI can save or destroy democracy. It’s like a knife fight, right? You can kill someone, or you can make the best dinner,” Schnöller said.</p>.<p>Mix in Russian and Chinese disinformation mills and the concerns grow even more acute, misinformation experts say. Trolls and hackers in those nations already churn out propaganda and lies within their own borders and in countries around the world.</p>.<p>In February, Graphika, a misinformation-tracking firm based in the US, found a pro-Chinese influence operation spreading AI-generated video footage of fake news anchors promoting the interests of the Chinese Communist Party.</p>.<p>Rob Joyce, director of cybersecurity at the National Security Agency, said both nation-state actors and cybercriminals have begun experimenting with ChatGPT-like text generation to trick people online.</p>.<p>“That Russian-native hacker who doesn’t speak English well is no longer going to craft a crappy email to your employees,” Joyce said earlier this year. “It’s going to be native-language English, it’s going to make sense, it’s going to pass the sniff test.”</p>.<p>In March, an anonymous Twitter user posted an altered video that went viral, purporting to show Biden verbally attacking transgender people. Another, circulated widely by a right-wing US pundit, appeared to show Biden ordering a nuclear attack on Russia and sending troops to Ukraine.</p>.<p><strong>Falling behind</strong></p>.<p>Washington is bad at keeping up with emerging technology, much less regulating it. Despite agreeing broadly that Big Tech is too powerful, the two parties have for years been unable to pass any comprehensive legislation to rein in the industry. Between 2021 and 2022, Congress held more than 150 hearings on tech, with little to show for it.</p>.<p>In June, there was a briefing in the Senate called “What is AI?”</p>.<p>The US doesn’t have a federal privacy law and hasn’t updated antitrust laws to account for growing concentration in the tech industry. Lawmakers have been unable to agree on whether — or how — to regulate online speech.</p>.<p>Last month, the Federal Election Commission deadlocked 3-3 on a request to develop rules for AI-generated political ads. Republicans on the panel — which is evenly divided between the parties and routinely finds itself at an impasse on controversial matters — said the agency didn’t have explicit authority to issue such regulations.</p>.<p>Other countries are racing ahead on regulation, spurred into action by the ChatGPT craze. The European Parliament on June 14 voted to restrict the nascent technology’s most anxiety-inducing uses, such as biometric surveillance — AI that can identify people from their faces or bodies.
The law, still up for debate, could also require companies to reveal more information about the datasets used to train chatbots.</p>.<p>European officials are separately pressing companies including Alphabet Inc.’s Google and Meta Platforms Inc. to label content and images generated by artificial intelligence, in order to help combat disinformation from adversaries like Russia.</p>.<p>Chinese regulators are aggressively imposing new rules on technology companies to ensure Communist Party control over AI and related information available in the country. Every AI model must be submitted for government review before it is introduced to the market, and synthetically generated content must carry “conspicuous labels,” according to a Carnegie Endowment for International Peace paper this week.</p>.<p><strong>Cheaper campaigns</strong></p>.<p>In the best case, AI could make US political campaigns “a lot cheaper,” said Martin Kurucz, the chief executive of Sterling Data Company, which works with Democrats.</p>.<p>The technology is already used to help write first drafts of speeches and op-eds, create ads, draw up lobbying campaigns and more, according to lobbyists, campaign and congressional staffers and political consultants. Art generators like Midjourney, an AI program that creates hyper-realistic images from text prompts, have the potential to increase productivity or even replace the work of creative teams that can cost thousands of dollars.</p>.<p>While the RNC has already made an attack ad using generative AI, the Democratic National Committee is still experimenting with the technology. A spokesperson said the committee has sent out AI-automated fundraising emails and is considering how to expand its use of AI in the future.</p>.<p>On Capitol Hill, the House Chief Administrative Officer’s digital services office in April handed out 40 licenses for ChatGPT Plus, which House offices have used to help write emails, research briefs and even draft legislation, though writing full bills remains too complicated a task for generative AI. The House last month created new rules curtailing the use of ChatGPT in Congress, clarifying that staffers cannot put confidential information into the chatbot.</p>.<p>There’s some indication lawmakers are taking the threat of AI more seriously than they did previous technologies poised to upend politics.</p>.<p>After it became clear social media would play a vital role in politics, for example, lawmakers let a decade slide by before they summoned Mark Zuckerberg to testify at a hearing.</p>.<p>OpenAI CEO Sam Altman testified on the Hill in May, less than a year after ChatGPT was opened to the public. He told lawmakers that his industry desperately needs regulation and that he’s worried about nefarious uses of artificial intelligence.</p>.<p><strong>‘Won’t know the truth’</strong></p>.<p>OpenAI has noticed an uptick in the use of ChatGPT for political purposes, a company spokesperson said, and it has sought to get ahead of concerns that its product might be used to deceive voters.</p>.<p>The company published new guidelines in March prohibiting “political campaigning or lobbying” using ChatGPT — including generating campaign materials targeted at particular demographics or producing “high volumes” of materials.
Trust and safety teams at OpenAI are trying to identify political uses of the chatbot that violate the company’s policies, the spokesperson said.</p>.<p>The American Association of Political Consultants last month condemned the use of deceptive generative AI in political advertisements, calling it a “threat to democracy.” The group said it plans to call out and potentially sanction members who develop “deepfake” ads.</p>.<p>But in a society where access to AI tools is widespread and carries little cost, the worst actors are unlikely to be members of a professional association. Frank Luntz, a veteran Republican strategist, said he fears that AI technology will foment voter confusion in the 2024 US presidential contest.</p>.<p>“In politics, the truth is already in short supply,” he said. “Thanks to AI, even those who care about the truth won’t know the truth.”</p>