Be aware: Fake Twitter accounts will very likely sow disinformation in the few remaining days before Election Day on Nov. 3.

This week, researchers at the University of Southern California released a new study that identified thousands of automated accounts, or “bots,” on Twitter posting information related to President Donald Trump, Joe Biden and their campaigns. The study examined over 240 million election-related tweets from June through September.

Many of these bots, the study said, spread falsehoods related to the coronavirus and far-right conspiracy theories such as QAnon and “pizzagate.” The study said that bots accounted for 20% of all tweets involving these political conspiracy theories.

“These bots are an integral part of the discussion” on social media, said Emilio Ferrara, the University of Southern California professor who led the study.

A Twitter spokesman questioned the study’s methods. “Research that uses only publicly available data is deeply flawed by design and often makes egregiously reductive claims based on these limited signals,” the spokesman said. “We continue to confront a changing threat landscape.”

Social media companies such as Twitter and Facebook have long worked to remove this kind of activity, which has been used by groups trying to foment discord in past elections in the United States and abroad. And the University of Southern California study showed that about two-thirds of the conspiracy-spreading bots it identified were no longer active by the middle of September.

In some cases, bots exhibit suspicious behavior. They might “follow” an unusually large number of other accounts, nearly as many as follow them back, or their usernames will include random digits.

But identifying bots with the naked eye is far from an exact science, and researchers say that automated accounts have grown more sophisticated in recent months. Typically, they say, bots are driven by a mix of automated software and human operators, who work to orchestrate and vary the behavior of the fake accounts to avoid detection.

Some bots show signs of automation, such as only retweeting rather than posting new material, or posting at an unusually high rate. But it can be difficult to definitively prove that an account is inauthentic, researchers say. An automated account may stop tweeting at night, for example, as if there were a person behind it who is sleeping.
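The study does not publish its detection method, but the red flags described above can be combined into a rough screening score. The sketch below, in Python, is purely illustrative: every threshold and weight is an assumption chosen for demonstration, not a value taken from the researchers' work.

```python
import re
from dataclasses import dataclass

@dataclass
class AccountStats:
    username: str
    following: int           # accounts this user follows
    followers: int           # accounts that follow this user
    retweet_fraction: float  # share of recent posts that are retweets (0..1)
    tweets_per_day: float    # average posting rate over the sample window

def heuristic_bot_score(a: AccountStats) -> float:
    """Combine the behavioral red flags described in the article into a
    crude 0-to-1 score. All thresholds here are illustrative assumptions."""
    score = 0.0

    # Follows an unusually large number of accounts, nearly as many as
    # follow it back (a mass follow-for-follow pattern).
    if a.following > 1_000 and a.followers > 0 and 0.8 < a.following / a.followers < 1.25:
        score += 0.25

    # Username ends in a long run of digits, as auto-generated handles often do.
    if re.search(r"\d{4,}$", a.username):
        score += 0.25

    # Almost exclusively retweets instead of posting new material.
    if a.retweet_fraction > 0.9:
        score += 0.25

    # Posts far more often than a typical human account.
    if a.tweets_per_day > 72:  # one post every 20 minutes, around the clock
        score += 0.25

    return score

# Example: a hypothetical account that trips three of the four flags.
suspect = AccountStats(username="newsfan19084412", following=4900,
                       followers=5100, retweet_fraction=0.97,
                       tweets_per_day=40)
print(heuristic_bot_score(suspect))  # 0.75
```

As the researchers quoted below point out, each of these signals alone is weak, and a human operator can vary an account's behavior to stay under any fixed threshold, which is why real detection tools rely on machine-learned classifiers over many more features.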
“You can clearly see they are automated,” said Pik-Mai Hui, an Indiana University researcher who has helped build a new set of tools that aim to track these bots in real time. “But they are operated in a way that makes it very difficult to say with complete certainty.”

These bots are operating on both sides of the political spectrum, according to the study from the University of Southern California. But right-leaning bots outnumbered their left-leaning counterparts by a ratio of 4-to-1, and the right-leaning bots were more than 12 times as likely to spread false conspiracy theories.

The study indicates that 13% of all accounts tweeting about conspiracy theories are automated, and because they tweet at a higher rate, they produce a much larger share of the overall material.

“This is the most concerning part,” Ferrara said. “They are increasing the effect of the echo chamber.”
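Those two figures, 13% of accounts but 20% of conspiracy-related tweets, imply a concrete posting-rate gap. A back-of-the-envelope check, where the 1.7x multiplier is an assumption chosen to match the article's numbers rather than a value reported by the study:

```python
# Back-of-the-envelope: if 13% of the accounts are bots and each bot
# posts k times as often as a human-run account, the bots' share of
# all tweets is  0.13 * k / (0.13 * k + 0.87).
bot_accounts = 0.13
k = 1.7  # assumed bot-to-human posting-rate ratio (not from the study)
share = bot_accounts * k / (bot_accounts * k + (1 - bot_accounts))
print(f"{share:.1%}")  # -> 20.3%, in line with the 20% of tweets cited above
```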