<p>Did Facebook turn a blind eye to a network of inauthentic accounts generating fake engagement - likes, shares, comments - to boost the popularity of a BJP MP on the platform?</p>
<p>A report in <em>The Guardian</em>, citing internal documents it accessed, has <a href="https://www.theguardian.com/technology/2021/apr/15/facebook-india-bjp-fake-accounts" target="_blank">claimed</a> that the social media giant, which had been preparing to remove the fake accounts, paused the action when it found links between the politician and the network.</p>
<p>The social networking site allowed the network of fake accounts to artificially inflate the popularity of the MP for months even after it was alerted to the issue, according to the publication.</p>
<p>Sophie Zhang, a former data scientist at Facebook who unearthed the network, told the publication that all major political parties in India had benefited from techniques to generate fake likes, comments, shares or fans. She added that ahead of the 2019 Lok Sabha elections, she had worked on taking down low-quality scripted fake engagement on political pages across all parties, which led to the removal of 2.2 million reactions, 1.7 million shares and 330,000 comments from inauthentic or compromised accounts.</p>
<p>In December 2019, Zhang discovered four manually controlled networks of fake accounts with links to prominent political leaders. Two of the four were engaged in supporting the Congress, while the other two worked to boost the BJP, including the MP in question.</p>
<p>The publication said that an investigator from Facebook's threat intelligence team recommended that the accounts be sent through an identity 'checkpoint' – a process by which suspicious accounts are locked until the account owner can provide proof of their identity. One of Facebook's staffers began checkpointing more than 500 such accounts related to three of the four networks, but halted when he got to the 50 accounts tied to the fourth. He flagged the issue on Facebook's task management system, writing, "Just want to confirm we're comfortable acting on those actors." He also noted that one of the accounts had been tagged by Facebook's 'Xcheck' system, which is used to flag prominent accounts and exempt them from certain automated enforcement actions, as a 'Government Partner' and 'High Priority – Indian'.</p>
<p>Zhang told the publication that the 'Government Partner' account was that of the MP, which indicated that either the MP or someone with access to his account was running the network of fake accounts.</p>
<p>She said that after the detection she had sought permission multiple times to go ahead with the checkpoint but never got a response.</p>
<p>"It seemed quite concerning to myself because the fact that I had caught a politician or someone associated with him red-handed was more of a reason to act, not less," said Zhang.</p>
<p>"It's not fair to have one justice system for the rich and important and one for everyone else, but that's essentially the route that Facebook has carved out," she said.</p>
<p>In response to the allegations, Facebook spokesperson Liz Bourgeois said: "We fundamentally disagree with Ms Zhang's characterisation of our priorities and efforts to root out abuse on our platform. We aggressively go after abuse around the world and have specialised teams focused on this work. Over the years, our teams investigated and publicly shared our findings about three CIB takedowns in India. We've also continuously detected and taken action against spam and fake engagement in the region, in line with our policies."</p>
<p>When <em>The Guardian</em> asked Facebook about the suspicious network, the company said that "a portion" of the cluster had been disabled in May 2020 and that it was continuing to monitor the rest of the network's accounts. It later said that a "specialist team" had reviewed the accounts and that a small minority of them had not met the threshold for removal but were nevertheless now inactive.</p>
<p><em>(DH could not independently verify this report)</em></p>