India, which is set to become the world’s most populous nation in 2023, is a critical market for Meta. In an email interview with DH’s Dhanya Skariachan, Natasha Jog — Head, Public Policy, Instagram and Policy Programs, Facebook India (Meta) — shed light on the company’s efforts to improve user safety. Edited excerpts.
What is Meta’s future blueprint for user safety across platforms in India?
India today has more than 750 million internet users, and by 2024 this number is expected to reach 900 million. With India at the cusp of becoming a $1 trillion digital economy, there is a need to protect new as well as existing users from various kinds of harm. We therefore take our role in keeping the platform safe seriously, and have invested more than $16 billion since 2016 in improving the safety and security of users across Meta’s technologies.

To build on this commitment, we recently launched #DigitalSuraksha, our umbrella safety campaign to create an ecosystem that is safe and inclusive for women, youth and all other users. The first phase of #DigitalSuraksha includes a partnership with Delhi Police to provide digital literacy to users in Delhi. The current phase also includes several measures to enable digital education and literacy around youth well-being, child safety and tackling misinformation, as well as educating users about the safety tools and resources available across Meta’s platforms. We have also partnered with the Ministry of Electronics and Information Technology (MeitY) for the G20 ‘Stay Safe Online’ campaign, under which we are creating and sharing resources on our platforms in multiple Indian languages to spread awareness on how to stay safe online.

Throughout this year, Meta will continue to add initiatives to the #DigitalSuraksha campaign, geared towards reaching a larger set of users by educating them, creating more awareness and advancing India’s agenda of digital inclusion and growth.
Meta’s user base seems to have taken a hit in recent times, with media reports suggesting that many users have left the platform over safety and privacy concerns, nudity, language barriers and its diminishing appeal among people seeking video content. How do you plan to win them back?
This is simply not true. Our user base on all three apps is growing in India. On Facebook, we now reach 2 billion daily active users globally, which is the highest ever, and in India more than 440 million people access Facebook every month. In fact, we have seen steady growth in women entrepreneurs using our apps in India over the past three years. As much as 40% of Facebook groups related to entrepreneurship have been created by women, 53% of all business pages on Facebook have female admins, and on Instagram, 73% of business accounts that self-identify as women-owned have been set up in the last three years. Further, 23% of Spark AR creators publishing effects for Facebook and Instagram are women.
How can women users protect themselves from online harassment?
We don’t tolerate bullying or harassment on our platforms and have introduced comment moderation, reporting and appeal tools so people can better control unwanted, offensive or hurtful experiences on Facebook.

Among the tools women can use is ‘Limits’ on Instagram, which was launched to help protect people when they experience or anticipate a rush of abusive comments and DMs. It automatically hides comments and DM requests from people who don’t follow you or have only recently started following you. Also on Instagram, you can hide message requests that contain offensive terms by creating a custom list of words, phrases or emojis; these requests go to a separate folder, so people can avoid being confronted by language they don’t want to see. On Facebook, users can turn on the Profanity Filter settings and create a list of blocked keywords: comments containing those keywords do not appear on posts, and users can block up to 1,000 keywords in any language on their page.

With DM controls, you can select who can reach you via Direct Message on Instagram, and on Facebook you can control whether visitors can post on your page. Through comment controls on Instagram, users can manage unwanted interactions in comments in bulk. We have also launched a Women’s Safety Hub in Hindi and 11 other Indian languages.
How safe are the pictures shared on social media platforms?
In India, both people who use our services and safety advocates told us that some women in the country choose not to share profile pictures that include their faces because they’re concerned that someone might misuse them. Acting on this feedback, we introduced ‘Lock Your Profile’. When users lock their profile, people who are not their friends have only a limited view of their Facebook content, and cannot zoom into, share or download their profile photos. We also launched StopNCII to combat the spread of non-consensual intimate images (NCII). Built in partnership with the UK’s Revenge Porn Helpline, StopNCII.org builds on Meta’s NCII Pilot, an emergency program that allows potential victims to proactively hash their intimate images so they can’t be proliferated on Meta’s platforms. People on Instagram own their photos and videos, but like anything posted publicly on the internet, there is a risk that others will use that information or content without permission. Our approach is to give people the controls to protect their content and accounts.