Partisanship, employee dissent and ‘corpses’: Inside Facebook’s fight to combat disinformation and hate speech in India

The fact-finding mission, described by one of the investigators in an internal document viewed by CNN, took place at a pivotal time for the country and for Facebook’s operations within it. India’s national elections, the largest in the world, were just a few months away, and Facebook was already preparing for potential trouble.

Against that backdrop, Facebook researchers interviewed more than two dozen users and found some underlying issues that could complicate efforts to curb misinformation in India.

“Users were explicit about their motivations to support their political parties,” the researchers wrote in an internal investigative report seen by CNN. “They were also skeptical of experts as reliable sources. Experts were seen as susceptible to suspect goals and motives.”

One person interviewed by investigators was quoted as saying: “As a supporter, you believe what your side says.” Another interviewee, referring to the popular but controversial Prime Minister of India, Narendra Modi, said: “If I receive 50 notifications from Modi, I will share them all.”

Indian Prime Minister Narendra Modi is a prolific user of social media.
The document is part of the disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by legal counsel to Facebook whistleblower Frances Haugen. A consortium of 17 American news organizations, including CNN, has reviewed the redacted versions received by Congress.
The conversations reveal some of the same social problems present in the United States that are often seen as products of algorithmic social media feeds, and as complications for any effort to improve them. These include nationalist parties, incendiary politicians, polarized communities, and a degree of distrust of experts. There have been widespread concerns globally that Facebook has deepened political divisions and that its fact-checking efforts often cause people to double down on their beliefs, and some of those concerns were reflected in the research report. (Most Indian respondents, however, also said they wanted Facebook to “help them identify misinformation on the platform.”)

Facebook also faced two fundamental problems in India that it did not have in the United States, where the company is based: understanding the country’s many local languages and overcoming mistrust of the company as an outsider.

In India, English literacy is estimated at about 10%. Facebook’s automated systems are not equipped to handle most of the country’s 22 officially recognized languages, and its teams often miss crucial local context, a fact highlighted in other internal documents and acknowledged in part by the misinformation researchers.

“We faced serious language problems,” the researchers wrote, adding that the majority of users they interviewed had their Facebook profiles set to English, “despite acknowledging how much it hinders their understanding and influences their confidence.”

Some Indian users interviewed by the researchers also said that they did not trust Facebook to provide them with accurate information on local issues. “Facebook was seen as a large international company that would be relatively slow to communicate the best information related to regional news,” the researchers wrote.

Facebook spokesperson Andy Stone told CNN Business that the study was “part of a larger effort” to understand how Indian users reacted to misinformation warning labels on content flagged by Facebook’s third-party fact-checkers.

“This work informed a change we made,” Stone said. “In October 2019 in the US, and then globally shortly after, we started applying more prominent labels.”

Stone said Facebook doesn’t break down content review data by country, but said the company has more than 15,000 people reviewing content around the world, “including in 20 Indian languages.” The company currently partners with 10 independent fact-checking organizations in India, he added.

Disinformation and hate speech warnings in Facebook’s largest market

India is a crucial market for Facebook. With more than 400 million users across the company’s various platforms, the country is Facebook’s largest individual audience.
India has more than 800 million Internet users and roughly 500 million people yet to come online, making it a centerpiece of Facebook’s drive for global growth. Facebook’s expansion in the country includes a $5.7 billion investment last year to partner with a digital technology company owned by India’s richest man.

But the size and diversity of the country, along with a surge in anti-Muslim sentiment under Modi’s right-wing Hindu nationalist government, have magnified Facebook’s struggles to keep people safe and served as a prime example of its missteps in more volatile developing countries.

India's hundreds of millions of new Internet users have made it a key element in Facebook's global expansion.
Documents obtained by CNN and other media outlets, known as The Facebook Papers, show company researchers and other employees repeatedly pointing out problems related to disinformation and hate speech in India.

For example, Facebook researchers published an internal report earlier this year on the Indian state of Assam, produced in partnership with local researchers from the organization Global Voices ahead of the April state elections. It noted concerns about “the spread of ethnic, religious and linguistic fear” directed at “targets perceived as ‘Bengali immigrants’” crossing the border from neighboring Bangladesh.

Local investigators found Facebook posts against Bengali speakers in Assam with “many racist comments, including some calling for Hindu Bengalis to be sent ‘back’ to Bangladesh or killed.”

“Bengali-speaking Muslims face the worst in Assam,” local researchers said.

The various Facebook platforms have more than 400 million monthly users in India.

Facebook researchers reported more hate speech and misinformation against Muslims across India. Other documents pointed to “a series of dehumanizing publications” comparing Muslims to “pigs” and “dogs” and false claims that “the Quran requires men to rape their female relatives.”

The company also faced language issues in those posts, with the researchers noting that “our lack of Hindi and Bengali classifiers means that much of this content is never flagged or processed.”

Some of the documents were previously reported by The Wall Street Journal and other news outlets.

“An Indian Test User’s Descent Into a Sea of Nationalist and Polarizing Messages”

Facebook’s efforts around the 2019 election largely seemed to pay off. In a May 2019 note, Facebook researchers praised the “40 teams and nearly 300 people” who ensured a “surprisingly quiet and uneventful election period.”

Facebook implemented two “break glass” measures to stop misinformation and removed more than 65,000 pieces of content for violating the platform’s voter suppression policies, according to the note. But the researchers also noted gaps, including on Instagram, which at the time lacked a misinformation reporting category and was not supported by Facebook’s fact-checking tool.

Furthermore, the underlying potential of Facebook’s platforms to cause division and real-world harm in India predated the elections and continued long after, as did internal concerns about it.

A February 2019 research note titled “An Indian Test User’s Descent Into a Sea of Nationalist and Polarizing Messages” detailed a test account created by Facebook researchers that followed the company’s recommended pages and groups. Within three weeks, the account’s feed was filled with “an almost constant barrage of polarizing nationalist content, misinformation, and violence and gore.”

Many of the groups had benign names, but investigators said they began sharing harmful content and misinformation, particularly aimed at citizens of India’s neighbor and rival Pakistan, after a February 14 terror attack in the Kashmir region disputed between the two countries.

“I have seen more images of dead people in the last 3 weeks than I have seen in my entire life,” wrote one of the researchers.

Facebook’s approach to hate speech in India has been controversial even among its own employees in the country. In August 2020, a Wall Street Journal report alleged that Facebook had failed to take action on hate speech posts by a member of India’s ruling party, prompting calls for change from many of its employees. (The company told the newspaper at the time that its leaders are “against anti-Muslim hatred and intolerance and welcome the opportunity to continue the conversation on these issues.”) In an internal comment thread days after the initial report, several of the company’s workers questioned, in part, its inaction on politicians sharing misinformation and hate speech.

“Since there are a limited number of politicians, it seems inconceivable to me that we don’t even have basic keyword detection set up to catch this kind of thing,” commented one employee. “After all, we cannot be proud as a company if we continue to allow such barbarism to flourish on our network.”
