In Ethiopia, Facebook allowed posts inciting violence to go viral for years. The company's response is both dismissive and ineffective.


Facebook has been ignoring its failures in the country for years. Now — even after being publicly called out — the company is still doing the bare minimum to save face.
Rhea Bhatnagar

Facebook’s lax approach to content moderation has resulted in the incitement of violence and the spread of hate speech in Ethiopia — despite the company designating the nation “at risk” and repeatedly proposing to create resources for proper content moderation. Now, the company continues to placate its users by having high-level officials give vague statements and make empty promises while demonstrating minimal actual progress. 
Ethiopia — with approximately 110 million people and over 11 million Facebook users — has been in a state of civil unrest for almost a year. In 2018, Prime Minister Abiy Ahmed came to power and was initially praised for promoting civil liberties, including releasing political prisoners and lifting restrictions on media. However, long-simmering political tensions quickly returned between government forces and the Tigray People’s Liberation Front, which was in power prior to Ahmed’s election, eventually leading to violence. 
The fighting since then has displaced over 1.7 million people in the Tigray region, and the United Nations aid chief warned in October that hundreds of thousands there now face famine. Though human rights groups have found both sides to be responsible for atrocities, the United Nations refugee agency, UNHCR, reported that government forces have committed massacres of ethnic Tigrayans and weaponized sexual assault against thousands of women. Now, the United States government is reportedly considering labeling the actions by Ahmed’s government as a genocide. 
The conflict has also included an escalation of hate speech and incitement to violence online — much of which takes place on Facebook, the most popular social media platform in Ethiopia, where it is regarded as “basically equal to the internet.”  
Activists and officials alike have voiced concerns over the company’s lack of moderation within the country, especially as the conflict began to devolve into violence. In June 2020, popular activist and singer Hachalu Hundessa was assassinated after being targeted by a social media-based misinformation campaign. Afterward, there were violent protests throughout the country, resulting in several deaths and injuries.
This was not the first instance of violence in Ethiopia resulting from misinformation spreading on Facebook. According to Vice News, “In October 2019, a viral Facebook post led to the deaths of over 80 people, and in May [2020], the U.N. published a report highlighting the dangers of hate speech on its platform.” 
It is difficult to overstate the scope of Facebook’s role as a source of misinformation in the country. In an April study examining “fake news misinformation and hate speech in Ethiopia,” the European Institute of Peace found that Facebook was responsible for a majority (58%) of all sampled fake news stories in Ethiopian media, while the platform alone accounted for almost 80% of such examples on social media. 
And this problem is not new, either. Ethiopian political mis- and disinformation campaigns have spread for years on Facebook. On the platform, disputes between rival politicians have “ushered an intense wave of political polarization freighted with misinformation,” and the ruling party’s manipulation tactics used to sway public opinion also “serve[d] as a blueprint for opposition groups to attack their opponents and the government.” 
In July 2020, digital rights group Access Now wrote an open letter to Facebook pointing out the platform’s shortcomings in the region and reminding it of its past failures in countries such as Myanmar — where the company’s lack of moderation helped fuel ethnic violence. The group listed specific areas in which Facebook could improve itself to further benefit its users in Ethiopia, such as facilitating access for trusted partners during any internet shutdowns and informing Ethiopian users about reporting mechanisms and relevant platform rules.
Miranda Sissons, director of human rights policy & engagement at Facebook, wrote back nearly four months later, claiming Ethiopia was on the platform’s “highest priority tier” and that the company had “researched, built, and deployed multiple interventions” to help minimize “inflammatory content.”
But Facebook has been making vague promises similar to Sissons’ statement for years and has refused to be transparent about the steps it is taking to improve its content moderation in Ethiopia.
Following last month’s release of the Facebook Papers — internal documents made public by whistleblower Frances Haugen showing the company’s myriad failures to moderate its platforms — a spokesperson for Facebook claimed:
Over the past two years, we have actively focused and invested in Ethiopia, adding more staff with local expertise, operational resources, and additional review capacity to expand the number of local languages we support to include Amharic, Oromo, Somali, and Tigrinya. We have worked to improve our proactive detection so that we can remove more harmful content at scale. We have also partnered extensively with international and local experts to better understand and mitigate the biggest risks on the platform.
But researchers and activists are quick to note that Facebook has done the bare minimum to stop hate speech from circulating on its platform.
In 2019, the company opened its first content moderation center in Africa, claiming it would employ 100 people through third-party services to cover all markets on the continent. However, the company did not divulge which languages it would be monitoring and how those services would be allocated; Ethiopia alone has 89 native languages. As recently as September, the platform still hadn’t translated its Community Standards into languages used in Ethiopia.
According to CNN, Facebook has also partnered with “AFP Fact Check and PesaCheck, an East Africa-based non-profit initiative run by Code for Africa,” to hire six full-time fact-checkers to cover four of the country’s languages, one of whom recently had to relocate from Ethiopia “due to intimidation.”
And the company’s AI monitoring system is not able to understand the cultural context behind varying slurs or statements posted to its platform, allowing conspiracy theories to persist. Vice News reports that “an internal audit in 2020 found that Facebook did not have automated detection systems for flagging hate speech in either of Ethiopia’s largest languages, Amharic and Oromo.” (The April study by the European Institute of Peace found that “the vast majority of the fake news examples” — 81% — were in Amharic.)
Berhan Taye, an activist in the country and Africa policy lead at Access Now, insists that Facebook needs more moderators who understand what is happening in Ethiopia. 
“When the violence erupts offline, online content that calls for ethnic attacks, discrimination, and destruction of property goes viral,” Taye told Vice News. “Facebook’s inaction helps propagate hate and polarization in a country and has a devastating impact on the narrative and extent of the violence.”
In a recent call with high-level officials at Facebook, a reporter asked how the company calculates its metrics abroad and if it has plans to publicize these figures in the near future. Facebook Vice President of Integrity Guy Rosen gave a non-answer, responding that “more country level metrics” are “definitely something that’s top of mind for us and something that we’re – that we’re going to look into in the future. And we know that it’s something that’s valuable to people and we’re looking to share more.”
But Facebook’s lack of upfront transparency and its lax approach to content moderation in non-English languages have already proven disastrous in Ethiopia. Despite clear examples of the platform’s missteps and mistakes, the company continues to feign interest in solving these problems only after its failures have been publicly reported.