Facebook Fueling Violence And Instability Across The Globe


Recent revelations show that Facebook has been fueling violence and instability across the globe. As its algorithms amplify divisive content and its moderation efforts prove ineffective or negligent, hate speech proliferates across the platform. Although Facebook was well aware of these “real world” harms, the company willingly disregarded them in pursuit of profit.
The revelations come from The Wall Street Journal, which has published a series of articles reviewing leaked company documents titled “The Facebook Files.” The documents – including internal research reports, employee discussions, and draft presentations to senior management – were leaked to the Journal by Frances Haugen, a former product manager at Facebook, who left the company this May.
On the 5th of October, Haugen testified before a U.S. Senate subcommittee on the leak. While the testimony, and subsequent coverage, was principally concerned with the effects of social media on children, Haugen outlined far broader concerns. Facebook, she said, was ‘tearing apart our democracy, putting our children in danger and sowing ethnic violence around the world.’ In Myanmar, India, and Ethiopia, the platform has provided a vehicle for hate speech and incitement to violence – often with lethal consequences.
Facebook insists that it is ‘opposed to hate speech in all its forms.’ Responding to The Wall Street Journal, spokesman Andy Stone stressed that the company has invested significantly in technology to find hate speech across the platform, and noted that such content has been declining on Facebook globally. Stone even seemed to challenge the extent to which Facebook was responsible. Given its global audience, he argued, ‘everything that is good, bad and ugly in our societies will find expression on our platform.’ Hatred may be an inevitable reality in our societies, but such assertions understate Facebook’s role in spreading it. As Haugen explained in her testimony, the problem ‘is not simply a matter of certain social media users being angry or unstable,’ but of algorithms designed by Facebook that amplify divisive content through “engagement-based ranking.”
Across the platform, content is ranked according to user engagement, which Facebook terms “meaningful social interaction,” or MSI. Effectively, posts that attract more likes, comments, and shares are adjudged to have generated more MSI. The algorithm then organizes the “News Feed” to promote content with higher MSI, giving these posts greater visibility on the site.
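To make the mechanics concrete, the sketch below is a minimal, hypothetical illustration of engagement-based ranking. The weights, field names, and example posts are illustrative assumptions, not Facebook’s actual MSI formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    comments: int
    reshares: int

# Hypothetical weights: interactions that take more effort (comments, reshares)
# are assumed to count for more "meaningful social interaction" than likes.
WEIGHTS = {"likes": 1.0, "comments": 5.0, "reshares": 10.0}

def msi_score(post: Post) -> float:
    """Toy proxy for an engagement ("MSI") score; not Facebook's real formula."""
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["reshares"] * post.reshares)

def rank_feed(posts):
    """Order the feed so the highest-engagement posts appear first."""
    return sorted(posts, key=msi_score, reverse=True)

feed = [
    Post("measured-news-report", likes=120, comments=4, reshares=2),
    Post("incendiary-rumour", likes=60, comments=90, reshares=40),
]
for post in rank_feed(feed):
    print(post.post_id, msi_score(post))
```

In this toy feed, the post that provokes the most comments and reshares outranks a calmer post with more likes – the dynamic critics argue rewards incendiary material.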
In 2018, when this system was introduced, Mark Zuckerberg framed the change as promoting ‘personal connections’ and improving the ‘well-being and happiness’ of users. Instead, internal research found that the change cultivated outrage and hatred on the platform. Because content that elicits an extreme reaction is more likely to get a click, comment, or re-share, incendiary posts generate the most MSI. The algorithm accordingly amplifies this content across the platform, rewarding divisive material like misinformation, hate speech, and incitement to violence. Such a system entails “real world” consequences. ‘In places like Ethiopia,’ Haugen claimed, ‘it is literally fanning ethnic violence.’
Facebook has long been well aware of the impacts associated with its algorithm. Yet, executives have repeatedly disregarded them. In her testimony, Haugen related one such instance, alleging that, in April 2020, Mark Zuckerberg was presented with the option to remove MSI but refused. Zuckerberg purportedly even rejected calls to remove it from Facebook services in countries at risk of violence, including Ethiopia, citing concerns that it might lead to a loss in engagement – despite escalating ethnic tensions in the region. These tensions culminated in the ongoing Tigray conflict. As the hostilities unfolded, groups turned to Facebook, using the platform as a vehicle to incite violence and disseminate hate speech.
When Haugen recounted the role of Facebook in Ethiopia, it prompted outrage from Senator Maria Cantwell. As Cantwell recalled, it was not the first time the company had been implicated in ethnic violence in the developing world. In 2018, UN investigators blamed Facebook for playing a ‘determining role’ in the Rohingya crisis. As in Ethiopia years later, the platform provided groups in Myanmar with a vehicle to sow hatred and encourage violence. For over half a decade, the Myanmar military used Facebook to orchestrate a systematic propaganda campaign against the Rohingya minority, portraying them as terrorists and circulating misinformation about imminent attacks. When the crisis began in August 2017, hate speech exploded on the platform as the Rohingya were subjected to forced labour, rape, and extrajudicial killings, and more than 700,000 people were displaced. Facebook ultimately issued an apology for its failure to adequately respond to the crisis and pledged that it would do more. But it seems to have neglected those promises in Ethiopia and elsewhere.
In India, incendiary content similarly proliferates across Facebook services, exacerbating the deep-seated social and religious tensions that divide the nation. In a 2019 internal report, researchers set up a test account as a female user. After the account followed pages and groups recommended by the algorithm, its News Feed became a ‘near constant barrage of polarizing nationalist content, misinformation, and violence.’ In another internal report, the company collected user testimonies to assess the scale of the problem. ‘Most participants,’ the report found, ‘felt that they saw a large amount of content that encourages conflict, hatred and violence.’
Facebook insists that it has a ‘comprehensive strategy’ to keep people safe on its services, with ‘sophisticated systems’ in place to combat hate. But these accounts highlight continued failings in its efforts to moderate content – particularly in developing countries. While these markets now constitute Facebook’s principal source of new users, the company continues to commit fewer resources to content moderation there. In 2020, Facebook employees and contractors spent over 3.2 million hours investigating and addressing misinformation on the platform. Only 13% of that time was dedicated to content from outside the U.S., even though Americans make up less than 10% of the platform’s monthly users.
Meanwhile, the automated systems that Facebook has repeatedly lauded as the solution to its problem with hate continue to prove ineffective. Facebook’s own researchers estimate that its A.I. addresses less than 5% of hate speech posted on the platform, while in places like Ethiopia and India the company has not even built detection systems for several local languages, allowing dangerous content to circulate effectively unmoderated despite real threats of violence.
More serious still, even where this content is identified, Facebook’s response is often inconsistent. The company has shown itself willing to bend its own rules in favour of elites to avoid scandal, even if that means leaving incendiary material on its platform. In one instance, it refused to remove a Hindu nationalist group, Rashtriya Swayamsevak Sangh (or RSS), despite internal research highlighting its role in promoting violence and hate speech towards Muslims. A report cited ‘political sensitivities’ as the basis for the decision. India’s Prime Minister, Narendra Modi, worked for the RSS for decades, and in the past year has used threats and legislation as part of a wider attempt to exercise greater control over social media in the country.
‘At the heart of these accusations,’ wrote Zuckerberg in response to Haugen’s testimony, ‘is the idea that we prioritize profit over safety and well-being. That’s just not true.’ Yet these findings show that Zuckerberg and other executives repeatedly chose not to address harms linked to Facebook. Rather than learn from its failings in places like Myanmar, the company continued to prioritize profit and growth, ignoring the human costs.
Unless the incentives underpinning the economics of social media change radically, there is little chance that Facebook will pursue the necessary changes on its own. As the resilience of its share price despite the leak shows, moral integrity does not translate into profit. Regulation is a necessity.
Some states have already taken some form of regulatory action, with more in the pipeline: among U.S. lawmakers, calls to reform Section 230 are increasingly prominent; the European Union is negotiating its proposed Digital Services Act; and in the UK, a draft Online Safety Bill is currently being scrutinized by Parliament.
However, regulation is complex, with diverse approaches entailing distinct legal, administrative, and ethical challenges. Pre-eminent among the concerns of policymakers must be freedom of expression. This is particularly pertinent for regulation that proposes to establish rules around content moderation, especially rules that cover “harmful” (though not illegal) content – like vaccine misinformation. By requiring social media companies to take down content deemed harmful by the state, such policies could set dangerous precedents that threaten the freedoms of citizens. Proponents of these policies rightly insist that a balance must be struck between freedoms and potential harms. But from a global perspective, it is an especially precarious balance. In more authoritarian states, regulations of this sort might serve less as a means to reduce the harms of social media than as a tool for silencing dissent. Modi’s attempts to bully social media companies into taking down content related to the Farmers’ Protests should give cause for caution.
An internationally harmonized approach to regulation (like the OECD global tax deal) might blunt potential regulatory excesses, but any agreement would need to be “content-neutral” if it is to be practicable internationally. As attitudes towards policing speech vary massively worldwide, neutrality is the only viable option. Indeed, anything else is unlikely to survive constitutional scrutiny in the U.S.
However, a content-neutral approach is not necessarily an ineffective one. As outlined in this report, one critical element in the problems surrounding Facebook is its algorithm. “Engagement-based” ranking has been shown to amplify incendiary content, and in doing so to foster division and sow the seeds of violence. But there are alternatives. Organizing social media feeds chronologically, for instance, would not limit freedom of expression online, but it would prevent the disproportionate amplification of hate. Policymakers could push social media companies towards such alternatives by making them liable for any illegal content amplified on their services; since no system of content moderation could identify every instance, the companies would likely be forced to scrap algorithmic feeds altogether. This would address the fundamental problem with Facebook: not that hatred exists on the platform (as it inevitably does), but that it is given so much reach.
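As a rough, self-contained illustration of that alternative (the names and example data are hypothetical), a chronological feed simply orders posts by recency and ignores engagement entirely:

```python
from datetime import datetime, timezone
from typing import NamedTuple

class Post(NamedTuple):
    post_id: str
    created_at: datetime
    engagement: int  # likes + comments + reshares; deliberately unused below

def chronological_feed(posts):
    """Newest-first ordering: every post is treated the same,
    no matter how much outrage (engagement) it attracts."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

feed = [
    Post("incendiary-rumour", datetime(2021, 10, 1, 9, 0, tzinfo=timezone.utc), 5000),
    Post("local-news-update", datetime(2021, 10, 1, 12, 0, tzinfo=timezone.utc), 40),
]
for post in chronological_feed(feed):
    print(post.post_id)
```

Here the highly engaging rumour gains no extra reach; it simply appears in the order it was posted.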
As is evident in Myanmar, Ethiopia, and India, the prominence given to this hatred has “real world” consequences. Executives at Facebook were well aware of these consequences but neglected to act upon them, prioritizing profit over people. It is time for regulators to act and put people first.
 