By Kaleigh Rogers
Filed under Technology
ILLUSTRATION BY EMILY SCHERER
Facebook’s algorithm[1] is its superpower — and its kryptonite. Yes, it leads to higher engagement that earns the company billions of dollars, but it’s also tied to some of the company’s biggest scandals. Last month, when the Facebook Papers — a trove of leaked corporate documents provided to reporters and Congress — were released, a mountain of news coverage blamed the algorithm for spreading misinformation and divisive content, radicalizing users and failing to protect them from some of the most graphic content on the site.
If the algorithm is to blame, can Facebook change the algorithm to make it better? What would that look like? To find out, I interviewed 12 leading experts on data and computer science, as well as former Facebook employees, and asked them to propose changes that could help the algorithm suck less. What I got was a range of ideas about how Facebook could start to solve this problem, or whether a solution is even possible. Some are more radical than others, so I’ve categorized these ideas from mild to spicy (though we know Facebook CEO Mark Zuckerberg prefers it sweet).
Many experts pointed out that, along with identifying some of the problems with the algorithm, the Facebook Papers also included a number of possible solutions.
“Some of the internal research found shockingly simple tweaks [to improve the algorithm],” said Noah Giansiracusa, a mathematics professor at Bentley University and author of “How Algorithms Create and Prevent Fake News.” “For example, if you limit the number of reshares, that will actually reduce the amount of disinformation.”
Resharing is a crucial way that Facebook gets engaging content into users’ newsfeeds. It allows content to travel through Facebook networks to get in front of users who wouldn’t otherwise see it, and it’s how you wind up with viral content, for better or for worse. Many of the experts I spoke to mentioned creating “friction” or “speed bumps” to slow down bad content — like disinformation, hate speech or extreme content — before it goes viral. Internal research at Facebook found that limiting “deep reshares” (where content is reshared not only by the original poster’s network of friends or followers, but also their friends’ friends, and their friends’ friends’ friends, and so on) of political content could reduce misinformation shared via external links by 25 percent and reduce misinformation found in images (think misleading memes) by half. Facebook implemented these changes, but only temporarily. The site does moderate misleading content by removing it if it violates the site’s community standards, and downranking content that the algorithm deems likely to be harmful, but the experts I spoke to still felt there was a gap where harmful content is slipping through.
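To make the idea concrete, here is a rough sketch, in Python, of what a reshare-depth cap might look like. The data model, the `can_reshare` check and the cutoff of two hops are all invented for illustration; none of this is Facebook's actual code.

```python
# A minimal sketch of "friction" via a reshare-depth cap.
# The data model and the threshold are illustrative assumptions,
# not Facebook's actual sharing code.

from dataclasses import dataclass
from typing import Optional

MAX_RESHARE_DEPTH = 2  # hypothetical cutoff: beyond friends-of-friends, no reshare button


@dataclass
class Post:
    post_id: str
    author: str
    reshare_of: Optional["Post"] = None  # the post this one reshared, if any

    @property
    def reshare_depth(self) -> int:
        """How many reshares separate this post from the original."""
        depth, current = 0, self.reshare_of
        while current is not None:
            depth += 1
            current = current.reshare_of
        return depth


def can_reshare(post: Post) -> bool:
    """Hide the share button once content is already a 'deep reshare'."""
    return post.reshare_depth < MAX_RESHARE_DEPTH


# Example: an original post, reshared twice, can no longer be reshared.
original = Post("p1", "alice")
first_hop = Post("p2", "bob", reshare_of=original)
second_hop = Post("p3", "carol", reshare_of=first_hop)
print(can_reshare(original))    # True
print(can_reshare(second_hop))  # False: friction kicks in
```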
Karen Kornbluh, a senior fellow and the director of the Digital Innovation and Democracy Initiative at the German Marshall Fund, suggested Facebook adopt a kind of “circuit breaker,” where the share button is automatically but temporarily removed on content that starts getting deep reshares very quickly, until the content can be evaluated. Something like this, Kornbluh noted, could have stopped the disinformation video “Plandemic” (which falsely cast doubts on the severity of the COVID-19 pandemic and the safety of vaccines) from getting millions of views before it was ultimately removed from the site. Katie Harbath, the former public policy director at Facebook, noted that WhatsApp (which is owned by Facebook’s parent company, Meta) was able to cut the virality of similar kinds of “deep reshared” content by limiting how many contacts a user could forward a message to at one time, and said similar limits on sharing could be helpful at Facebook.
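Kornbluh's circuit breaker can be sketched in a few lines, too. In this toy version, a post that racks up deep reshares faster than some threshold loses its share button until someone reviews it. The threshold, the time window and the review step are assumptions for illustration, not details from the Facebook Papers.

```python
# Rough sketch of a "circuit breaker": if a post starts accumulating deep
# reshares very quickly, pull its share button until a reviewer (or a slower,
# more careful model) has looked at it. All thresholds here are invented.

import time
from collections import deque
from typing import Optional

DEEP_RESHARES_PER_HOUR_LIMIT = 500   # hypothetical tripping point
WINDOW_SECONDS = 3600                # one-hour sliding window


class ShareCircuitBreaker:
    def __init__(self):
        self.recent_deep_reshares = {}   # post_id -> deque of reshare timestamps
        self.tripped = set()             # post_ids with sharing disabled, pending review

    def record_deep_reshare(self, post_id: str, now: Optional[float] = None) -> None:
        now = time.time() if now is None else now
        events = self.recent_deep_reshares.setdefault(post_id, deque())
        events.append(now)
        # Drop reshare events that fall outside the sliding window.
        while events and now - events[0] > WINDOW_SECONDS:
            events.popleft()
        if len(events) > DEEP_RESHARES_PER_HOUR_LIMIT:
            self.tripped.add(post_id)    # trip the breaker: hide the share button

    def sharing_enabled(self, post_id: str) -> bool:
        return post_id not in self.tripped

    def clear_after_review(self, post_id: str) -> None:
        # A reviewer decided the content is fine (or removed it); restore sharing.
        self.tripped.discard(post_id)
```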
The algorithm could also identify “bad actors” who have repeatedly shared misleading content and demote all of their future posts, said Nathaniel Persily, co-director of the Stanford Cyber Policy Center. “They should always be demoted in the newsfeed, regardless if they’re talking about baseball or QAnon,” he said.
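In ranking terms, Persily's suggestion is an author-level penalty rather than a post-level one. A toy version might look like the sketch below, with an invented strike threshold and an invented demotion multiplier.

```python
# Sketch of an author-level demotion, per Persily's suggestion: accounts that
# repeatedly share debunked content get every future post downranked,
# regardless of topic. The threshold and multiplier are illustrative.

MISINFO_STRIKE_THRESHOLD = 3   # hypothetical: three debunked shares flags an account
DEMOTION_MULTIPLIER = 0.5      # hypothetical: halve the ranking score of their posts


def adjusted_score(base_score: float, author_misinfo_strikes: int) -> float:
    """Apply a blanket demotion to authors with a record of sharing misinformation."""
    if author_misinfo_strikes >= MISINFO_STRIKE_THRESHOLD:
        return base_score * DEMOTION_MULTIPLIER
    return base_score


# The demotion follows the author, not the topic:
print(adjusted_score(100.0, author_misinfo_strikes=0))  # 100.0, ranked normally
print(adjusted_score(100.0, author_misinfo_strikes=5))  # 50.0, demoted whether it's baseball or QAnon
```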
Multiple experts also pointed to more prominent user controls that would allow users to decide what content they’d like to see. While Facebook does offer quite a lot of user control options, studies have shown most users are unaware of how they work, and there’s not an intuitive way for users to signal dissatisfaction with content, said Karrie Karahalios, a computer science professor at the University of Illinois at Urbana-Champaign who has studied user experience with Facebook.
Roddy Lindsay, a former Facebook data scientist who went on to co-found a startup, wants the algorithm to prioritize content that users are likely to deem “good for the world.” It’s an admittedly subjective metric, but Facebook experimented with it by having users rate content on whether they felt it was “good” or “bad” for the world. It then used that feedback to train the algorithm to prioritize only the “good” stuff. Facebook researchers found this reduced the amount of negative content in users’ feeds, but it also reduced the number of times users logged onto Facebook, so a watered-down version of it was ultimately adopted instead.
“It’s not that these algorithms can’t be improved,” Lindsay said. “The problem is that the only decision makers for what these algorithms optimize for are the companies.”
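A stripped-down version of the experiment Lindsay describes is easy to imagine: take a sample of users' "good for the world" ratings as labels, train a model to predict them from post features, and feed that prediction into the ranking. The features, labels and model choice below are stand-ins, purely for illustration.

```python
# Toy version of the "good for the world" experiment: use a sample of users'
# survey ratings as labels, train a model to predict them from post features,
# then use that prediction as one more ranking signal.
# The features, labels and model here are invented for illustration.

from sklearn.linear_model import LogisticRegression

# Hypothetical features per post: [outrage_score, comes_from_close_friend, is_news_link]
X = [
    [0.9, 0, 1],
    [0.1, 1, 0],
    [0.8, 0, 1],
    [0.2, 1, 0],
    [0.7, 0, 0],
    [0.3, 1, 1],
]
# Survey labels: 1 = rated "good for the world", 0 = rated "bad for the world"
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# The predicted probability becomes an input to the feed-ranking score.
new_post = [[0.85, 0, 1]]
p_good_for_world = model.predict_proba(new_post)[0][1]
print(round(p_good_for_world, 2))
```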
We know from the Facebook Papers that some of these recommendations were implemented slowly or for a limited period of time, while others were, at least at the time, rejected. It’s unclear which, if any, of the recommendations from the papers have since been implemented at Facebook — Mari Melguizo, a spokesperson for the platform, pointed to the site’s recently published “content distribution guidelines,” which list the kind of content that’s demoted on the site, including spam and clickbait. “These working documents from years ago show our efforts to understand these issues and don’t reflect the product and policy solutions we’ve implemented since,” Melguizo said.
Some experts felt Facebook’s algorithm needed a more substantial overhaul to tackle its worst byproducts, including changes that might make Facebook slightly less fun for users. Laura Edelson, a computer science Ph.D. candidate at New York University who studies disinformation and political advertising on social media, said one thing revealed in the Facebook Papers was the algorithm’s prioritization of something called “downstream meaningful social interactions.” (Which is also how I describe my Friday nights.) In a nutshell, imagine two pieces of content: Post A is something you, the Facebook user, are highly likely to engage with, but nobody else in your network is. Post B is something you’re less likely to engage with, but many people in your network are. By prioritizing downstream MSI, the algorithm is more likely to show you Post B, even though it might be less relevant to you. That might expose you to more polarizing or extreme content than you would like to see. Internal research at Facebook showed that reducing how much the algorithm considered this metric for posts about civic (i.e., political) and health information helped reduce the spread of misinformation, and Facebook did implement the changes for those categories. (It is currently experimenting with further reducing how much the algorithm considers the potential for comments and shares on political content.) But Edelson argued those changes should be made for all content on the site.
“Facebook is not perfect at detecting these categories — far from it — and they’re particularly bad at the beginning of a piece of content’s life,” Edelson said. “That means it’s entirely possible that it won’t detect that civic content is civic until after it’s already gone viral.”
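Here is a toy version of that Post A / Post B tradeoff, with invented probabilities and weights; none of it comes from Facebook's actual formula. The point isn't the numbers, it's that the size of the downstream weight decides which post wins.

```python
# Sketch of the Post A / Post B tradeoff: how heavily "downstream MSI"
# (engagement a post is predicted to generate across your network after you
# interact with it) counts relative to your own predicted interest.
# Weights and probabilities are illustrative, not Facebook's actual formula.

def feed_score(p_you_engage: float, p_downstream_msi: float,
               downstream_weight: float) -> float:
    return p_you_engage + downstream_weight * p_downstream_msi


post_a = {"p_you_engage": 0.8, "p_downstream_msi": 0.1}  # relevant to you, quiet downstream
post_b = {"p_you_engage": 0.3, "p_downstream_msi": 0.9}  # less relevant, but likely to ripple

# With a heavy downstream weight, Post B wins even though it's less relevant to you:
print(feed_score(**post_a, downstream_weight=2.0))  # 1.0
print(feed_score(**post_b, downstream_weight=2.0))  # 2.1

# Dialing that weight down (as Facebook did for civic and health posts) flips the order:
print(feed_score(**post_a, downstream_weight=0.1))  # 0.81
print(feed_score(**post_b, downstream_weight=0.1))  # 0.39
```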
Jinyan Zang, a researcher at the Public Interest Tech Lab at Harvard University, said one thing Facebook could do is shift the balance of what it deems valuable. Rather than focusing on quantitative metrics like clicks and reshares, the algorithm could prioritize qualitative metrics (like how positive or relevant a post might be to a user). You might be more likely to engage with your cousin’s college boyfriend’s post about a conspiracy theory, but you might prefer to see a photo of your neighbor’s kids’ Halloween costumes. Facebook’s algorithm takes a lot of factors into account when ranking content, including more qualitative measures, so there’s no reason it couldn’t crank up that dial.
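One way to picture "cranking up that dial" is as a weighted sum over ranking signals, where the qualitative signals get heavier weights. The signal names, values and weights below are invented for illustration; Facebook's real ranking considers far more factors.

```python
# Sketch of Zang's suggestion: treat ranking as a weighted sum over signals
# and turn up the weights on qualitative ones. Everything here is invented.

QUANT_HEAVY_WEIGHTS = {"p_click": 3.0, "p_reshare": 3.0,
                       "p_positive_reaction": 0.5, "closeness_to_author": 0.5}
QUAL_HEAVY_WEIGHTS = {"p_click": 0.5, "p_reshare": 0.5,
                      "p_positive_reaction": 3.0, "closeness_to_author": 3.0}


def rank_score(signals: dict, weights: dict) -> float:
    return sum(weights[name] * value for name, value in signals.items())


conspiracy_post = {"p_click": 0.9, "p_reshare": 0.8,
                   "p_positive_reaction": 0.1, "closeness_to_author": 0.1}
halloween_photo = {"p_click": 0.3, "p_reshare": 0.1,
                   "p_positive_reaction": 0.9, "closeness_to_author": 0.9}

# Click-and-share weights favor the conspiracy post; qualitative weights favor the neighbor's photo.
print(rank_score(conspiracy_post, QUANT_HEAVY_WEIGHTS))  # 5.2
print(rank_score(halloween_photo, QUANT_HEAVY_WEIGHTS))  # 2.1
print(rank_score(conspiracy_post, QUAL_HEAVY_WEIGHTS))   # 1.45
print(rank_score(halloween_photo, QUAL_HEAVY_WEIGHTS))   # 5.6
```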
Another more dramatic change would be eliminating the ranking algorithm for the newsfeed altogether, and returning to a reverse-chronological feed. In other words, just show everybody everything people posted, rather than trying to personalize the feed just for you (and whatever the algorithm thinks you’re most likely to click, or rage-click). This notion is controversial. Some of the experts I spoke to said it would never work because it incentivizes quantity over quality — a fast road to spam — while also making it less likely that you’ll see anything relevant, interesting or engaging (in every sense of the word) on your feed.
But proponents of this idea (including Facebook whistleblower Frances Haugen) say the downsides to reverting to a reverse chronological feed may be outweighed by the benefits. A reverse chronological feed would, by definition, favor content from users who post frequently, so your newsfeed could easily get clogged with posts from a particularly active group you’re in, or memes from a particularly bored acquaintance you forgot you friended. “You would be exposed to things that are more mundane, but that content plays an important role — we’re not all having the best day ever,” Lindsay, the former Facebook data scientist, said. “People say it’s boring or noisy or has content from random pages, and my response to that is, ‘Yes, of course, but that’s where user controls come in.’”
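For what it's worth, the difference between the two approaches fits in a few lines. A ranked feed sorts by a predicted score; a reverse-chronological feed ignores the model and sorts by timestamp, which is exactly why prolific posters dominate it. The posts, dates and scores below are made up.

```python
# A ranked feed versus a reverse-chronological one, in miniature.
# The post fields and the "predicted_score" values are placeholders.

from datetime import datetime

posts = [
    {"author": "active_group", "posted_at": datetime(2021, 11, 1, 9, 30), "predicted_score": 0.2},
    {"author": "old_friend", "posted_at": datetime(2021, 10, 30, 8, 0), "predicted_score": 0.9},
    {"author": "meme_acquaintance", "posted_at": datetime(2021, 11, 1, 9, 45), "predicted_score": 0.1},
]

# Ranked feed: the model decides, and posting frequency matters less.
ranked_feed = sorted(posts, key=lambda p: p["predicted_score"], reverse=True)

# Reverse-chronological feed: no personalization, newest first; prolific
# posters dominate, which is where user controls would come in.
chronological_feed = sorted(posts, key=lambda p: p["posted_at"], reverse=True)

print([p["author"] for p in ranked_feed])         # ['old_friend', 'active_group', 'meme_acquaintance']
print([p["author"] for p in chronological_feed])  # ['meme_acquaintance', 'active_group', 'old_friend']
```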
At the furthest end of the take Scoville scale are two ideas, one optimistic and the other pessimistic:
Jeff Allen and Sahar Massachi are both former Facebook employees who worked on integrity teams (the groups tasked with finding ways to deal with all of the worst bits of Facebook, like disinformation and extremism). They said that as long as Facebook’s mission (“to give people the power to build community and bring the world closer together”) and metrics (user engagement) are at odds, there’s no amount of algorithmic tweaks that will solve its problems. Instead, Facebook needs to use different metrics that align with its stated values to measure its success.
“If you’re counting harmful content towards your success, you’re just setting yourself up for internal conflict,” Allen said.
But some experts felt that the blue sky version of Facebook realigning its metrics with its values simply wouldn’t work. I asked each of the interviewees what they would do if I waved a magic wand and gave them total control over Facebook. “I would turn off Facebook and apologize to the people of the world,” said Cathy O’Neil, a data scientist and algorithmic auditor. “They can’t actually solve their problems.”
O’Neil argued that as long as Facebook is a for-profit company that earns revenue through ads, it will only ever be able to play catchup with negative content, and that any external pressures like regulation that would actually be effective would only end up making Facebook so unprofitable that the business would collapse. Imran Ahmed, the founding CEO of the Center for Countering Digital Hate, made a similar argument, noting that asking Facebook to make changes to an algorithm that — from a business perspective — works quite effectively is “the axiom of insanity.” Instead, he called for regulations that would create costs to Facebook for the harm its product creates as an incentive for change.
“The cost of the harms created by Facebook are not in any way internal to Facebook. Users pay the price and society pays the price,” Ahmed said. “Impunity leads to terrible things, and we’re seeing an experiment in impunity now.”
In fact, the experts I spoke to almost unanimously called for regulation. Blue sky ideas are great, but we’ve been trusting Facebook to get better for 15 years, and it’s arguably worse than it’s ever been. It might be time to put some limits on its superpower, and its supreme power.
[1] Facebook’s platform runs on a system of many algorithms that all have different functions and interact with one another. This system is colloquially referred to as “the algorithm.”
Kaleigh Rogers is FiveThirtyEight’s technology and politics reporter.