The Future of Platform Power: Quarantining Misinformation

Journal of Democracy, July 2021, Volume 32, Issue 3, pp. 152–56

Amid the roiling debate over the internet’s impact on democracy, we reflect on Francis Fukuyama’s recent contribution to these pages, in which he proposes that internet companies open their platforms to outside content-moderation services. We support exploring new approaches that promote greater user control and autonomy on social-media platforms but take issue with his narrow definition of the problem. While better technology will not resolve the core issues of agency and power in democratic systems, improved design can make a positive impact. Whole-of-society problems will ultimately require whole-of-society thinking and action.

This essay is a part of an exchange based on Francis Fukuyama’s “Making the Internet Safe for Democracy” from the April 2021 issue of the Journal of Democracy.

Far removed from the techno-optimism inspired a decade ago by prodemocracy movements using digital tools to organize, today’s discussions are mired in a gloomier—and more realistic—place. Scholars and pundits now frequently argue that the internet, in its current form, is a threat to democracy. Much of their critique takes aim at the consolidation of power in the hands of large internet platforms such as Facebook, Twitter, and Google, and what these companies do—and do not do—with this power.

The eminent political scientist Francis Fukuyama lends his voice to the coalition of the concerned in a recent article in these pages, summarizing the work of a Stanford-based group of scholars who examined the scale, scope, and power of platforms, the potential for their abuse, and possible remedies. Arguing that the key threat to democracy is the platforms’ potential to sway election outcomes, he proposes a strategy to reduce platform power: Require companies to open their platforms to outside content-moderation services called “middleware,” thereby giving users more options to curate the material they see online.

From our perspective, Fukuyama’s diagnosis of the digital threats to democracy is far too narrow. He effectively dismisses the impact and threat of bad actors who exploit the openness of online forums to distort and manipulate political debate and to suppress participation by historically marginalized voices through harassment and threats. Fukuyama also sidesteps the “elephant in the room”: right-wing political forces in the United States that have embraced a decidedly antidemocratic strategy which leverages the amplification power of social media to sway public attention. This same problem extends around the globe. Governments in India, Turkey, Brazil, and the Philippines, to name a few, have applied authoritarian impulses to the internet, manipulating online discourse and harassing opponents. While there are numerous digital threats to democracy, more technology cannot solve the problem of misinformation at scale. On the positive side, middleware would offer users greater choice in how the content they encounter is moderated. Yet we should also anticipate that middleware will be co-opted by the same forces that are currently working to undermine democracy with all the tools at their disposal.

About the Authors

Robert Faris

Robert Faris is senior researcher at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy.

Joan Donovan

Joan Donovan is research director at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy.

In the United States, where policymakers have grown increasingly wary of the immense power of internet platforms, openness to new regulatory action is building on both sides of the aisle. This might include antitrust action or revisions to Section 230 of the 1996 Communications Decency Act, which shields platforms from liability for their users’ actions. While many laud a growing bipartisan consensus that large internet platforms are not serving the public interest, this consensus is no more than a veneer covering profound disagreement along political lines.

Critics on the political left are far more likely to assert that platforms are not doing enough to root out extremism, online abuse, and disinformation. This perspective is informed by the brazen disinformation campaigns that circulated on social-media platforms during the 2016 and 2020 U.S. elections, and more recently by the flood of medical disinformation related to the covid-19 pandemic. A core argument in favor of greater platform action, to which our research has contributed, is that the openness of the internet enables malicious actors to build their power by spreading disinformation, exacerbating social divisions, and harassing or threatening their opponents into silence, and that big-tech companies bear a responsibility to address harms perpetrated on their platforms. Adding further cause for concern, operating disinformation campaigns is politically expedient and immensely profitable.

The critique from the right is that platforms disproportionately and unfairly target conservative voices. In this view, the problem is not that content moderation does too little but that it goes too far. Emblematic of this divide are opinions on the fate of Donald Trump’s social-media accounts. Even among those on the left who are uneasy about the power of platforms and the troubling precedent of selectively cutting off political leaders, there is support for enforceable standards of conduct that apply even to presidents. Many on the right, meanwhile, view the deplatforming of Trump and other right-wing figures as motivated solely by partisanship. We should be clear that Trump and his allies exploited the openness and scale of social media in an attempt to overturn the results of the 2020 presidential election. His call to action leading up to the January 6 insurrection would not have been as effective without platforms’ power to facilitate mass protests—a feature that we celebrated until recently.

Although he does not frame it in these terms, Fukuyama enters into this debate and sides with the conservatives. Referring to the false claims by Trump supporters that the Democrats stole the 2020 election, Fukuyama writes that it is “neither normatively acceptable nor practically possible to prevent them from expressing opinions to this effect. For better or worse, people holding such views need to be persuaded, and not simply suppressed.” Noting that we cannot rely on Twitter CEO Jack Dorsey or Facebook’s Mark Zuckerberg to do “the right thing,” he then shifts attention to the “underlying problem, which is one of excessively concentrated power” (p. 39). This feels a bit like rhetorical sleight of hand. In essence, Fukuyama asserts that platforms should not moderate political speech and that it would not work anyway, so we should focus on the accumulation of power by platforms. But when politicians gain clout by abusing digital services, perhaps it is not the tech companies whose power should worry us the most.

Fukuyama seems to be making a renewed argument in favor of the marketplace of ideas as the best hope for U.S. democracy, or perhaps simply throwing in the towel on addressing political disinformation. In our view, the evidence clearly shows that the marketplace of ideas in the United States is broken and that more speech, something the internet has been wildly successful at producing, has not been a remedy for bad speech. In a letter to the United Nations regarding disinformation’s impact on freedom of expression, our research team argues that disinformation is displacing the truth by virtue of both its sheer volume and campaign operators’ success in exploiting social media’s very design.1

There is an institutionalist argument that we need gatekeepers for the marketplace of ideas to function well, an argument that Fukuyama has made in the past. Internet platforms have uneasily moved into this role in response to a combination of market and societal pressures. A key question is whether they should be gatekeepers at all. Critics of the current system rightly point out that social-media platforms make their decisions with profitability in mind, an incentive structure that does not always align with the public interest. It is also true that the decisions companies make are shaped by political and social pressures. In fact, politics shapes technology recursively as technology shapes politics, a dynamic that has played out around the globe to the detriment of journalism, democracy, and public health.

Right now, the platforms are attempting to enforce norms of behavior that they are making up as they go. Many commentators take issue with companies’ relative lack of transparency in developing and applying content-moderation policies, as well as the many errors in implementation. Given the central role that platforms play in modern public life, there is also widespread disquiet over private actors serving as arbiters of truth and acceptable conduct. Yet outside of a few platforms that specialize in hosting extremist content, very few argue that lines should not be drawn marking at least some conduct as impermissible. As a cost of their success, platforms must now balance the needs and desires of a vast number of stakeholders, many of them highly vocal. The ongoing debate—likely never to be resolved—is over where exactly to draw the lines and how to better ensure that content-moderation standards serve the public interest, balancing the value of free speech against the damage to institutions and individuals wrought by saboteurs, scammers, and trolls.

We can agree that the current system is not up to the challenges at hand, that it is at times unjust and arbitrary, and that platforms have a responsibility to do better. That in itself, however, is not a good argument for dismantling the system or paring it back before better alternatives can be identified.

Allowing users to choose outside algorithms to curate their information feeds is not a new idea, but it does have considerable allure. Such a system could tap into the underutilized expertise of librarians in sorting out knowledge from information. Thousands of newly hired librarians could form the core of a middleware industry that ensures that a semblance of the truth still circulates.

A government-mandated middleware architecture would face its own set of political hurdles, but enabling such services to “plug in” is something that platforms could explore voluntarily. Limited experiments would give a better understanding of how middleware services operate in real-life settings and whether the benefits exceed the costs. Researchers must also consider how effective giving users a choice of filters will be when many may know little about how algorithms curate content in the first place.
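
To make the “plug in” idea concrete, the sketch below shows one way such an interface could look: the platform assembles a user’s candidate feed and then delegates filtering and ranking to whichever outside service the user has selected. The interface, class names, and keyword-based toy curator are our own illustrative assumptions, not a description of any existing platform API.

```python
# A minimal, hypothetical sketch of a middleware hook. Every name here is
# invented for illustration; no platform exposes such an interface today.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class FeedItem:
    item_id: str
    author: str
    text: str


class MiddlewareCurator(Protocol):
    """Interface a third-party curation service would implement."""

    def curate(self, user_id: str, candidates: list[FeedItem]) -> list[FeedItem]:
        """Return the candidate items filtered and re-ranked for this user."""
        ...


class KeywordDownrankCurator:
    """Toy curator: pushes items containing flagged phrases to the bottom."""

    def __init__(self, flagged_phrases: list[str]) -> None:
        self.flagged_phrases = [p.lower() for p in flagged_phrases]

    def curate(self, user_id: str, candidates: list[FeedItem]) -> list[FeedItem]:
        def is_flagged(item: FeedItem) -> bool:
            return any(p in item.text.lower() for p in self.flagged_phrases)

        # sorted() is stable: unflagged items keep their relative order ahead
        # of flagged ones, which are demoted rather than removed outright.
        return sorted(candidates, key=is_flagged)


def render_feed(user_id: str, candidates: list[FeedItem],
                curator: MiddlewareCurator) -> list[FeedItem]:
    """Platform-side hook: delegate curation to the user's chosen service."""
    return curator.curate(user_id, candidates)
```

Even this skeletal design surfaces the governance questions at stake: who certifies the curation services, and what baseline of moderation the host platform itself retains.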

The middleware concept would functionally break up platforms into smaller pieces with different priorities and policies for presenting ideas, viewpoints, and sources of information. It is unclear, however, how much this would differ from the current system, in which algorithms are informed by user choices and tailored social-media feeds serve each user differently. The middleware approach would more explicitly embrace fragmentation and would work against the notion of a unified public sphere. In essence, it is fragmentation by design. How this would work in practice is anyone’s guess. An interesting question is how many users would end up in echo chambers when given the express option of taking refuge there, rather than being gently nudged into one. There is a real possibility that such a system would only feed radicalization and polarization further.

The impact of middleware would depend on who steps forward to offer products and who wins subscribers. In a rose-tinted vision of the future, a large majority of social-media users would select middleware providers that serve up the best that journalism has to offer and relatively few would choose middleware that inundated consumers with disinformation-ridden, hyperpartisan clickbait. A more likely scenario is that users would choose middleware that strongly resembles their current media diets.

If users had more choices in curating their digital worlds, would platforms be any less responsible for the damage caused by malicious actors? How would outsourcing content curation to third parties help to make it more accountable and attuned to the public interest? Would these new governors, as Kate Klonick calls online platforms, be any less likely to abuse their positions of power to disrupt and distort democratic systems?2 The vexing question of where to set the floor would remain. Would the platforms see the addition of third-party algorithms as a justification to loosen their own moderation of racists and white supremacists?

The emergence of digital platforms has not altered the oldest and most profound challenges of democracy: how to empower citizens to participate in collectively charting a course for the nation, while keeping in check the concentration of political power and preventing the emergence of oppressive forces. In the digital realm, there is much to be done to ensure that all have equal footing to participate in democratic processes and to prevent manipulators, scammers, operatives, and trolls from playing an outsized role in public debate. In the immediate term, continuing to demand that companies address manipulation and abuse on their platforms is as clear an articulation of democracy as there is in the digital age.


NOTES

1. Submission to the UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression by the Technology and Social Change Project, 15 February 2021, https://mediamanipulation.org/research/submission-un-special-rapporteur-promotion-and-protection-right-freedom-opinion-and.

2. Kate Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech,” Harvard Law Review 131 (April 2018): 1598–1670.


Copyright © 2021 National Endowment for Democracy and Johns Hopkins University Press