The Future of Platform Power: Solving for a Moving Target

Journal of Democracy, Volume 32, Issue 3 (July 2021), pp. 173-77

This essay is a part of an exchange based on Francis Fukuyama’s “Making the Internet Safe for Democracy” from the April 2021 issue of the Journal of Democracy.

I am very grateful to the Journal of Democracy for hosting this exchange on my April 2021 article “Making the Internet Safe for Democracy,” which summarized the work of the Stanford Working Group on Platform Scale. The comments and criticisms are uniformly thoughtful, and many echo the concerns that motivated our group at the outset.

Our working group is continuing to develop the idea put forth in the article about “middleware” as a solution to excessively concentrated platform power. We hope to have a prototype of a middleware application ready for a public demonstration by the end of the year. Since my previous article was written, our thinking has evolved somewhat regarding both the nature of the problem and potential solutions. I realize that this makes our proposal something of a moving target, but the problem itself has evolved rapidly.

About the Author

Francis Fukuyama is Olivier Nomellini Senior Fellow at Stanford’s Freeman Spogli Institute for International Studies. 


Our group’s central proposal was to outsource the moderation of political content from the big platforms—Twitter, Facebook, and Google—to a layer of competitive middleware companies as a means of reducing the platforms’ power over political speech. Three of the four commentaries take us to task for failing to grapple with what their authors regard as the central underlying problem posed by the internet today. That problem is the spread of hate speech, conspiracy theories, fake news, and other toxic content, which has undermined democracies around the world; stoked the rise of extreme-right political forces; and led perhaps a third of U.S. voters to believe that the 2020 presidential election was subject to massive fraud.

Robert Faris and Joan Donovan, Nathalie Maréchal, and Dipayan Ghosh and Ramesh Srinivasan all argue in different ways that middleware would not stem the flow of toxic content, and in certain ways might actually intensify it. Faris and Donovan write that I “effectively dismiss the impact and threat of bad actors that exploit the openness of online forums to distort and manipulate political debate and suppress participation by historically marginalized voices through harassment and threats” (p. 152).

I do not dismiss this threat: It is the central one affecting U.S. democracy today. This threat is unfortunately out there in American society, driven by a host of political, economic, cultural, and social forces. Technology contributes to the danger, but it is not the sole source of it. What three of our critics do not take into account is the illegitimacy of using either public or private power to suppress this hazard. Freedom of speech is enshrined in the basic laws of every modern liberal democracy, including the U.S. Constitution’s First Amendment. The First Amendment protects the right of citizens to say what they want, including things that are untrue, opposed to democratic values, scurrilous, and so forth, short of active incitement of violence or criminal behavior.

The object of public policy therefore cannot be the elimination of toxic content. Rather, what we have focused on is the artificial amplification of that content by large platforms that have the power to spread it very widely and rapidly, or conversely to deliberately silence particular political voices. Ghosh and Srinivasan would like to stamp out hate speech even on small platforms, but I do not see how this is remotely consistent with the First Amendment. The marketplace of ideas has indeed failed, but it has failed in large measure because the big platforms have the power to distort markets in unprecedented ways by leveraging their possession of user data.

Faris and Donovan assert that my article sides with conservatives in their criticism of the big platforms. This is true only in a contingent sense. The proper design of a democratic institution should not depend on who holds power at a given moment. I do not like toxic right-wing content any more than Faris and Donovan do, and I was happy that Twitter and Facebook de-platformed Donald Trump in the wake of the January 6 assault on the U.S. Capitol. But the underlying power of these private, for-profit platforms to silence a major voice in U.S. politics should be troubling to any supporter of liberal democracy, as it was for figures from German chancellor Angela Merkel to Russian opposition leader Alexei Navalny. Those who want to devolve responsibility for protecting democracy to the platforms need to consider how they would feel if Twitter were controlled not by Jack Dorsey, but by a media tycoon in the mold of Rupert Murdoch or Julian Sinclair Smith. The focus should be on the platforms’ underlying power, and not the identity of those who happen to control them at any given point in time.

It would be more straightforward simply to enact regulations requiring the platforms to filter out bad information or even present more politically balanced coverage, somewhat as the Federal Communications Commission’s old Fairness Doctrine did for traditional media outlets. The EU and individual European countries have adopted or are weighing digital-service regulations that directly address platforms’ content-moderation decisions, and some countries have also leveraged public broadcasting to rebalance the information space.

We considered this regulatory approach in our Stanford University white paper, but we rejected it as politically unrealistic for the United States at the present moment. Many countries in northern Europe, such as Germany or Denmark, can regulate the internet because they maintain a high degree of social consensus and trust in government. This, unfortunately, is not the situation in the United States today. What would a contemporary version of the Fairness Doctrine look like now, given our degree of polarization? Would media be forced to give balanced coverage to claims that the 2020 election was stolen, a falsehood that is nonetheless believed by perhaps a third of U.S. citizens? Would today’s Congress ever vote to empower the government to suppress questioning of an election’s fairness?

Alternatively, Nathalie Maréchal suggests using privacy law to limit the power of the large platforms, in a manner similar to Europe’s General Data Protection Regulation (GDPR). We strongly agree that the platforms abuse their users’ privacy rights, and derive huge competitive advantages from being able to gather user data in one market and then exploit those data to conquer other markets. The problem that is being solved here is, however, a bit orthogonal to the one that we are addressing. It is not clear that GDPR-type restrictions on data use will really erode the power of the big platforms to amplify or suppress political messages. Facebook and Google are already sitting on huge troves of user data, and so limiting future data collection may actually benefit them at the expense of new competitors. This is not to say that privacy protections are not important, but rather that they do not address the central political threat to democratic outcomes posed by platform scale.

My colleague Daphne Keller’s piece is the only one of the four critiques that recognizes the limits on intervention created by the First Amendment. As she notes,

The First Amendment precludes lawmakers from forcing platforms to take down many kinds of dangerous user speech, including medical and political misinformation. That dooms many platform-regulation ideas from the political left. The Amendment also means that Congress cannot strip platforms of editorial control by compelling them to spread particular messages. That sinks many proposals from the right (p. 169).

I regard most of Keller’s subsequent comments as friendly amendments to our proposal. She points to some real difficulties. There is a technological barrier to processing the sheer volume of information that the platforms convey. Could artificial intelligence be leveraged by small middleware providers to filter this mass of content? We do not know. This is related to the business-model problem, which my article openly acknowledged: If content curation is complex and costly, how will middleware firms be incentivized to provide these services? The view of our group was that making middleware possible would require regulation forcing the platforms to share a portion of their ad revenue. Finally, Keller points to the problem of protecting privacy. This is a particular issue with Facebook, on which content is shared primarily among circles of friends. How a middleware provider could intervene without getting explicit permission from all members of a circle is not clear, though I would note that this is less of a problem for platforms such as Twitter or Google.

As noted earlier, our thinking has evolved a bit since the original article. In our white paper we were agnostic as to how deeply middleware would penetrate the platform APIs (application-programming interfaces). At one extreme, one could imagine middleware taking over the entire user interface from Facebook and Google, and presenting data from these platforms in a completely different way. A much milder version of middleware would function as an add-on to the content served up by the platforms. It might look similar to the content labels that Twitter has begun to implement, or resemble the service offered by the startup NewsGuard, whose browser plug-in provides credibility rankings for different news sources.
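To make the “add-on” version concrete, here is a minimal sketch in TypeScript of how a light middleware layer might attach labels to content the platform has already selected. Everything in it is hypothetical: the PlatformPost and MiddlewareLabel shapes, the /rate endpoint, and the provider URLs are placeholders for illustration only, not interfaces from our white paper, from Twitter, or from NewsGuard.

```typescript
// Sketch of the "light" middleware model: the platform still selects and
// moderates content; user-chosen middleware providers only add labels.

interface PlatformPost {
  id: string;
  author: string;
  text: string;
  sourceDomain?: string; // domain of a linked news article, if any
}

interface MiddlewareLabel {
  provider: string;                                   // which service produced the label
  credibility: "high" | "mixed" | "low" | "unrated";  // coarse rating shown to the user
  note?: string;                                      // short explanation
}

type LabeledPost = PlatformPost & { labels: MiddlewareLabel[] };

// A middleware provider exposes one capability: given a post the platform is
// already serving, return zero or more labels. It never removes the post.
async function labelPost(
  post: PlatformPost,
  providerUrl: string,
): Promise<MiddlewareLabel[]> {
  if (!post.sourceDomain) return []; // nothing to rate
  const res = await fetch(
    `${providerUrl}/rate?domain=${encodeURIComponent(post.sourceDomain)}`,
  );
  if (!res.ok) return [{ provider: providerUrl, credibility: "unrated" }];
  const label = (await res.json()) as MiddlewareLabel;
  return [label];
}

// The platform (or a browser plug-in riding on top of it) composes the feed:
// content selection stays with the platform, annotation moves to the
// providers the user has chosen.
async function annotateFeed(
  feed: PlatformPost[],
  chosenProviders: string[],
): Promise<LabeledPost[]> {
  return Promise.all(
    feed.map(async (post) => {
      const labels = await Promise.all(
        chosenProviders.map((p) => labelPost(post, p)),
      );
      return { ...post, labels: labels.flat() };
    }),
  );
}
```

The point of the sketch is only that, in the light version, the provider annotates rather than selects: the feed the platform assembles is unchanged, and a user who distrusts one provider’s ratings can switch to another without leaving the platform.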

We have come to the view that only the lighter version of middleware will be politically feasible. The big platforms already do a great deal of very time-consuming and difficult moderation of nonpolitical content, filtering out pornography, graphic violence, criminal incitement, and the like. This type of content moderation is complex and costly, and needs to remain the responsibility of the well-resourced platforms. Middleware would necessarily have to ride on top of this nonpolitical content moderation. (Where the dividing line between political and nonpolitical content lies is today a fraught question, of course, given the politicization of everything from sports to mask-wearing, and this is an issue that middleware implementations would have to deal with in the future.)

There is another issue on which we have focused only lately: takedowns. Much of the criticism of the platforms has revolved around their deliberate promotion of toxic information, but they also take down a great deal of politically relevant content. Users are obviously less attuned to these decisions, since they are by their very nature invisible. We have seen a number of studies and have heard a great deal of anecdotal evidence, particularly from outside the United States, of highly questionable platform decisions to take down political content in response to demands by governments or politicians, many of them authoritarian. It is not just Donald Trump who has been silenced: There are numerous reports of platforms suppressing critics of India’s Prime Minister Narendra Modi, or pro-Palestinian posts in connection with the recent conflict in Gaza. Such takedowns are as potentially harmful to democratic discourse as the amplification of other types of content. Would we want to allow middleware not just to label, but to block problematic content, as the platforms currently do? Alternatively, would we want to encourage the platforms not to take down material, but simply to have middleware services label it? This issue needs much further thought.
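To make the label-versus-block choice concrete, the fragment below, in the same hypothetical vocabulary as the earlier sketch, contrasts the two policies; the names are placeholders, not features of any existing platform or middleware service.

```typescript
// Hypothetical set of actions a user's chosen middleware provider could be
// permitted to take on content it flags.
type MiddlewareAction = "label" | "block";

interface MiddlewarePolicy {
  allowedActions: MiddlewareAction[];
}

// Option 1: labeling only. Flagged content stays visible but annotated;
// removal remains the platform's decision.
const labelOnly: MiddlewarePolicy = { allowedActions: ["label"] };

// Option 2: label and block. The provider may also hide flagged content from
// the users who selected it, mirroring the takedown power platforms hold today.
const labelAndBlock: MiddlewarePolicy = { allowedActions: ["label", "block"] };

// A platform enforcing one of these policies would refuse any action the
// policy does not allow.
function isPermitted(action: MiddlewareAction, policy: MiddlewarePolicy): boolean {
  return policy.allowedActions.includes(action);
}
```

Even under the second option, a block would apply only to users who had opted into that provider, which is narrower than a platform-wide takedown.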

Our working group’s promotion of middleware rests on a normative view about the continuing importance of freedom of speech. I deplore the toxicity of political discourse in the United States and other democracies today, but I am not willing to try solving the problem by discarding the right to free expression. In addition, middleware is the most politically realistic way forward. Approaches that favor one side of the political divide at the expense of the other—whether proposals from the left for state regulation of toxic far-right content, or proposals from the right for modifying Section 230 of the Communications Decency Act—are unlikely to be politically feasible. Stricter antitrust enforcement may remedy some of the economic harms caused by the large platforms, but it would not directly address the political harms that I have identified. Moreover, antitrust law moves extremely slowly, and even if it eventually led to an AT&T-style breakup of Facebook or Google, this would likely prove ineffective given network economies. A national privacy law is highly desirable in itself, but may actually serve to entrench Google and Facebook rather than weaken them.

Middleware, by contrast, does not obviously benefit one side of the current polarized divide at the expense of the other. It might deepen that polarization for some users, but there is at least a chance of broader agreement on the desirability of this approach. As Daphne Keller argues, there are many problems yet to be addressed in implementing the middleware strategy, but we will not know if they are solvable unless we try.

Copyright © 2021 National Endowment for Democracy and Johns Hopkins University Press
