The Future of Platform Power: Reining in Big Tech

July 2021, Volume 32, Issue 3, pp. 163-67

The technology base upon which democratic societies have relied has been transformed in recent years, as breakthroughs in telecommunications, data storage, and processing power have yielded a new media ecosystem dominated by a select few digital platforms. Empowered to collect data on a nearly uninhibited basis, to process those data into fine-grained behavioral profiles, and to wield those profiles to algorithmically shape each individual’s media experience, the dominant internet platforms hold overbearing economic power over the consumer, and the resulting harms cannot be addressed merely by Fukuyama’s proposed requirement that content moderation be outsourced to middleware providers. The purpose of this essay is to clarify the connection between the digital platforms’ business model and the negative externalities it generates. We conclude by proposing a more holistic view of the policy intervention required to renegotiate the balance of power between corporation and consumer—one framed by consumer privacy, algorithmic transparency, and digital competition.

This essay is a part of an exchange based on Francis Fukuyama’s “Making the Internet Safe for Democracy” from the April 2021 issue of the Journal of Democracy.

As Francis Fukuyama appropriately highlights, the opaque, ubiquitous, and growing influence of internet platforms run by a handful of unaccountable tech firms poses a serious threat to democracy. Even as social-media platforms have emerged as the places where billions of citizens the world over “go” for content, the algorithms that fuel these services have foisted harmful outcomes on their ever-expanding audiences. Engagement-focused and attention-optimizing, proprietary platform algorithms have driven the spread of disinformation, conspiracy theories, and hate speech among billions of users worldwide. Fukuyama is also correct to point out the asymmetric relationship that has arisen between these platforms and their users, with tech companies monetizing news and political content without adhering to either journalistic norms or meaningful democratic-governance standards.

About the Authors

Dipayan Ghosh

Dipayan Ghosh is codirector of the Digital Platforms and Democracy Project at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy.

Ramesh Srinivasan

Ramesh Srinivasan is professor in the department of information studies at the University of California, Los Angeles.

Yet Fukuyama’s proposal to overcome these challenges through an open market for content-curating “middleware” contains a deep structural flaw: It assumes that once government regulation establishes such a market, middleware providers that are both willing and able to counter the massive global impacts of attention-seeking algorithms will emerge. In reality, because his proposal fails to address the core workings of the consumer internet’s current business model—where content is algorithmically served to users based on predictions of what will evoke shock, outrage, or other forms of heightened emotional or psychological engagement—there is little reason to believe that his plan will alleviate that model’s ill effects. Merely regulating who can and cannot operate in the content-moderation market will not stop the socially corrosive set of practices enabled by audience segmentation and targeting: collecting data indiscriminately, profiling behavior, and then designing algorithms to maximize audience engagement. So long as this toxic model remains intact, so too will both the incentives for bad actors to create harmful content and consumers’ propensity to view it.

We agree with Fukuyama that private, unaccountable power has harmful political and economic effects, and that a democratic transformation is needed to make technology companies accountable to multiple stakeholders. But we believe that addressing in earnest the problems posed by social-media platforms will require regulating all businesses concerned, including big-tech firms as well as other players in content curation, so as to ensure that they are no longer selecting the content they serve with an eye chiefly to hoarding attention. Without regulation to address this structural problem, merely shifting responsibility for content moderation—in other words, outsourcing to third parties the work of managing the “fumes” that big-tech engines generate—will have little impact.

In the years since public attention first turned to the spread of hate speech and disinformation online, social-media firms have visibly stepped up their attempts to filter out harmful content. Nevertheless, such content is still spreading on a vast scale. It is true that these moderation efforts may, in part, be suffering because the internet giants that run them have conflicting incentives when it comes to content that is false or hateful, but also undeniably attention-grabbing. Still, it is unclear as yet whether any content-moderation solution—including the version that might result from a middleware market—can scale to effectively mitigate the threats of disinformation and hate speech that confront billions of technology users around the globe.

In fact, current problems could be accentuated in the middleware scenario: Third-party firms participating in such a market, particularly if they are small or lack social-media depth and experience, might find moderation even more difficult than the major internet platforms do. Indeed, of all potential participants in such a middleware market, the likes of Google and Facebook would be best equipped with the contextual awareness, talent, and resources to moderate content effectively—and yet even these companies have thus far been unable or unwilling to do so. Moreover, these firms’ efforts to date have involved offloading the psychologically distressing work of content moderation onto low-paid workers in countries far from Silicon Valley headquarters. These workers have little power beyond attempting to filter out the most egregious material they see.

In this context, while democracies may be harmed by the current scenario of oligopolistic firm dominance, there is little guarantee that marketplaces whose members are still oriented primarily toward profit—rather than the public interest—will restore democratic health. It is true that open competition, enabled by truly open marketplaces combined with public-interest protections, fuels innovation and may be beneficial for consumers and society. And such competition could work well in the social-media sphere if new entrants had the resources of firms such as Facebook and Google, making it possible to overcome the obstacles of both scale and incentive—in other words, if third-party content moderators were equipped to manage the vast breadth of content on a platform such as YouTube while not suffering from the incentive problem that plagues YouTube’s own moderation efforts. But that is not the world in which Fukuyama’s proposal would intervene.

As matters actually stand, there are ample grounds for skepticism that forcing the platforms themselves out of the market for content moderation and opening it to third parties will produce the desired outcome. Even if public-interest organizations not driven by the demands of the attention economy sought to enter the middleware business—a positive development in principle—their ability to do so would hinge on the platforms’ providing open access to their systems, potentially through public application programming interfaces (APIs).1 In fact, such a change might be beneficial on a number of levels: Setting the middleware question aside for a moment, allowing third parties to “plug in” to the platforms could have positive implications for research, journalism, and technical accountability. Yet the platform companies have resisted any such move, and there is little reason to believe that they will change course unless regulatory action forces them to do so in ways that go beyond what Fukuyama proposes.
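
To illustrate, in a purely hypothetical sketch, what such open access could enable, consider the following Python fragment, in which a public-interest curator re-ranks a user’s feed through a platform API. The endpoint, data fields, and scoring rule are our own assumptions for the sake of exposition; no platform currently exposes an interface of this kind.

```python
# Hypothetical sketch only: how a public-interest "middleware" curator might
# re-rank a feed if platforms exposed an open, documented API. The endpoint
# URL, JSON fields, and scoring rule below are illustrative assumptions,
# not any platform's actual interface.
import requests  # widely used third-party HTTP client

PLATFORM_API = "https://api.example-platform.com/v1/feed"  # assumed endpoint

def fetch_candidate_posts(user_token: str) -> list[dict]:
    # Assumes the platform would return candidate posts together with
    # provenance and fact-check metadata -- precisely the access that
    # platforms now withhold from third parties.
    resp = requests.get(PLATFORM_API,
                        headers={"Authorization": f"Bearer {user_token}"})
    resp.raise_for_status()
    return resp.json()["posts"]

def public_interest_rank(posts: list[dict]) -> list[dict]:
    # A curator applying its own published criteria: favor verifiable
    # sources and demote flagged items, rather than maximize engagement.
    def score(post: dict) -> float:
        credibility = post.get("source_credibility", 0.5)
        flagged = 1.0 if post.get("fact_check_flag") else 0.0
        return credibility - flagged
    return sorted(posts, key=score, reverse=True)
```

Even so simple a curator depends entirely on the platform granting the access assumed in the first function, which is why regulatory compulsion, rather than voluntary cooperation, is the operative question.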

What, then, is the best path forward in addressing the spread of disinformation and other harmful speech online? Policy makers should recognize the exploitive aspects of companies’ behavior-profiling and engagement-maximizing practices, and guard against them by affording platform users greater rights in the face of the industry.

Internet companies such as Facebook and Google relentlessly collect massive quantities of data in order to build profiles of users; leverage content-curation and ad-targeting algorithms to keep users engaged; and use exploitive tactics (such as the selection of “personalized” content designed to cement users’ attention) to grow their businesses around the world at the expense of smaller would-be competitors. In each of these three areas, big-tech firms have near-complete discretion to take, unilaterally, any steps they wish. If each of these activities were treated as an economic exchange between the internet firm and the individual user, every such exchange would favor the party with far greater economic power: the firm.

A tech firm can, for instance, determine the quantity and types of data that it wishes to collect; can aggregate such data with other, third-party data gathered from data brokers; and can make self-serving decisions about how long to retain these data. From this baseline it can design its content-curation algorithms however it wishes to maximize engagement; and it can employ any technical means at its disposal to expand its ad networks, explore new avenues for data collection, and partner with or acquire other companies across the web. Against this backdrop, invisible and manipulative practices of the kind that came to light in the Cambridge Analytica scandal—where a third-party firm was found to have harvested the data of Facebook users without consent and exploited them for political microtargeting—are far from exceptional; rather, they are the natural outgrowths of an unaccountable business model.
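
The incentive problem can be made concrete with a brief, schematic Python sketch of a feed ranker whose sole objective is predicted engagement. The field names, weights, and profile structure are our own illustrative assumptions rather than any platform’s actual code; the point is simply that when attention is the quantity being maximized, nothing in the objective penalizes content for being false, divisive, or hateful.

```python
# Schematic illustration only: a toy feed ranker that optimizes predicted
# engagement. Fields, weights, and the profile format are hypothetical
# assumptions for exposition, not any platform's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    predicted_clicks: float   # modeled from the user's behavioral profile
    predicted_shares: float
    outrage_score: float      # proxy for emotionally charged content

def engagement_score(post: Post, profile: dict) -> float:
    # Affinity, learned from past behavior, amplifies whatever the user
    # already reacts to most strongly; the weights are arbitrary examples.
    affinity = profile.get("topic_affinity", {}).get(post.topic, 1.0)
    return affinity * (0.5 * post.predicted_clicks
                       + 0.3 * post.predicted_shares
                       + 0.2 * post.outrage_score)

def rank_feed(posts: list[Post], profile: dict) -> list[Post]:
    # Highest predicted engagement first; accuracy and civic value never
    # enter the objective being maximized.
    return sorted(posts, key=lambda p: engagement_score(p, profile),
                  reverse=True)
```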

And it is these same practices that propel the online harms drawing wide concern from policy makers and the public, including disinformation, the spread of hate online, and algorithmic discrimination. For instance, disinformation operators and legitimate political actors alike leverage the behavioral knowledge gathered by platforms and data brokers to target divisive messages with precision, focusing on and widening the thin cracks in democratic societies. In light of this, robust regulatory intervention is needed to restore agency to consumers, through competition, transparency, and perhaps most critically, personal-data rights.

To this end, the most urgent need is for a comprehensive privacy law that allows consumers to decide what data can be collected about them and what attributes can be associated with them, so as to provide a legal foundation for users to gain ownership of and sovereignty over their personal data. For all the concerns that advocates have raised over the ways in which the EU’s General Data Protection Regulation (GDPR) has been enforced in the three years since it took effect, our view is that this privacy regime is, as a matter of public policy, theoretically sound. The GDPR affords consumers novel capacities to access data possessed by commercial providers, consent to its collection and retention, and demand deletion (via the right to be forgotten).

In a digital economy that commoditizes consumers’ experience of media and treats their attention along with their data as currency in the online ecosystem, we see such a regime—if appropriately enforced—as mere table stakes: The consumer deserves as much capacity to impose his or her wishes concerning personal data on the industry as the leading firms have heretofore had to collect and use the consumer’s data at will. Such efforts might also be tied to a broader exploration of policies to address harmful algorithmic impacts, such as the gig economy’s exacerbation of economic inequalities. Where firms develop opaque algorithms that curate content and target ads in ways poorly understood by typical consumers, requirements for algorithmic transparency and algorithmic audits would shine a light on good and bad practices, enabling experts and critics to push the industry in less harmful directions.2 And where firms employ coercive strategies that crush both competitors and innovation, stronger competition policies would enable regulators to counteract the dominance of the select few.
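
What an algorithmic audit might examine can also be sketched in miniature. The fragment below, under assumed field names and a threshold borrowed from the familiar “four-fifths” rule of thumb, checks whether a recommendation or ad-delivery log exposes demographic groups to a given category of content at sharply different rates; real audit frameworks proposed in the literature are far more extensive, but even this minimal test presupposes the access and transparency requirements described above.

```python
# Minimal illustrative sketch of one audit check: are different demographic
# groups shown a category of content at sharply different rates? The field
# names and the 0.8 threshold (echoing the "four-fifths" rule of thumb) are
# assumptions for exposition, not a prescribed audit standard.
from collections import defaultdict

def exposure_rates(log: list[dict], category: str) -> dict[str, float]:
    shown = defaultdict(int)
    total = defaultdict(int)
    for record in log:                       # one record per item shown to a user
        group = record["user_group"]         # assumed demographic label in the log
        total[group] += 1
        if record["content_category"] == category:
            shown[group] += 1
    return {g: shown[g] / total[g] for g in total}

def disparity_flag(rates: dict[str, float], threshold: float = 0.8) -> bool:
    # Flag when the least-exposed group sees the category at less than
    # `threshold` times the rate of the most-exposed group.
    if len(rates) < 2 or max(rates.values()) == 0:
        return False
    return min(rates.values()) / max(rates.values()) < threshold
```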

Regulators can recognize, for example, that the dominant platforms not only are monopoly firms, but are indeed natural monopolies—companies that gain incrementally higher value as they expand. In such cases, as with railroads or electric utilities, it makes economic sense for societies to invest in only one player. If policy makers approach the issue from this angle and consider regulating the platforms as utilities, they may find new political openings for action in the areas we have outlined.

The challenges that our democracies face are not limited to problems of content moderation: They lie in profound asymmetries of information and power. Hidden data collection allows companies to learn intimate details about us while we still do not know what they know, let alone how such information might be used to influence and manipulate our behavior. We have arrived at a moment in which digital interfaces have become the gateway to much of our economic, political, cultural, personal, and global lives. It is high time we consider measures such as fundamental competition reform, third-party–audit requirements, and user ownership of and sovereignty over data in order to ensure that the technologies of today and tomorrow will serve and support us all.

 

NOTES

1. Tom Wheeler, “Using ‘Public Interest Algorithms’ to Tackle the Problems Created by Social Media Algorithms,” Brookings Institution, 1 November 2017, www.brookings.edu/blog/techtank/2017/11/01/using-public-interest-algorithms-to-tackle-the-problems-created-by-social-media-algorithms.

2. Emilee Rader, Kelley Cotter, and Janghee Cho, “Explanations as Mechanisms for Supporting Algorithmic Transparency,” Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, April 2018, https://doi.org/10.1145/3173574.3173677; Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016).

 

Copyright © 2021 National Endowment for Democracy and Johns Hopkins University Press