The Limits of Authoritarian AI

April 2026 · Volume 37, Issue 2 · Pages 5–17

AI is often portrayed as a frictionless accelerator of authoritarian control. In reality, AI systems force rulers into an unavoidable calibration dilemma. Any predictive system requires a decision threshold: lowering it creates backlash through collateral repression (false positives), while raising it creates blind spots for genuine threats (false negatives). This structural volatility produces “threshold whiplash”—cycles of tightening and abrupt loosening—exemplified by China. Far from a silver bullet, AI bureaucratizes uncertainty, compelling autocrats to choose which vulnerability to expose. Prodemocracy actors can exploit these vulnerabilities by demystifying algorithmic power, establishing protective norms, and challenging the “panopticon bluff” through strategic resistance.

Artificial intelligence is often called a silver bullet that helps authoritarian rulers by making repression cheaper, faster, and more precise. This is not necessarily wrong, at least in the short run. Digital surveillance can chill speech, identify opponents, and help regimes scale up coercion. But the strongest claim about AI’s political power, the promise of predictive, preemptive, large-scale control, runs into a binding constraint.

About the Authors

L. Jason Anastasopoulos

Jason Anastasopoulos is associate professor of public administration and policy at the University of Georgia and associate editor of Public Administration Review.


Jie (Jason) Lian

Jie (Jason) Lian is a postdoctoral research fellow at the Nonviolent Action Lab and a visiting fellow at the Ash Center for Democratic Governance and Innovation at Harvard University.


Any system that deals in probabilities must decide how easy or hard it should be for a person, a behavior, or a message to be labeled “risky.” Lower the threshold and the regime reduces false negatives and catches more potential challengers, but it also raises false positives, sweeping more innocents into its net and provoking backlash. Raise the threshold of “risky” and the regime limits collateral repression and protects its legitimacy. But it also misses genuine threats, creating blind spots that opponents can exploit. Put simply, to predict is to err, at least somewhat. The only choice is which type of error the regime is more willing to tolerate.
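To make the tradeoff concrete, consider a minimal sketch, our own illustration rather than any regime’s actual system, in which risk scores for a large harmless majority and a small pool of genuine threats overlap. Every number below is an assumption chosen only to show the mechanics: each cutoff trades one kind of error for the other.

```python
# Illustrative only: overlapping score distributions mean no cutoff
# separates "harmless" from "threat" cleanly; all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
harmless = rng.normal(0.0, 1.0, 999_000)  # risk scores of ordinary citizens
threats = rng.normal(2.0, 1.0, 1_000)     # risk scores of genuine challengers

for cutoff in (1.0, 2.0, 3.0):
    false_pos = int((harmless >= cutoff).sum())  # innocents swept up
    false_neg = int((threats < cutoff).sum())    # real threats missed
    print(f"cutoff {cutoff}: {false_pos:,} false positives, "
          f"{false_neg:,} false negatives")
```

In this toy setup, lowering the cutoff from 3.0 to 1.0 rescues most of the missed threats but multiplies the wrongly flagged innocents roughly a hundredfold. No setting eliminates both errors; the regime can only choose which to bear.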

We call this problem “the autocrat’s calibration dilemma.” It is not a bug that disappears with the next update, more data, or better engineering. It is a structural feature of prediction systems when they are applied to political realities, especially when credible threats are rare, citizens adapt to surveillance, and bureaucrats and party loyalists have incentives to manipulate the metrics.

In the People’s Republic of China (PRC), this dilemma is particularly stark because regulation and review have made calibration a bureaucratic exercise. Since March 2022, the Cyberspace Administration of China (CAC) has required major platforms whose recommendation systems have the capacity to affect “public opinion” or “social mobilization” to file key details and self-assessments in a national algorithm registry, and to update those filings when systems change.1 In August 2022, the CAC publicized an initial batch of filings from firms such as Alibaba, Tencent, and ByteDance. Similar rules for “deep-synthesis media” (AI-generated video and speech, for example) and for generative-AI services add identity verification, content-governance requirements, and human review of politically sensitive functions.2

The upshot is that the state does more than observe outcomes; it monitors AI settings: Major updates can trigger security reviews and administrative scrutiny. On paper, this looks like centralized control. In practice, it documents something more revealing — a continual struggle over how much error the regime can tolerate, and whether it is safer to punish too many innocents or to miss too many opponents.

This struggle involves Beijing, local officialdom, and PRC citizens. Local officials are evaluated based on stability-maintenance goals handed down by Beijing. These officials have strong incentives to head off “mass incidents” and petitions, and to game compliance metrics.3 Citizens adapt by wearing masks, using coded language, and moving from public posts to private chats. If the net is cast too broadly and backlash occurs, Beijing steps in to reassert control. It issues new guidance, centralizes authority, and launches damage-control efforts. The result is what we call “threshold whiplash”: cycles of tightening around politically sensitive moments, followed by abrupt loosening, cleanup, and propaganda repair.

The swift rollback of the PRC’s “zero covid” regime in late 2022 offers a vivid example. For nearly three years, local officials had rigidly enforced stringent lockdowns and other controls justified by claims of safeguarding public health. In November 2022, protests against these tight limits erupted nationwide. During these “White Paper” demonstrations, people held up blank placards, cleverly signaling dissent without putting any forbidden content into writing. This made the protests harder for the Chinese Communist Party (CCP) regime to suppress without incurring high costs.

Amid mounting economic and administrative strain, Beijing on December 7 announced the most sweeping easing of restrictions since the pandemic’s outset.4 State messaging then pivoted from “people’s war” toward reassurance and self-care, with senior officials stressing publicly that the severity of covid’s latest variant had “weakened.”5 Yet even as policy loosened, authorities moved to detain or pressure some of those who had taken part in the protests and also suppressed public discussion or memorialization of them. In short, the system tightened things politically while relaxing them administratively.6

This volatility exposes a critical limitation of the surveillance state. AI cannot solve the autocrat’s dilemma, because systems trained on a fearful populace learn from signals that are evasive, censored, and strategically manipulated. What AI does instead is bureaucratize uncertainty by moving the tradeoff into models, thresholds, filings, and security reviews, where it becomes unavoidable, politically consequential, and increasingly visible.

The Autocrat’s Calibration Dilemma

This relocation of uncertainty forces the regime to confront the specific operational costs of statistical error. Despite the promise of precision, AI-enabled repression remains bound by false predictions, and those mistakes carry distinct political penalties. On one side are false positives (Type I errors), where the system ensnares the innocent: a neighbor misidentified by facial recognition, a benign post flagged by filters, or routine movement misread as suspicious. These are not mere statistical annoyances. They create accumulating grievances, drown security services in noise, and teach citizens that compliance is no guarantee of safety. On the other side are false negatives (Type II errors), when the system misses real threats. As algorithms are tuned to reduce noise, genuine challengers slip through the cracks and networks learn to coordinate in ways that the model does not see. In that scenario, AI can give the autocratic regime a comforting but false sense of stability even as opposition activity moves into the shadows.

The dilemma is sharpened by three features of automation at scale. First is the autonomy trade-off. Granting an algorithm wide latitude maximizes speed and coverage, but it also magnifies the consequences of even small error rates. A model running nationwide can produce thousands of false alarms, turning marginal statistical error into a significant political liability. Limiting autonomy by requiring human verification avoids some of that overreach, but it creates a bureaucratic bottleneck. Alerts pile up, analysts triage endlessly, and the speed and scale that made AI attractive in the first place begin to evaporate.
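A back-of-envelope calculation shows how quickly human verification becomes the binding constraint. Every figure below is a hypothetical chosen for illustration, not data from any actual security service:

```python
# Hypothetical numbers only: alert volume vs. human review capacity.
daily_alerts = 10_000    # assumed alerts from a low-threshold nationwide model
analysts = 50            # assumed review staff
reviews_per_day = 80     # assumed per-analyst throughput

daily_capacity = analysts * reviews_per_day     # 4,000 reviews per day
backlog_growth = daily_alerts - daily_capacity  # 6,000 unreviewed per day

print(f"review capacity: {daily_capacity:,}/day")
print(f"backlog grows by {backlog_growth:,}/day; "
      f"after 30 days, {30 * backlog_growth:,} alerts wait unread")
```

Under these assumptions, either the threshold must rise to throttle the alert stream, restoring the blind-spot problem, or the backlog grows without bound.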

The second problematic feature of AI is algorithmic rigidity. Models are trained on historical data and assume that yesterday’s patterns will predict tomorrow’s threats. But under surveillance, the “ground truth” is constantly moving. Citizens adapt by inventing new slang, changing routes, switching platforms, or moving discussions into private channels. The result is rapid obsolescence: What the model learned last month becomes a poor guide to behavior today. The regime is pushed into continual retraining and recalibration, demanding a level of agility that large surveillance bureaucracies struggle to sustain.

Magnifying these risks is a third feature, the rare-events problem. In any society, active dissidents are scarce, a tiny fraction of the population. When an algorithm searches for them, even a highly accurate system will generate far more false alarms than true detections. If a surveillance model is 99 percent accurate but scans a population where only a single citizen in ten thousand is a genuine threat, the vast bulk of those flagged will be innocent. The result is a dangerous feedback loop in which security bureaus are inundated by false “hits,” legitimacy erodes through collateral repression, and real threats can be buried under a mountain of noise.
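Bayes’ rule makes the arithmetic explicit. Reading “99 percent accurate” as a 99 percent detection rate and a 1 percent false-alarm rate (an assumption; the figure could be parsed other ways), the probability that a flagged person is actually a threat is:

```latex
P(\text{threat}\mid\text{flag})
= \frac{P(\text{flag}\mid\text{threat})\,P(\text{threat})}
       {P(\text{flag}\mid\text{threat})\,P(\text{threat})
        + P(\text{flag}\mid\text{innocent})\,P(\text{innocent})}
= \frac{0.99 \times 0.0001}{0.99 \times 0.0001 + 0.01 \times 0.9999}
\approx 0.0098.
```

Roughly 99 of every 100 people flagged are innocent, and no gain in raw accuracy short of near perfection changes that ratio much while the base rate stays this low.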

This calibration dilemma sits within statecraft’s longtime quest to make societies legible. Modern states have long sought to compress complexity into categories that can be counted, compared, and governed. In the nineteenth century, Adolphe Quetelet promised a “social physics” that would render chaotic populations administrable through statistics.7 James C. Scott later warned that when this desire for legibility combines with “high modernism,” states impose simplified maps of social and political life that inevitably collide with local knowledge and human behavior.8

Digital authoritarianism is this high-modernist dream upgraded for the algorithmic age with continual measurement, real-time classification, and automated interventions. Yet, as Scott noted, the map is not the territory. The more a regime seeks to discern and then act on its citizens’ degrees of “loyalty” and “risk,” the more that regime will stimulate strategic adaptation in response. Citizens learn how to avoid being flagged by the system, and officials tweak the statistics to satisfy their supervisors. The surveillance state can become less informed even as it becomes more measured, trading genuine knowledge for the appearance of control. Nowhere is this trade-off more visible, or more volatile, than in the recent history of official PRC measures against the covid pandemic.

Threshold Whiplash

China has pursued the high-modernist ambition described by Scott with unusual intensity, embedding algorithmic tools into everyday governance and domestic security. The “Health Code” system used during the zero-covid period shows how threshold-based control works in practice. To manage public health at scale, authorities transformed a complex epidemiological reality into a simple color-coded status, typically green, yellow, or red. A change in status could determine whether someone could travel, enter public spaces, or even leave home. In effect, a digital cutoff became a trigger for coercion. And because the cutoff was rigid, the system was primed for whiplash.
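The logic can be sketched in a few lines. The actual Health Code rules were never published, so the cutoffs and function below are hypothetical; the point is only that a rigid numeric threshold turns marginal differences, or quiet changes to the threshold itself, into categorical changes in a person’s legal status.

```python
# Hypothetical sketch of threshold-based status control; the real
# Health Code logic was never made public, and these cutoffs are invented.
def color_code(risk: float, yellow_cutoff: float = 0.3,
               red_cutoff: float = 0.6) -> str:
    if risk >= red_cutoff:
        return "red"     # quarantine, travel ban
    if risk >= yellow_cutoff:
        return "yellow"  # restricted movement
    return "green"       # free movement

print(color_code(0.59), color_code(0.61))  # yellow red: near-identical risk
print(color_code(0.55), color_code(0.55, red_cutoff=0.5))  # moving the cutoff flips status
```

Two nearly identical risk estimates receive categorically different treatment, and whoever controls the cutoff controls the coercion it triggers.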

Even with substantial technical capacity behind it, including major platforms such as Alibaba, the Health Code regime proved socially brittle. Two failures bred grievances. First, high-sensitivity screening produced false positives. People were coded “red” and quarantined without clear explanation or workable appeal.9 Second, opacity created space for political manipulation, which in turn threatened legitimacy. The clearest instance occurred during the Henan Province rural-bank crisis in 2022.10 Facing a run on banks after a corruption scandal, local officials reportedly prevented depositors from traveling to protest by remotely turning their health codes red. That episode exposed the system not as a neutral arbiter of safety, but as a switch that could be flipped for political convenience.

Over time, the rigidity of AI-enforced control produced not stability but its opposite. Instead of silently chilling dissent, the accumulation of arbitrary constraints and unresolved grievances fueled public anger that spilled into open protest. In October 2022, banners appeared on Beijing’s Sitong Bridge with slogans directly attacking the surveillance state and the “dictator.” The following month came the White Paper protests in multiple cities, where demonstrators voiced political demands alongside anger at lockdown rules. The surveillance apparatus, tuned to identify individuals, struggled to contain a mass response driven by shared frustration. Faced with this systemic failure, the center executed a classic whiplash maneuver: Beijing abruptly loosened key restrictions, pivoted state messaging, and moved to suppress public memory of the protests while punishing some participants.

Nor is China the only case. Low-tech efforts to defeat surveillance have included Hong Kong protesters’ use of umbrellas, masks, and laser pointers to frustrate cameras. In Russia, resistance has taken both technical and legal forms. Some activists have used “dazzle” makeup to confuse systems while others have sought legal restitution after being flagged by algorithmic surveillance.11 After being identified by the Moscow subway’s surveillance system during a 2019 solo demonstration, activist Nikolay Glukhin took his case to the European Court of Human Rights (ECHR). In 2023, he won a landmark ruling that mass facial recognition without strict legal safeguards is a violation of privacy rights.12 Russia had ceased to be a party to the European Convention on Human Rights in September 2022, but the case remained justiciable (and set a precedent binding on member states) because the complaint had been lodged before that date.

In Iran, the Islamist regime reacted to the “Women, Life, Freedom” protests that began in 2022 by installing AI-driven street cameras to detect women defying the mandatory-hijab rules that require them to cover their hair and bodies. Even before the massive uprising of January 2026, this technology was failing to quash civil disobedience as women kept asserting their rights in public spaces.13 Nor could the cameras prevent millions from taking to the streets across all 31 provinces in the largest protests since 1979. The regime survived not through technological control but through extreme violence: live ammunition, foreign Shia militias, and a death toll that may have exceeded thirty thousand. In each case, AI pushed resistance to adapt but did not quell it.

To be clear, in certain settings authoritarian rule can be strengthened by AI. It can identify known individuals in controlled environments, filter content with predictable cues, and triage leads to streamline review by humans. Predictive, population-scale policing is another matter. AI systems can confirm who someone is when the state already has a suspect, but they are far less reliable when asked to sift through millions of ordinary citizens and flag “risky” people absent grounds for suspecting any particular individual. The difference between identifying someone whose image is already on file and making large-scale automated judgments about individuals who may not have any “priors” is a difference not of degree but of kind. This difference also defines the space in which the calibration dilemma becomes hardest for authorities to manage.

Why the Dilemma Persists Even with Better AI

Are these failures merely growing pains? Will future regimes, armed with ever more powerful computing and more sophisticated models, eventually solve the calibration problem? The answer is no. Gains in accuracy have limits imposed not by computing power but by the nature of prediction in a complex society. Even “better AI” cannot remove the tradeoff; it can only shift where it emerges. Three forces ensure that the dilemma will persist.

First, the rarity of dissent means that at a population level, floods of false alarms are guaranteed. Predictive policing systems are looking for needles in haystacks. When the base rate (the underlying prevalence) of the target behavior is extremely low, even a highly accurate model will flag far more innocent people than genuine threats. This pattern is sometimes described as the false-positive paradox. In the political context, false positives impose grave costs in the form of collateral repression and the reaction it brings. A system scanning a million people, of whom only ten are real threats, will, if 99 percent accurate, catch most of those ten, but it will also falsely flag roughly ten thousand innocents as “threat risks.” Authorities then must decide whether to ignore the alerts (which means the government is paying for a system it does not use) or chase them, using up scarce resources while alienating the public. No amount of processing power can change this arithmetic of rare events.
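The same numbers can be run straight through, again reading “99 percent accurate,” as an assumption, as a 99 percent detection rate and a 1 percent false-positive rate:

```python
# The paragraph's arithmetic made explicit; the accuracy split is assumed.
population = 1_000_000
true_threats = 10
sensitivity = 0.99  # assumed share of real threats that get flagged
fp_rate = 0.01      # assumed share of innocents wrongly flagged

caught = sensitivity * true_threats                  # ~9.9 of the 10 threats
false_flags = fp_rate * (population - true_threats)  # ~10,000 innocents
precision = caught / (caught + false_flags)          # ~0.001

print(f"threats caught: {caught:.1f}; innocents flagged: {false_flags:,.0f}")
print(f"share of flags that are genuine: {precision:.2%}")
```

Only about one flag in a thousand points at a real threat; the rest is collateral repression or wasted analyst time.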

Second, the deployment context does not stand still. Citizens adapt. Machine-learning models are built on the assumption that the future will resemble the past. They learn from historical data, but in politics, the “data” fight back. This is a version of Goodhart’s Law, which British economist Charles Goodhart formulated a half-century ago. His observation was that when a measure becomes a target, it stops being a good measure.14 For present purposes, this means that as soon as the authorities start relying on specific signals such as keywords, movement patterns, or social ties, people begin learning to avoid sending these signals. They use coded language, leave phones at home, or revert to analog communication.

Governments may still defeat such countermeasures, but the countermeasures make the task of surveillance harder. Detection and evasion become contenders in an iterative contest of measure and countermeasure. The authorities make a change, and people figure out how to hide from it or work around it. Then the authorities adapt, people respond again, and so on. The surveillance model is always being left behind by concept drift: trying to predict behavior that has already changed in response to the prediction itself.
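A toy model of our own construction, not drawn from any real system, shows why retraining cannot outrun this dynamic when adaptation outpaces the deployment cycle. Assume the detector keys on whatever signal dissidents used last period, and that half of those still emitting the signal switch away before the retrained model goes live:

```python
# Toy sketch of Goodhart's Law as concept drift; both numbers are assumptions.
coverage = 0.90  # share of targets the tracked signal identified when learned
decay = 0.5      # assumed share of signal users who switch away each period

for period in range(1, 7):
    coverage *= decay  # drift between training and deployment
    print(f"period {period}: model still sees {coverage:.0%} of targets")
```

Within half a dozen cycles the model “sees” almost none of the people it was built to find, even though each round of retraining was technically successful.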

The third and final force keeping the dilemma in place is the way authoritarianism’s political economy distorts inputs. AI is often sold as an objective-truth machine, but in a dictatorship, data is political capital. Lower-level officials know they have to produce numbers that tell the center they are competent. If top leaders demand “stability,” local authorities may suppress alerts or underreport incidents. Similarly, a demand from on high for “vigilance” may spur local officials to inflate threat indicators and thereby put on a show of loyalty and effort. Private vendors who sell the government “black-box” surveillance technology may add more distortion by overstating its reliability and downplaying the real margins of uncertainty that will place demands on human judgment. Having an automated system is no protection against the problem of “garbage in, garbage out,” and may even make it worse by making the garbage output look like hard, authoritative data. Seeing it, leaders will be tempted to mistake measurement for knowledge and think they are omniscient when they are far from being so.

The knowledge problem that has plagued authoritarian regimes for decades is not solved by AI but only masked. By layering probabilistic inference over strategic data and shifting social contexts, these systems can add a veneer of precision to what remains fundamentally uncertain. The dream of a “perfect” police state is not just politically costly, it is mathematically out of reach.

The Panopticon Bluff

If these constraints are real, why does AI still seem so powerful? AI is admittedly a potent tool for controlling a populace, but its abilities are exaggerated, for reasons both perceptual and political. First, it is important to distinguish between what AI can do and what people believe it can do. As suggested by the eighteenth-century English philosopher Jeremy Bentham’s idea for a roundhouse “panopticon” prison, in which a single guard can observe any inmate at any time, constant surveillance is less important than the belief that one might be watched at any moment.

When governments and technology firms portray AI as omniscient, possessing “intelligence” far beyond the capacity of humans, they help to cast a chill in which citizens comply not because they are being watched, but because they might be. Techno-optimists promise a “digital dividend” of growth and connectivity from AI. Whatever the truth of that promise, authoritarian regimes have reaped from AI a “dictator’s dividend” of fear-driven compliance that far exceeds their actual ability to monitor people. The vague, omnipresent threat of “the algorithm” induces citizens to self-censor, doing the regime’s work for it without a single line of code being executed.

But this dividend is paid for with hidden debt. The stability produced by a panoptic threat is fragile because it relies on what the economist Timur Kuran calls “preference falsification.”15 Citizens feign loyalty while privately harboring grievances, and the resulting measures of compliance can mislead rulers about the true level of discontent. The more the state advertises omniscience, the higher the political cost when that omniscience is revealed to be partial, manipulable, or outright false.

The balance of control is also a moving target, shifting as dissidents and the authorities continually adjust their tactics. At first, the state often has the edge. Advanced surveillance systems are costly and require levels of centralization and coordination that are the forte of governments. Yet as time goes by, citizens will learn how the systems work, develop countermeasures, and gain broader access to digital tools of their own.

This is where the panopticon bluff can collapse. When citizens witness the system failing, whether through the arbitrariness of a red health code in Henan or the visible limits of camera-based enforcement in Tehran, the mystique evaporates. The “all-seeing eye” is revealed to be a fallible machine that can be fooled by a mask or manipulated by a corrupt local official. Research on “algorithm aversion” suggests that once people observe an algorithm making mistakes, they can become less willing to defer to it, speeding up the bluff’s undoing.16 Once it becomes common knowledge that state surveillance can become clogged with “noise” crowding out “signal” and has gaps in its coverage, fear can quickly turn to anger. The regime is then left with a surveillance apparatus that is overloaded technically and defanged psychologically.

What Prodemocracy Actors Could Do

AI will reshape repression, but it will not deliver a seamless, all-seeing police state. This technology changes the terrain on which regimes and their opponents compete. Authoritarians may gain advantages in some settings, but those friendly to democracy can still adapt and push back. Three strategies stand out.

First, demystify AI publicly and repeatedly. The psychological power of AI often exceeds its technical capabilities. Clear, accessible explanations of what these systems can and cannot do can puncture the panopticon-like aura that regimes cultivate, weakening the chilling effects of “algorithmic omniscience.” Just as important, democracies and civil society groups should document and publicize failures of AI systems used by authoritarian regimes: arbitrary red health codes in Henan, mistaken identifications, and other visible breakdowns. Exposing the “wizard behind the curtain,” showing that the machine is often guessing, biased, or simply broken, reduces fear and encourages practical countermeasures. These targeted narratives can also embolden democrats by showing that AI-enabled control is neither seamless nor infallible.

Second, build democratic standards and defensive capacity. Democracies and allied civil society groups should push for clearer norms on permissible data access, transparency, and accountability in politically sensitive AI deployments, backed by monitoring, auditing, and inspection where feasible. They should also identify the “red lines” that define when AI is being abused: mass biometric identification without safeguards, political blacklists, and coercive scoring or “social-credit” systems. Crossing those lines should carry reputational costs imposed through legal and institutional channels. If there is access to courts, then strategic litigation may help. Rulings can impose strict safeguards on facial recognition and automated identification, and weaken the “scientific” veneer of algorithmic governance by reframing it as arbitrary persecution.

Initiatives to encourage the democratization of technology can also yield substantial benefits. This includes funding research on how to democratize AI, as well as secure civic technologies such as end-to-end encryption, circumvention tools, and privacy-preserving protocols. In too many places, repressive regimes have a monopoly on advanced capabilities. That should change. Finding ways to use advanced technologies, including AI, to promote rather than inhibit democracy can help civil society contest repression, strengthen organizational capacity, and realize the potential embedded in these tools.17

Lastly, AI-enabled repression should be brought within the scope of existing human-rights enforcement. International human-rights regimes often rely on observable violations to trigger international pressure. Yet AI is being used now for preventive repression: identifying, deterring, or incapacitating potential challengers before visible repression ever needs to be used against them. This “before the fact” quality strains conventional human-rights protocols. How rights abuses and repression are defined needs rethinking, and AI’s role in suppressing dissent needs proactive scrutiny. The world needs a sustained, institutionalized effort to monitor and report AI-enabled coercion, so that abuses enabled by digital surveillance and profiling can be “named and shamed” and suitable sanctions applied in response. Paying attention to where advanced microchips are being exported is a good idea too.

The High-Modernist Dream

With its formidable capabilities, AI surveillance tempts autocrats with the final high-modernist dream of a society that can be modeled, predicted, and managed like a simulation. The prospect of AI-powered digital authoritarianism has therefore alarmed democrats worldwide. Yet, as the experiences of China, Russia, and Iran suggest, the dream of predictive, population-scale control is partly a mirage. By replacing human judgment with rigid digital thresholds, regimes trade flexibility for brittleness. They may become better at identifying individual targets, even as they lose the capacity to sense broad social shifts before they erupt.

The limits of AI-enabled repression stem from both the statistical properties of prediction and the classic dictator’s dilemma. Because models are probabilistic and genuine threats are rare, adaptive, and strategically concealed, there is no “perfect” algorithm for preventive repression. Inevitable errors, especially false positives, carry political costs in the form of misdirected resources, mounting grievances, and eroded legitimacy.

At the same time, AI can intensify the dictator’s dilemma by amplifying bureaucratic incentives to game the system. When local officials are punished for “instability,” they have strong reasons to suppress bad news, inflate performance, and manage metrics before information reaches the center. Central leaders then receive metrics that signal order even as disorder grows. The more a regime governs by quantified risk scores, the closer it comes to using a map that no longer matches the terrain.

This is the core vulnerability of the automated police state. It can produce the appearance of precision while degrading the real feedback that rulers need. Democracies look messy — full of contestation, protest, and visible friction — but that messiness often functions as information. It exposes mistakes, reveals grievances, and creates channels for correction. Authoritarian AI, by contrast, seeks to suppress noise in favor of calibrated signals that can be filtered, massaged, and misread. The results may look neat but leave rulers dangerously insulated and prone to overconfidence.

Room for prodemocracy resistance remains even under AI-enabled repression, but translating AI’s inherent weaknesses into concrete democratic gains is a major challenge. A central obstacle is the imbalance of capabilities that favors regimes over dissidents and civic activists in repressive settings, driven largely by the former’s resource advantages. For now, the best strategic responses will be those noted above: demystifying what AI can and cannot do, advancing enforceable standards and norms for political uses of AI, investing in secure civic technologies, and strengthening transnational scrutiny of AI-enabled coercion.

That is modestly good news. To get a sense of the bad news, consider three trends that may make AI-enabled authoritarianism harder to stop. First, AI development and deployment are subject to no political consensus or robust governance and legal standards. The PRC and the United States, the two leading AI powers, approach the technology mostly through a competitive national-security lens. Neither has prioritized building a global framework that constrains coercive or repressive uses of AI. In Europe, the AI Act of 2024 (a regulation adopted by the EU that has the force of law in all its member states) is a notable step as a comprehensive attempt to regulate AI development and applications. Even so, meaningful restraint will require broader transnational collaboration and buy-in across states, firms, and civil societies.

Second, the world of artificial-intelligence development remains a small one, with much power concentrated in few hands. On the hardware side, just a handful of firms, most prominently NVIDIA, dominate the production of advanced chips and the surrounding software stack. On the model side, likewise, a tiny band of companies produce and control cutting-edge AI. As long as such tight concentration is the order of the day, efforts to democratize AI will suffer. Democracy-friendly civil society groups, for example, need more access to resources if they are to have any hope of narrowing the technological advantages that autocrats currently enjoy.

Third, the “open versus closed” AI landscape currently poses a practical dilemma for democracy’s friends. Many widely available open models are developed by firms based in authoritarian contexts, while U.S.-based leaders often provide the most capable systems through closed-source platforms. This is not to say that Chinese firms’ open models, such as DeepSeek or Qwen, cannot be used for democratic purposes. But when development processes are opaque, especially with respect to training data, fine-tuning, and embedded guardrails, it becomes hard to audit these systems for bias, hidden constraints, or other vulnerabilities. Closed models controlled by tech giants create different risks: Tools built on these platforms are vulnerable to sudden shifts in price or access, and forcing activists to send data to remote corporate servers endangers those operating under the eye of a repressive state.

Entering the AI era is neither a calamity nor a guarantee of democratic renewal. As with earlier technological breakthroughs, the use of AI for repression will be shaped by existing political institutions and power relations, even as it introduces new challenges and opportunities. For defenders of democracy, the task is not to be paralyzed by AI’s capabilities, but to understand how these tools operate in specific contexts and to develop effective countermeasures grounded in that understanding to protect civic space, reduce fear, and keep open the possibility of democratic contestation.

NOTES

1. Rogier Creemers, Graham Webster, and Helen Toner, “Translation: Internet Information Service Algorithmic Recommendation Management Provisions—Effective March 1, 2022,” DigiChina (Stanford University), 10 January 2022, https://digichina.stanford.edu/work/translation-internet-information-service-algorithmic-recommendation-management-provisions-effective-march-1-2022.

2. Josh Ye and Brenda Goh, “China Regulator Says Alibaba, Tencent Have Submitted App Algorithm Details,” Reuters, 15 August 2022, www.reuters.com/technology/china-regulator-says-alibaba-tencent-have-submitted-app-algorithm-details-2022-08-12; “Provisions on the Administration of Deep Synthesis Internet Information Services,” China Law Translate, 11 December 2022, www.chinalawtranslate.com/en/deep-synthesis.

3. Yuhua Wang and Carl Minzner, “The Rise of the Chinese Security State,” China Quarterly 222 (June 2015): 339–59.

4. Bernard Orr, “China’s Rigid Zero-COVID-19 Policy Starts to Thaw,” Reuters, 7 December 2022, www.reuters.com/world/china/chinas-rigid-zero-covid-19-policy-starts-thaw-2022-12-07.

5. Ryan Woo, “In COVID U-Turn, China’s Message to the People Shifts from War to Self-Care,” Reuters, 14 December 2022, www.reuters.com/world/china/covid-u-turn-chinas-message-people-shifts-war-self-care-2022-12-14.

6. Minxin Pei, “The Sudden End of Zero-Covid: An Investigation,” China Leadership Monitor 75 (Spring 2023), www.prcleader.org/post/the-sudden-end-of-zero-covid-an-investigation.

7. Kevin Donnelly, Adolphe Quetelet, Social Physics and the Average Men of Science, 1796–1874 (London: Routledge, 2015).

8. James C. Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (New Haven: Yale University Press, 2020).

9. Lu Keyan, “Hidden Worries Behind the Health Code: Users Quarantined Due to China Mobile Positioning Errors” [in Chinese], Jiemian News, 12 March 2020, https://finance.sina.cn/2020-03-13/detail-iimxyqvz9892557.d.html?vt=4.

10. Vincent Ni, “Protest in China over Frozen Bank Accounts Ends in Violence,” Guardian, 11 July 2022, www.theguardian.com/world/2022/jul/11/china-violent-clashes-at-protest-over-frozen-rural-bank-accounts.

11. “Moscow Activists Protest Widespread Facial Recognition with Face Paint,” Moscow Times, 7 February 2020, www.themoscowtimes.com/2020/02/07/moscow-activists-protest-widespread-facial-recognition-with-face-paint-a69205.

12. Jomart Joldoshev, “Glukhin v. Russia: The European Court of Human Rights’ First Step into the Age of AI Surveillance,” Federal Bar Association, 23 September 2025, www.fedbar.org/blog/glukhin-v-russia-the-european-court-of-human-rights-first-step-into-the-age-of-ai-surveillance.

13. Emily Blout, “Resisting Iran’s High-Tech War on Women Three Years After Mahsa Amini’s Death,” Stimson Center, 16 September 2025, www.stimson.org/2025/resisting-irans-high-tech-war-on-women-mahsa-amini.

14. Goodhart formally stated it as follows: “Whenever a government seeks to rely on a previously observed statistical regularity for control purposes, that regularity will collapse.” Charles A.E. Goodhart, “Problems of Monetary Management: The U.K. Experience,” Papers in Monetary Economics, vol. 1 (Sydney: Reserve Bank of Australia, 1975). The pithier and more famous formulation of the law (“when a measure becomes a target, it ceases to be a good measure”) comes from Marilyn Strathern, “‘Improving Ratings’: Audit in the British University System,” European Review 5 (October 1997): 308, https://archive.org/details/ImprovingRatingsAuditInTheBritishUniversitySystem/mode/2up.

15. Timur Kuran, Private Truths, Public Lies: The Social Consequences of Preference Falsification (Cambridge: Harvard University Press, 1995).

16. Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey, “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” Journal of Experimental Psychology: General 144 (February 2015): 114.

17. Erica Chenoweth, ed., “How AI Can Support Democracy Movements: Summary Report of a Research and Practice Workshop,” Ash Center for Democratic Governance and Innovation, Harvard Kennedy School, 2025.

Copyright © 2026 National Endowment for Democracy and Johns Hopkins University Press
