The Limits of Authoritarian AI

Issue Date: April 2026
Volume 37, Issue 2
Pages 5-17


AI is often portrayed as a frictionless accelerator of authoritarian control. In reality, AI systems force rulers into an unavoidable calibration dilemma. Any predictive system requires a decision threshold: lowering it creates backlash through collateral repression (false positives), while raising it creates blind spots for genuine threats (false negatives). This structural volatility produces “threshold whiplash”—cycles of tightening and abrupt loosening—exemplified by China. Far from a silver bullet, AI bureaucratizes uncertainty, compelling autocrats to choose which vulnerability to expose. Prodemocracy actors can exploit these vulnerabilities by demystifying algorithmic power, establishing protective norms, and challenging the “panopticon bluff” through strategic resistance.

About the Authors

L. Jason Anastasopoulos

Jason Anastasopoulos is associate professor of public administration and policy at the University of Georgia and associate editor of Public Administration Review.


Jie (Jason) Lian

Jie (Jason) Lian is a postdoctoral research fellow at the Nonviolent Action Lab and a visiting fellow at the Ash Center for Democratic Governance and Innovation at Harvard University.


Image credit: Costfoto/NurPhoto via Getty Images