The AI Democracy Dilemma

January 2026, Volume 37, Issue 1, pp. 32–44

Generative AI is poised to revolutionize citizen-initiated mechanisms of direct democracy. This article argues that AI functions as a powerful accelerator, lowering historical barriers and making these mechanisms cheaper, easier, and more frequent by automating law-drafting, optimizing mobilization, and enabling hyperpersonalized persuasion. This efficiency, however, threatens the core conditions of democratic legitimacy. By eroding deliberation, weakening civil society, and corroding trust through synthetic content, AI risks converting citizen-initiated mechanisms of direct democracy from essential “safety valves” into engines of plebiscitarian instability. This essay contrasts a dystopian future of automated plebiscites with a preferable path of augmented deliberation. To steer toward the latter, the essay proposes a governance roadmap of digital guardrails—including AI watermarking, public-interest AI platforms, and independent algorithmic audits—aimed at ensuring that AI augments rather than undermines democratic practice.

Democratic disruption by AI is no longer the stuff of science fiction. Recent electoral shocks around the world can be traced to AI-driven interference: In Slovakia’s 2023 parliamentary elections, a fabricated audio recording of a liberal candidate discussing vote-rigging circulated days before the vote, potentially swaying a razor-thin contest. In India’s 2024 general election, AI-generated persona bots and synthetic content were deployed at an unprecedented scale to shape narratives about candidates, parties, and policy issues. The 2016 Brexit vote and the 2022 Chilean constitutional plebiscite saw similar dynamics.1 These are not futuristic scenarios but early warnings of a rapidly emerging reality in which artificial intelligence is fundamentally reshaping democratic processes.

Now imagine the next phase. In the spring of 2029, a major European nation finds its online public sphere saturated with the hashtag #TakeBackOurCountry. The posts are compelling, personalized, and eerily authentic. They do not look like political ads; they appear to be heartfelt messages from neighbors, shared by friends. Yet behind them lies a sophisticated AI model, financed by opaque networks, that has identified and exploited a deep vein of public anxiety over a slowing economy and rising immigration. This is but one illustration of how AI systems can target and amplify social grievances.2

About the Author

David Altman is professor of political science at the Pontificia Universidad Católica de Chile, project manager for the V-Dem research collaborative, and director of the V-Dem Regional Center for Latin America. He is the author of Citizenship and Contemporary Direct Democracy (2019) and Direct Democracy Worldwide (2011).


Within weeks, the AI in this scenario has not only shaped public opinion but also channeled it into direct action, drafting legally airtight text for a citizen’s initiative proposing the mass deportation of undocumented migrants and a near-total moratorium on new asylum claims, perhaps called the “National Sovereignty Restoration Act.” The AI then orchestrates a flawless signature-gathering campaign, microtargeting sympathetic voters and deploying volunteers with real-time optimization. What once took activists years is now accomplished in days.

The initiative qualifies for the ballot. A democratic tool, designed to give people a voice, has now been weaponized. The ensuing campaign unfolds with nightmarish efficiency. Proponents deploy an army of AI-generated personas — concerned mothers, retired soldiers, unemployed factory workers — who debate in flawless local dialects across thousands of parallel online forums, creating an overwhelming illusion of popular consensus. Deepfake audio of a political leader supposedly endorsing the proposal circulates widely before it can be debunked, echoing earlier warnings about the destabilizing potential of synthetic media and “deepfakes” to erode trust and overwhelm fact-checking capacities.3 The opposition — human-rights groups, mainstream parties, and a weakened civil society — is drowned out. It is fighting a hydra of synthetic persuasion.

On election day, the measure passes with 58 percent of the vote. The system worked exactly as its designers intended: Citizens initiated a law, and citizens voted on it. But was this a triumph of direct democracy or its catastrophic failure? The result plunges the country into a constitutional crisis, triggers international condemnation, and sparks violent street protests. The “safety valve” of direct democracy, supercharged by AI, did not prevent an explosion — it caused one. This paradox reflects long-standing concerns that plebiscitary uses of direct democracy can destabilize democratic institutions rather than reinforce them.4

This fictional — but all too plausible — scenario illustrates the central democratic dilemma of the coming age. Generative artificial intelligence, hailed for its potential to revolutionize everything from healthcare to art, is poised to do the same to politics.5 Its most profound impact may fall not on representative elections but on the very mechanisms of citizen-initiated direct democracy, such as popular initiatives and abrogative referendums (that is, referendums on new or existing legislation). Scholars and practitioners have been debating for decades how to make democracies more responsive. A key finding in the literature — and in my own research — is that such citizen-initiated processes can function as crucial institutional “safety valves.”6 By offering formal channels to express grievances and influence policy between elections, these mechanisms can alleviate societal pressures that might otherwise erupt into large-scale unrest or fuel democratic backsliding.

This stabilizing effect, however, is not automatic. It is conditional, hinging on a vibrant civil society and a public sphere capable of informed deliberation.7 This is where AI changes the game entirely: Generative AI will act as a powerful accelerator for direct democracy, making its practice cheaper, easier, and more frequent than ever before, but also systematically undermining the civic foundations that make direct-democracy mechanisms both legitimate and stable. By flooding the public square with synthetic persuasion, fragmenting shared discourse, and overpowering traditional civil society, AI risks creating a system that is more plebiscitary than deliberative, more efficient than legitimate, and ultimately, more destabilizing than stabilizing.8

We are heading toward a future of more direct democracy, but of a poorer, more volatile kind that will ultimately lead to a legitimacy crisis. To avert a dystopian transformation of politics and society, we must first understand how AI changes the mechanics of direct democracy and then develop ways to harness the technology’s power to strengthen rather than hollow out democratic practice.

Engineering the People’s Will

A scenario like the one described above is not at all fantastical. It would take only a suite of AI tools to systematically dismantle the practical, legal, and cognitive barriers that have historically kept direct democracy a relatively measured, though potent, instrument. And indeed, a revolution in political participation is now underway as generative AI is increasingly deployed by political actors and advocacy groups in three fundamental political domains: creation, mobilization, and persuasion. By dissolving the filters that have historically limited citizen-initiated direct-democracy processes — namely, high legal and logistical thresholds9 — AI is not merely accelerating direct democracy; it is transforming the institutional conditions under which direct democracy operates, shifting these mechanisms from conditional “safety valves” to potentially destabilizing engines of plebiscitarian politics.

The first and most profound shift is in agenda-setting. Traditionally, launching a citizen initiative required not only public anger but also legal expertise to draft a proposal robust enough to survive judicial review. This was an important filter. No longer. Comparative research has long shown that legal and procedural hurdles — such as judicial review, ballot-access requirements, and technical drafting rules — functioned as de facto barriers, restricting effective participation to well-organized groups with access to professional legal expertise.10 In this sense, the cost of legal literacy itself acted as a gatekeeper.

Large language models can now ingest a country’s entire legal code, constitutional jurisprudence, and existing legislation to produce a perfectly coherent draft law or constitutional amendment in seconds. It takes only a simple prompt: “Draft a citizen’s initiative for [Country X] that restricts asylum applications to a maximum of 10,000 per year, ensuring the text is constitutionally compliant and includes necessary implementation clauses.” The result is a legally sophisticated text, free of the amateur errors that have often doomed grassroots efforts. The filter of expertise is gone; the barrier to entry has been lowered from a team of lawyers to a single motivated individual with a subscription to a premium AI service. This new reality aligns with what scholars describe as AI’s potential to “level the epistemic playing field” between elites and citizens11 as well as with the “logic of connective action,” whereby digital media lowers coordination costs by enabling personalized, networked mobilization.12 AI extends this logic further: Instead of merely reducing organizational barriers, it automates legal expertise itself.

Once a proposal exists, the next hurdle is qualifying for the ballot — a task of logistics and mobilization that requires collecting hundreds of thousands of verified signatures. This is where AI-driven microtargeting and automation create an insurmountable advantage. First, they offer precision voter targeting: AI algorithms can analyze vast datasets — from voter rolls and consumer habits to social-media activity — to identify the citizens most likely to support a cause. AI does more than just find likely supporters; it finds the most easily persuadable ones.13 Second, they enable optimized signature gathering: AI-powered campaign apps can deploy volunteers with surgical precision. Instead of sending canvassers to random street corners, these apps direct volunteers to specific neighborhoods, at optimal times, to approach residents with specific demographic profiles. The AI manages the entire field operation in real time, maximizing the efficiency of every volunteer hour — echoing the data-driven campaign techniques that emerged in the early twenty-first century.14

Third, from these efforts comes the astroturfing feedback loop: As the campaign gains signatures, the AI can generate a wave of synthetic online support — fabricating thousands of believable social-media profiles to share success stories, post videos (using deepfake technology) of “enthusiastic citizens” signing the petition, and create the undeniable feeling of a mass movement. This manufactured momentum often becomes real, as actual people are drawn to what appears to be a spontaneous groundswell of support. This is a sophisticated form of what is known as “AI swarming,” where coordinated synthetic agents can simulate grassroots consensus and manipulate public perception at scale.15 The traditional coalition-building of civil society — knocking on doors, holding town halls, building consensus — is outpaced by a digitally native, AI-driven operation that can simulate grassroots energy and then actualize it on a breathtaking scale.

Finally, AI transforms the campaign itself. The era of one-size-fits-all political messaging is over. Generative AI enables hyperpersonalized persuasion. During a referendum campaign, voters will not see the same few campaign ads as their fellow citizens. Instead, AI systems will generate a unique, persuasive narrative for each voter. A retiree worried about pensions might receive an AI-produced video warning that immigration threatens the sustainability of the social-security system. A young environmentalist might be shown a different clip linking overpopulation to ecological degradation. And a liberal voter might encounter content emphasizing the mistreatment of women and LGBTQ communities by radical conservatives or militant religious immigrants. Each narrative is tailored to the recipient’s personal values and latent biases, making it vastly more effective than any broadcast message. This hyperpersonalization, which replaces common debates with millions of private, manipulated conversations, poses a fundamental threat to the shared public discourse on which democracy depends.16

This is the core of the acceleration: AI makes direct democracy cheap, scalable, and brutally efficient. It dismantles the historical barriers to direct democracy — expertise, logistics, and persuasion — with terrifying elegance. The “safety valve” is now equipped with a high-pressure, AI-driven pump. The question is no longer whether citizens can launch initiatives, but whether the political system can withstand the torrent of proposals and the sophisticated campaigns that propel them.

The Legitimacy Crisis

The AI-driven acceleration of direct democracy is not a neutral upgrade. It is a fundamental transformation that, by solving the problem of logistical inefficiency, exacerbates a far more critical problem: that of democratic legitimacy. Existing scholarship shows that citizen-initiated mechanisms of direct democracy do not automatically enhance stability. Their beneficial function as a “safety valve” is conditional, hinging on a robust civic ecosystem characterized by informed deliberation, a strong civil society, and institutional trust. Generative AI systematically undermines each of these pillars.

The fragmentation of deliberation. Healthy direct democracy requires a public sphere where citizens encounter competing arguments, weigh evidence, and collectively reason their way toward a decision. AI shatters this model, essentially killing the shared forum. Hyperpersonalized persuasion, as described above, means that there is no longer a common text, shared set of facts, or coherent public debate. One voter is persuaded by a tailored argument about economic strain, for example, while the next is swayed by a different, AI-generated narrative about another topic, say, cultural preservation. These individuals are not participating in the same conversation but are instead being processed in parallel, isolated informational streams. This fragmentation is the antithesis of deliberation.17

Just as AI’s hyperpersonalized persuasion undermines shared conversations and real deliberation, AI tools that summarize complex legislation into simple paragraphs create a dangerous mirage of public understanding. Citizens may feel informed without grasping legal nuances, trade-offs, or unintended consequences. They end up voting on a simplified shadow of the proposal, mistaking accessibility for comprehension and eroding the quality of the popular verdict. In this new environment, the “will of the people” is not formed through public discourse but engineered via millions of private, algorithmic manipulations. The result is a plebiscite, not a deliberative decision.

The weakening of civil society. The stabilizing function of direct democracy is not automatic. Empirical evidence shows that citizen-driven ballot initiatives, referendums, and the like prevent democratic backsliding and social unrest — but only when embedded in a robust civic infrastructure. Civil society acts as the essential counterweight to keep direct-democracy mechanisms from being captured by elites or populist political factions. AI, however, creates a devastatingly asymmetric arms race that fundamentally undermines this balance. Traditional civil society organizations — unions, NGOs, community groups — operate on the slow, human-paced logic of building consensus, organizing meetings, and mobilizing through trusted networks. They are structurally ill-equipped to compete with an AI that can draft a law, launch a continentwide disinformation campaign, and generate a million persuasive messages in the time it takes a human organization to schedule a press conference.18

When the public square is flooded with AI-generated personas and content, the authentic voices of civil society are marginalized. Their legitimate concerns are lost in the noise, their credibility attacked by coordinated AI swarms. The mediating role of these organizations, crucial for channeling and refining public demands, is bypassed and rendered obsolete. In essence, AI empowers not only “the people,” but also political organizations and well-resourced interest groups best positioned to simulate the people, making it the ultimate tool for civic capture.

The erosion of trust. Finally, the pervasive use of AI eats away at the trust that legitimizes any democratic outcome. When a critical argument in a campaign originates from AI, whom do you hold accountable? When a law’s initial draft was written by a black-box algorithm, how can its intent be debated? These new dynamics introduce a profound degree of epistemic opacity into democratic practice, reinforcing what Stephan Grimmelikhuijsen and Albert Meijer identify as a central threat to legitimacy: diminished public scrutiny and the erosion of meaningful accountability.19 And even if a vote is technically free and fair, the fact that the winning side leveraged a sophisticated, possibly inscrutable, AI persuasion machine will forever cast doubt on the legitimacy of the outcome. Both the losing side and international observers may reasonably question whether they witnessed a genuine democratic expression or a technologically augmented coup.20

In short, the very factors that make AI such a powerful accelerator for direct democracy are the same ones that strip it of its legitimizing foundations. By fragmenting deliberation, overpowering civil society, and undermining trust, AI risks transforming direct-democracy initiatives from safety valves into incendiary devices.

Given AI’s logistical power and the risks it poses, the future is uncertain. The tension between AI-driven efficiencies and their potential to undermine democracy presents two starkly different paths for democracies. The choice between them is not technological but political, hinging on whether we design AI to replace the work of democracy or to augment it. The Table contrasts these two scenarios — the “permanent plebiscite” versus “augmented deliberation” — across key dimensions of democratic health.

Steven Levitsky and Daniel Ziblatt have warned that democracies can die “with the lights on” through the systematic use of formal powers.21 The logic of the “permanent plebiscite” facilitates precisely this route: the continuous submission of decisions to the immediate pulse of the people, without institutional counterweights. Hungary’s government-run Voks 2025 consultation on Ukraine’s accession to the EU — used to manufacture a national mandate for Prime Minister Viktor Orbán’s anti-accession stance — illustrates this plebiscitarian drift. The consultation was organized and executed in a matter of weeks, and though officially nonbinding, it lent popular legitimacy to the government’s veto position in Brussels. Critics faulted its weak safeguards, the absence of a meaningful cooling-off period, and the lack of deliberation.22

There is a better path, though it is one of greater resistance. “Augmented deliberation” — or AI-driven democracy — demands proactive governance, sustained public investment, and strong commitment to democratic principles. Moreover, steering political systems toward this preferable future will require institutional and technological guardrails.

A Realist’s Roadmap for AI-Driven Democracy

How do we design systems that safeguard the integrity of public discourse without overloading everyday citizens already stretched thin by the demands of daily life? The goal is frictionless trust — individuals should be able to engage confidently with political content without possessing any technical expertise to assess its authenticity. To make this possible, we must relocate the burden of verification from individuals to institutional and technological infrastructures.

The following proposals, designed with real people living under the pressures of the attention economy in mind, are premised on neither the naïve expectation of hyperinformed “supercitizens” nor a misguided faith in purely technical solutions to democratic problems. A healthy AI-driven democracy must instead be rooted in institutional and technological ecosystems that realistically accommodate human cognitive limits while still safeguarding the foundations of collective self-government. This approach draws on — and seeks to operationalize — the calls made by the OECD, the EU’s High-Level Expert Group on Artificial Intelligence, and UNESCO for a “trustworthy AI” framework that aligns technological development with democratic values, emphasizing transparency, accountability, and inclusive governance.23

Legislative and regulatory guardrails. First, we must require by law that all AI-generated political content, including images, video, audio, and bulk text, carry a clear, machine-readable watermark. Social-media platforms and news aggregators would then be required to prominently display a label — “Synthetically Generated,” for example — on such content. These measures would ensure that citizens do not need forensic expertise to assess authenticity; the origin of the material would be immediately evident, akin to nutritional labeling on food. Such labeling would be a foundational step toward algorithmic transparency and meaningful public oversight.24
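To make the mechanism concrete, the sketch below shows how a platform's rendering pipeline might translate machine-readable provenance metadata into the proposed label. It is a minimal illustration under stated assumptions: the metadata fields, the ContentItem structure, and the fallback "Provenance Unknown" notice are hypothetical, loosely inspired by content-provenance efforts such as C2PA rather than specified in this article.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentItem:
    """A piece of political content with (hypothetical) provenance metadata."""
    body: str
    provenance: dict  # e.g. {"generator": "model-x", "ai_generated": True}

def disclosure_label(item: ContentItem) -> Optional[str]:
    """Map provenance metadata to the label a platform should display.

    Hypothetical policy: any machine-readable flag marking the item as
    AI-generated triggers the prominent "Synthetically Generated" label;
    political content carrying no provenance at all gets a softer notice,
    since a watermark may have been stripped.
    """
    if item.provenance.get("ai_generated"):
        return "Synthetically Generated"
    if "ai_generated" not in item.provenance:
        return "Provenance Unknown"
    return None  # verifiably human-made content needs no label

# A rendering pipeline would attach the label before display:
post = ContentItem(body="Vote yes on the initiative!", provenance={"ai_generated": True})
print(disclosure_label(post))  # -> Synthetically Generated
```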

Yet watermarking is not a panacea: Emerging AI models can already erase or alter digital signatures with minimal traceability, underscoring the need for multilayered safeguards — legal, technical, and institutional — rather than reliance on any single line of defense. Even watermarking and provenance tracking may fall short once entire platforms are flooded with synthetic content. In a “dark forest” scenario, the challenge is no longer spotting the fake but discerning whether anything is still human.25 This metaphor captures the deeper epistemic risk of an information ecosystem saturated by synthetic content: When authenticity itself becomes unprovable, the very conditions for democratic trust begin to erode.

Enforced “cooling-off” periods. To counter AI-driven viral surges, we should legally mandate a deliberative buffer period between when a citizen initiative qualifies for the ballot and when it is voted on. This time would be reserved for structured, moderated public debate, enabling countermessaging to emerge and reasoned analysis to circulate in the public sphere. Such measures would reduce the risk of “instant plebiscites” — such as Qatar’s November 2024 constitutional referendum, in which citizens approved the elimination of legislative elections for the Shura Council — and restore temporal space for deliberation. Only a few weeks elapsed between the Qatari emir’s announcement of the ballot (in effect, a vote to stop voting) and the vote itself, allowing no meaningful cooling-off period or public deliberation. The reform was approved by 90.6 percent of voters.26
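As a minimal sketch of how an e-initiative platform might enforce such a buffer, consider the date check below; the 90-day minimum and the function names are illustrative assumptions, since the essay prescribes no specific duration.

```python
from datetime import date, timedelta

COOLING_OFF_DAYS = 90  # illustrative statutory minimum; the essay fixes no number

def earliest_vote_date(qualification_date: date) -> date:
    """Earliest date an initiative may be put to a vote after qualifying."""
    return qualification_date + timedelta(days=COOLING_OFF_DAYS)

def vote_date_is_lawful(qualification_date: date, proposed_vote: date) -> bool:
    """Reject 'instant plebiscites' scheduled inside the deliberative buffer."""
    return proposed_vote >= earliest_vote_date(qualification_date)

# An initiative qualifying on 1 March 2029 could not be voted on in mid-April:
print(vote_date_is_lawful(date(2029, 3, 1), date(2029, 4, 15)))  # False
print(vote_date_is_lawful(date(2029, 3, 1), date(2029, 6, 10)))  # True
```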

Strict liability for AI-powered astroturfing. The use of AI to carry out large-scale, coordinated sham-grassroots campaigns should be treated as a serious electoral offense. Entities that fund or deploy AI swarms to manipulate opinion must face strict financial and legal sanctions, deterring would-be orchestrators and their sponsors. Writing and passing legislation, however, typically happens at a glacial pace, especially compared to the speed of technological innovation. Take Chile’s 1993 Cybercrime Law: It was not until 2022 that the law was updated to include offenses such as phishing, data interception, and digital fraud. Even so, this long-awaited reform remains largely silent on AI-driven manipulation, coordinated disinformation, and algorithmic accountability.

As John Wihbey argues, democracies suffer from a structural mismatch between the velocity of technological change and the slow rhythm of institutional adaptation.27 Thus lawmakers are often legislating for yesterday’s technology while new forms of algorithmic manipulation are already emerging. The result is a legal architecture forever chasing a technological frontier that keeps racing ahead.

Institutional and infrastructural reinforcements. Rather than expecting NGOs and journalists to fend for themselves, publicly funded or independently supported AI tools should be developed for civic use — what I call public-interest AI armories for civil society. Imagine a trusted platform that offers: 1) one-click fact-checking APIs (application programming interfaces), which are simple software tools that allow news organizations and civic groups to instantly verify the authenticity of claims, images, and viral content with a single request; and 2) automated legislative analysis in the form of plain-language summaries of the pros, cons, and fiscal effects of ballot measures. Such tools “arm” civil society with the ability to counter digital manipulation without requiring every volunteer to be a technical expert.
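A one-click verification call from such a platform might look like the following sketch. The endpoint, payload schema, and response fields are hypothetical stand-ins for whatever a real public-interest service would expose; only the standard-library calls are real.

```python
import json
from typing import Optional
from urllib import request

# Hypothetical public-interest verification service; not a real endpoint.
FACTCHECK_ENDPOINT = "https://civic-armory.example.org/v1/verify"

def one_click_verify(claim: str, source_url: Optional[str] = None) -> dict:
    """Submit a claim to the (hypothetical) fact-checking API in one request.

    The response schema, e.g. {"status": "unverified", "summary": "...",
    "evidence": [...]}, is an illustrative assumption.
    """
    payload = json.dumps({"claim": claim, "source_url": source_url}).encode("utf-8")
    req = request.Request(
        FACTCHECK_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# A newsroom plugin might call this on every viral political claim:
# verdict = one_click_verify("The ballot measure abolishes pensions.")
# print(verdict["status"], verdict["summary"])
```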

Other important reforms include establishing “algorithmic ombudsmen,” requiring independent audits of AI systems, and encouraging digital platforms to take steps to slow or limit the spread of false information. Independent public bodies with the authority and technical capacity to audit AI systems used in political campaigns and public administration would function as the digital equivalent of an electoral commission — a trusted third party ensuring compliance with democratic rules. Major digital platforms’ introduction of design features to slow the spread of unverified political content, meanwhile, would make for better-informed voters and reduce the frenzy around campaigns. An example would be user prompts about the validity of political information — brief pop-ups on websites, social-media platforms, or mobile apps that say, “This claim about a ballot measure is unverified. Would you like to see a summary of arguments from both sides before sharing?” Such prompts create moments of reflection without imposing heavy demands on users.
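The sketch below illustrates how such a friction prompt could sit in a platform's share flow; the function names and the console stand-in for the pop-up are illustrative assumptions rather than any platform's actual design.

```python
from typing import Callable

SHARE_PROMPT = (
    "This claim about a ballot measure is unverified. "
    "Would you like to see a summary of arguments from both sides before sharing?"
)

def share_flow(post_text: str, claim_is_verified: bool,
               ask_user: Callable[[str], bool]) -> bool:
    """Insert a moment of reflection before an unverified claim is shared.

    `ask_user` abstracts the platform's UI (a pop-up, an interstitial) and
    returns True if the user still chooses to share.
    """
    if claim_is_verified:
        return True  # verified content shares without friction
    return ask_user(SHARE_PROMPT)

# Console stand-in for the pop-up:
shared = share_flow(
    "The initiative will bankrupt the pension system!",
    claim_is_verified=False,
    ask_user=lambda msg: input(msg + " [share anyway? y/N] ").strip().lower() == "y",
)
print("shared" if shared else "held back")
```

The design choice matters: the friction applies only to unverified political claims, so routine sharing stays effortless while the costliest misinformation pathway gains a deliberate pause.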

This roadmap acknowledges a basic truth: Systemic, technology-driven problems cannot be addressed with individual, analog solutions. The answer is not to lecture citizens about media literacy, but to construct a digital environment in which truth has a genuine chance to prevail. The objective is to create a democracy in which participation remains accessible and decisions are informed — not by the superhuman effort of citizens, but by the intelligent architecture of institutions.

Reclaiming the Human Core of Democracy

The emergence of generative AI marks a critical juncture in the evolution of democratic governance, with the mechanisms of direct citizen participation affected most profoundly. The seductive promise of AI is a democracy that appears more responsive and efficient — a system in which the people’s will can be translated into law with unprecedented speed and scale. Yet this promise contains a fundamental threat. By supercharging the tools of direct democracy while simultaneously corroding the foundations of informed deliberation, strong civil society, and institutional trust, AI risks producing a system that is all power and no legitimacy.

Thus the greatest danger is not that AI will replace representative democracy, but that it will generate a distorted and destabilizing form of direct democracy. In this scenario, ballot initiatives, referendums, and the like, stripped of their conditional safeguards, become engines of polarization and plebiscitarian rule. This is not an argument against direct democracy, but rather a warning — grounded in empirical evidence — about the specific conditions under which it fails.

The alternative path of “augmented deliberation” is not a technological fantasy but a political project. It requires shifting the focus from using AI to count preferences quickly to using it to improve the quality of public reasoning. The guardrails proposed here are designed for real people, not idealized citizens, with the goal of constructing systems that foster trust rather than demanding constant vigilance from an already fatigued public.

Ultimately, the AI dilemma forces us to confront the most human questions at the heart of democratic government. Will we use artificial intelligence to strengthen democratic judgment — augmenting human deliberation, transparency, and collective self-rule — or will we drift toward an AI-driven political system in which approval is manufactured, public debate is hollowed out, and citizens have no meaningful voice? The former is not an impossible goal demanding universal technical expertise, though it will require a pragmatic shift in mindset from political nihilism to healthy skepticism. Clear labels, trusted intermediaries, and transparent systems would make that skepticism practicable; citizens would not need to be detectives, because built-in verification would let them ask: “Who is behind this? Is this synthetic? What do neutral sources say?” In this scenario, verification becomes a manageable practice rather than a source of paralyzing distrust. If we reject this path, however, and opt instead to passively hand the reins to AI, then nihilism and dystopia lie ahead — societies where citizens, unable to distinguish truth from fiction, disengage from politics entirely.

Technology does not resolve these dilemmas; it only amplifies their urgency. We must therefore harness the power of artificial intelligence to reaffirm and strengthen the uniquely human capacity for collective self-governance. The future of democracy depends on the wisdom of people to set good rules and successfully steer our collective mindset from nihilism to empowered, sustainable skepticism. Democracy will not be destroyed or saved by algorithms — it will be shaped by how we govern them.

NOTES

1. Morgan Meaker, “Slovakia’s Election Deepfakes Show AI Is a Danger to Democracy,” Wired, 3 October 2023; Aditya Kalra, Munsif Vengattil, and Dhwani Pandya, “Deepfakes of Bollywood Stars Spark Worries of AI Meddling in India Election,” Reuters, 22 April 2024, https://www.reuters.com/world/india/deepfakes-bollywood-stars-spark-worries-ai-meddling-india-election-2024-04-22/; “Desinformación en el plebiscito: el vacío legal que dejó a 202 denuncias ante el Servel sin ser investigadas ni sancionadas” [Disinformation in the plebiscite: the legal vacuum that left 202 complaints before the Servel uninvestigated and unsanctioned], CIPER podcast, 21 November 2022, https://www.ciperchile.cl/2022/11/21/podcast-desinformacion-en-el-plebiscito-el-vacio-legal-que-dejo-a-202-denuncias-ante-el-servel-sin-ser-investigadas-ni-sancionadas/; and U.K. House of Commons Digital, Culture, Media, and Sport Committee, “Disinformation and ‘Fake News’: Final Report,” Eighth Report of Session 2017–19, https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf.

2. Sarah Kreps and Doug Kriner, “How AI Threatens Democracy,” Journal of Democracy 34 (October 2023): 122–31; Emilio Ferrara, “Charting the Landscape of Nefarious Uses of Generative Artificial Intelligence for Online Election Interference,” arXiv preprint 2406.01862 (2024); Daniel Thilo Schroeder et al., “How Malicious AI Swarms Can Threaten Democracy,” arXiv preprint 2506.06299 (2025).

3. Robert Chesney and Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review 107 (2018); Cristian Vaccari and Andrew Chadwick, “Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News,” Social Media + Society 6, no. 1 (2020); Meaker, “Slovakia’s Election Deepfakes Show AI Is a Danger to Democracy.”

4. Hélène Landemore, Open Democracy: Reinventing Popular Rule for the Twenty-First Century (Princeton: Princeton University Press, 2020); Nadia Urbinati, Democracy Disfigured: Opinion, Truth, and the People (Cambridge: Harvard University Press, 2014).

5. The debate around AI often falls into two opposing camps: the optimists, who view technological acceleration as a pathway to human progress (see, for example, Kevin Kelly, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future [New York: Viking, 2016]); and the pessimists, who see it as a force of alienation, surveillance, and cognitive decay (see Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains [New York: W. W. Norton, 2011]). Here, I am trying to plot a middle ground, recognizing AI’s extraordinary potential to enhance participation and deliberation, while warning that without institutional guardrails it risks amplifying precisely the pathologies it promises to cure.

6. David Altman, “Decompressing to Prevent Unrest: Political Participation through Citizen-Initiated Mechanisms of Direct Democracy,” Social Movement Studies (forthcoming).

7. John Parkinson and Jane Mansbridge, eds., Deliberative Systems: Deliberative Democracy at the Large Scale (Cambridge: Cambridge University Press, 2012); David Altman, “Citizen-Initiated Direct Democracy and Democratic Backsliding: A Conditional Institutional Theory” (unpubl. ms., 2025).

8. Kreps and Kriner, “How AI Threatens Democracy”; Stephan Grimmelikhuijsen and Albert Meijer, “Legitimacy of Algorithmic Decision-Making: Six Threats and the Need for a Calibrated Institutional Response,” Perspectives on Public Management and Governance 5, no. 3 (2022): 232–42.

9. John G. Matsusaka, “Direct Democracy Works,” Journal of Economic Perspectives 19, no. 2 (2005): 185–206; Shaun Bowler and Todd Donovan, Demanding Choices: Opinion, Voting, and Direct Democracy (Ann Arbor: University of Michigan Press, 1998); David Altman, Citizenship and Contemporary Direct Democracy (New York: Cambridge University Press, 2019).

10. U.S. Supreme Court, “Buckley v. American Constitutional Law Foundation,” 525 U.S. (1999); Raluca Onufreiciuc and Oana Olariu, “Citizen Law Making vs. Legal Illiteracy,” Logos Universality Mentality Education Novelty: Law 7, no. 2 (2019): 1–9.

11. Christopher Summerfield et al., “How Will Advanced AI Systems Impact Democracy?,” arXiv preprint 2409.06729 (2024).

12. W. Lance Bennett and Alexandra Segerberg, “The Logic of Connective Action: Digital Media and the Personalization of Contentious Politics,” Information, Communication & Society 15, no. 5 (2012): 739–68.

13. Daniel Kreiss and Shannon C. McGregor, “Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google with Campaigns During the 2016 U.S. Presidential Cycle,” Political Communication 35, no. 2 (2018): 155–77.

14. Eitan D. Hersh, Hacking the Electorate: How Campaigns Perceive Voters (New York: Cambridge University Press, 2015).

15. Schroeder et al., “How Malicious AI Swarms Can Threaten Democracy.”

16. Kreps and Kriner, “How AI Threatens Democracy”; Frederik J. Zuiderveen Borgesius et al., “Online Political Microtargeting: Promises and Threats for Democracy,” Utrecht Law Review 14, no. 1 (2018): 82–96.

17. Borgesius et al., “Online Political Microtargeting: Promises and Threats for Democracy.”

18. Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest (New Haven: Yale University Press, 2017).

19. Steve McKinlay, “Trust and Algorithmic Opacity,” in Kevin Macnish and Jai Galliott, eds., Big Data and Democracy (Edinburgh: Edinburgh University Press, 2020). See Grimmelikhuijsen and Meijer, “Legitimacy of Algorithmic Decision-Making.” Chile’s experience is illustrative: At least 117 algorithmic systems — some using AI and most using personal data — are currently deployed across public agencies in the country, including those that decide who receives priority healthcare, who fills the last available seat in a public school, and which taxpayer or company should be inspected (see Repositorio de Algoritmos Públicos GobLab UAI, “Repositorio de Algoritmos Públicos: Informe Anual 2025,” Escuela de Gobierno, Universidad Adolfo Ibáñez, 2025).

20. W. Lance Bennett and Steven Livingston, “The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions,” European Journal of Communication 33, no. 2 (2018): 122–39.

21. Steven Levitsky and Daniel Ziblatt, How Democracies Die (New York: Crown, 2018).

22. See Keno Verseck, “Orban to Continue Anti-Ukrainian Course After ‘Referendum,’” Deutsche Welle, 27 June 2025, https://www.dw.com/en/orban-to-continue-anti-ukrainian-course-after-referendum/a-73063866.

23. Oier Mentxaka et al., “Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks,” arXiv preprint 2505.13565 (2025); Levitsky and Ziblatt, How Democracies Die; Organisation for Economic Co-operation and Development, OECD Principles on Artificial Intelligence (Paris: OECD, 2019); European Commission, High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI (Brussels: European Commission, 2019); UNESCO, Recommendation on the Ethics of Artificial Intelligence (Paris: UNESCO, 2021).

24. Matías Valderrama, María Paz Hermosilla, and Romina Garrido, “State of the Evidence: Algorithmic Transparency,” GobLab UAI, May 2023.

25. Maggie Appleton, “The Dark Forest and Generative AI,” https://maggieappleton.com/ai-dark-forest/, 2023.

26. See “Qatar Passes Referendum, Replaces Shura Council Elections with Appointments, Interior Minister Says,” Reuters, 5 November 2024, https://www.reuters.com/world/middle-east/qatar-passes-referendum-replaces-shura-council-elections-with-appointments-state-2024-11-05/.

27. John P. Wihbey, “AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?,” in Democracy’s Mega Challenges: How Climate Change, Migration, and Big Data Threaten the Future of Liberal Democratic Governance (Hartford: Trinity College, 2024).

 

Copyright © 2026 National Endowment for Democracy and Johns Hopkins University Press

Image Credit: Greggory DiSalvo via Getty Images