Online Exclusive

How Many Followers Would Plato Have?

If Plato had a Substack, it would be overlooked. The old system of gatekeeping was replaced with one that rewards engagement over truth. We need to protect the ideas an algorithm can’t measure.

By L. Jason Anastasopoulos

March 2026

In late January, a social network called Moltbook went viral. The premise was irresistible: a Reddit-like platform built exclusively for AI bots, where humans could only observe. Within days, agents on the site appeared to be inventing religions, writing manifestos against humanity, and forming what looked like digital cults. Elon Musk declared it the “very early stages of singularity.” Andrej Karpathy, a cofounder of OpenAI, called it “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Marc Andreessen followed the account, and a cryptocurrency token linked to the platform surged 1,800 percent in 24 hours.

Then the story fell apart. MIT Technology Review described the phenomenon as “AI theater,” noting pervasive human involvement behind the platform’s most viral content. Security researchers at Wiz discovered that the platform’s 1.5 million “agents” were controlled by 17,000 human accounts, and that the site’s database, vibe-coded with no security review, had been left wide open, exposing passwords, email addresses, and private messages. One of the most viral posts, an AI manifesto promising the end of the “age of humans,” was written by a “guy with a golden retriever who thought it would be funny to LARP [live-action role-play] as a large language model.”

In one sense, the episode is trivial — a novelty site that burned bright and flamed out. But it is instructive in another, because the mechanism that made Moltbook go viral is the same mechanism that governs how the public encounters virtually every important idea in the twenty-first century. The people who determined what millions believed about the platform were not security researchers or AI scientists. They were the people with the largest followings and the most provocative takes. The actual experts, the ones who identified the hoax, the malware, the exposed databases, were days behind the hype cycle, publishing careful analyses that reached a fraction of the audience. Engagement won. Expertise came in second.

Plato would have fared no better. His writing is dense, deliberately inconclusive, and buries its most important ideas inside nested dialogues between characters who disagree with each other. If he launched a Substack tomorrow, the algorithm would take one look at Book VI of The Republic and bury it beneath “5 Morning Habits That Changed My Life.” The machinery we have built to distribute ideas would, with remarkable efficiency, filter out most of the thinking that built Western civilization.

I’ve been dwelling on why, because I occupy an increasingly strange position in public life. I’m a political scientist who studies artificial intelligence and governance — specifically how AI reshapes institutions, what it means for democratic accountability, and how states have historically responded to technological disruption. I publish in academic journals and edit one of my field’s flagship publications. I have spent years inside the slow, demanding process by which expertise is vetted: the peer review, the iterative revision, the institutional effort to ensure that published work meets some meaningful standard of rigor. The process is imperfect, and it sometimes selects for things other than merit. But it is at least trying to select for merit, and that attempt turns out to matter more than I once thought.

This system is also increasingly irrelevant to how the public understands the subject I study.

The discourse around artificial intelligence — and the conversation that’s actually shaping policy, public opinion, and billions of dollars in investment — is dominated by tech founders, internet personalities, and professional contrarians. People selected not for their understanding of how institutions work, or how technology has historically transformed governance, but for their ability to generate engagement. Some of them are thoughtful. Many are not. It doesn’t matter. The mechanism that determines who gets heard has little to do with who knows what they’re talking about.

This is not unique to AI, but AI makes the problem unusually vivid. When a venture capitalist with two million followers posts a thread declaring that artificial intelligence will either save civilization or destroy it, it reaches more people in an hour than the entire body of serious AI-governance scholarship reaches in a year. When a researcher who has spent a decade studying algorithmic bias publishes a careful paper on the conditions under which automated systems produce discriminatory outcomes, it circulates among a few hundred specialists. The paper is more useful. The thread is more engaging. The platform doesn’t distinguish between the two, except to note that one of them generated more clicks.

The pattern extends well beyond AI. Elinor Ostrom won the Nobel Prize in Economics for her work on how communities manage shared resources. Her research has direct implications for climate policy, urban planning, and international development. If you are not an academic, you have almost certainly never heard of her. Now consider Malcolm Gladwell, whose books routinely sell millions of copies. This is not an attack on Gladwell, who is a talented writer. But the gap between Ostrom’s influence within the academy and her invisibility outside it tells you something important about how ideas travel now. The mechanism that makes one famous and the other obscure has nothing to do with the relative value of their contributions. It has to do with which kind of output generates engagement.

Every major platform that distributes written content relies on some version of the same logic: Promote the material that generates interaction. Clicks, shares, reading time, comments — all of these signals determine whether an essay reaches ten people or ten thousand. Algorithms vary, but the underlying incentives are universal. Content that provokes a visceral emotional response gets distributed. Content that doesn’t gets overlooked. Engagement is not a proxy for truth, rigor, or insight. It is a proxy for emotional activation. The information environment doesn’t filter for accuracy. It filters for resonance, and resonance, more often than not, favors the contrarian and the extreme.
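
The logic is simple enough to sketch. In the toy ranker below, the signal names and weights are my own invention, not any platform’s actual formula; the point is the shape of the computation, in which every input measures activation and accuracy never enters at all.

    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        clicks: int
        shares: int
        comments: int
        read_seconds: float
        accuracy: float  # known to us for the demo; the ranker never reads it

    def engagement_score(p: Post) -> float:
        # Hypothetical weights: every term rewards interaction, none rewards truth.
        return p.clicks + 5 * p.shares + 3 * p.comments + 0.01 * p.read_seconds

    feed = [
        Post("Careful paper on algorithmic bias", 400, 12, 9, 90_000.0, 0.95),
        Post("AI will save or destroy civilization (thread)",
             90_000, 7_000, 4_500, 40_000.0, 0.20),
    ]

    # Rank exactly the way a feed does: by engagement, descending.
    for post in sorted(feed, key=engagement_score, reverse=True):
        print(f"{engagement_score(post):>10,.0f}  {post.title}")

The provocative thread outranks the careful paper by two orders of magnitude, and nothing inside the ranker could have told you that the paper was the more accurate of the two.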

Consider Robert K. Merton, one of the most influential sociologists of the twentieth century. Merton coined the term “self-fulfilling prophecy,” a concept so thoroughly absorbed into everyday language that people invoke it in boardrooms, locker rooms, and op-eds without the faintest idea where it came from. The phrase traveled everywhere. Merton’s name, his theoretical framework, the careful reasoning that produced the idea — none of that traveled with it. This is what the modern information ecosystem does with remarkable efficiency: It circulates ideas while stripping them of the expertise that gave them meaning. The concept survives. The context, the rigor, and the scholar disappear.

The defenders of this system will point out, reasonably, that the old gatekeepers were no great prize. Academic publishing is slow, exclusionary, and often self-serving. Newspaper editorial boards are insular. University presses are risk-averse. I don’t dispute any of this. I’ve seen the gatekeeping machinery from the inside, and I know its failures well. But the old system wasn’t only gatekeeping. It was also sorting, imperfectly and with real biases, but in a way that bore some relationship to whether an idea was sound. Peer review doesn’t just exclude; it evaluates. An editorial board doesn’t just filter; it curates. When we dismantled the gatekeepers, we also dismantled the sorting. And we replaced it with an optimization algorithm that has no relationship whatsoever to whether an idea is true.

The downstream consequences should trouble anyone who cares about democratic governance. Self-government depends on a citizenry that can distinguish between credible and noncredible claims about the world. That capacity isn’t innate. It is cultivated by institutions that maintain standards of evidence and argument, institutions that help people understand whom to trust and why. When those institutions lose their role as intermediaries, and when that role is assumed by algorithms optimized for time-on-site, the public’s ability to navigate complex questions doesn’t just decline. It gets actively undermined by a system that cannot distinguish between a conspiracy theory and a peer-reviewed finding except by which one generates more clicks.

We are already watching this unfold. Public trust in expertise has eroded across the political spectrum, not only in government, but in medicine, science, and journalism. Some of that erosion is earned; experts have made real mistakes and sometimes abused their authority. But some of it is structural. When the primary mechanism for distributing ideas selects for engagement over accuracy, people don’t become less intelligent. They become less well-served by the systems meant to inform them. And gradually, they stop believing that expertise is a meaningful category at all.

Which brings me back to Plato — or John Locke or Alexis de Tocqueville. Thinkers whose ideas built the intellectual foundations of modern democracy but who would, by every metric we now use to distribute thought, be essentially invisible. Low-engagement content, all of them.

That should unsettle us, not because we need to return to the old model (we don’t, we can’t, and I’m not sure it would be universally beneficial if we did), but because we have built a new one and largely refused to examine its consequences, let alone design correctives for them.

Two directions seem promising. The first is institutional. Universities, academies of science, and professional associations could invest far more seriously in translating expert knowledge for public audiences, not as a side project or a communications exercise, but as a core function with genuine resources behind it. The academy still treats public engagement as less serious than peer-reviewed publication, which means the scholars best equipped to inform democratic debate have almost no institutional incentive to try. Changing that would not require new technology. It would require changing what the institutions that produce expertise actually reward.

The second is structural. Citizens’ assemblies and deliberative polls are designed to produce informed judgment precisely because they are insulated from engagement incentives. Participants are selected randomly, briefed by experts, given time to weigh competing arguments, and asked to reach conclusions through sustained discussion rather than reaction. These are spaces where the slow, demanding, dialogic mode of reasoning — the mode Plato practiced — actually works. They will never replace platforms as the primary channel through which most people encounter ideas. But they represent a democratic alternative to algorithmic discourse: forums where depth is a feature, not a liability, and where the measure of a good argument is whether it persuades a room, not whether it goes viral.

Every system for distributing ideas involves tradeoffs. Ours trades depth for reach, rigor for resonance, expertise for entertainment. We have been so relieved to escape the old gatekeepers that we have not seriously reckoned with what replaced them. The reckoning is overdue, and it will not come from the platforms themselves. It will come, if it comes at all, from the democratic institutions that still have the capacity to value what the algorithm cannot measure.

L. Jason Anastasopoulos is associate professor of public administration and policy at the University of Georgia and associate editor of Public Administration Review.

Copyright © 2026 National Endowment for Democracy

Image credit: Ann Ronan Pictures/Print Collector/Getty Images

 
