The Danger of Runaway AI

October 2023 | Volume 34, Issue 4 | pp. 132–40


We must reduce harms from current AI systems while also looking ahead to harms that may arrive soon. Experts worry that runaway AI could cause extreme harm within the next five to twenty years. The risk is that we develop superhuman AI systems that surpass humans in domains such as persuasion, strategy, hacking, and research and development; that we design these systems to pursue goals autonomously; that we inadvertently give them unintended goals; and that humans then lose control of them. Without regulation, the actions of a small number of elite AI developers could impose massive risks on the rest of society. This risk is not specific to any particular deployment context but is inherent to the technology itself. So, in addition to regulating specific AI products, we should regulate the development of frontier AI systems: develop safety standards and empower a regulatory authority to enforce them. These regulations would apply only to a small number of frontier AI developers. The risk from runaway AI could emerge very suddenly, especially if advanced AI has itself accelerated the pace of AI progress; if we wait to see the problem before responding, regulations may come into force too late. We should therefore regulate proactively, requiring a government license for frontier AI developers.

About the Author

Tom Davidson is senior research analyst at Open Philanthropy, focusing on the potential risks from advanced AI. He has authored numerous reports on the subject, including “What a Compute-Centric Framework Says About AI Takeoff Speeds” (2023).
