On AGI: Managing Innovation and Global Risks

Last week, the RAND Corporation released How Artificial General Intelligence Could Affect the Rise and Fall of Nations, a report that explores scenarios for how artificial general intelligence (AGI, AI with human-level capabilities) could redraw global power structures. The following quotes, drawn from the report's scenario analysis, speak to ongoing debates over AI's risks and opportunities:

  • On the Power of AGI: “The nation or entity that develops and controls such systems could fundamentally reshape the global order and potentially guide the future trajectory of humanity.”

  • On the Intelligence Explosion: “Many professionals in the industry speak of an impending ‘intelligence explosion’—a moment when AI leads to such significant productivity gains that innovation exponentially accelerates across many domains.”

  • On Centralization: “The degree of centralization stands as perhaps the most crucial factor in AGI development. Highly centralized development favors established powers with substantial resources; decentralized paths may empower multiple actors but increase proliferation risks.”

  • On US-China Competition: “U.S.-China technological competition features prominently across the scenarios. The dynamic between these powers—whether characterized by cooperation, competition, or conflict—shapes the trajectory of AGI development and deployment.”

  • On the Authoritarian Advantage Scenario: “As AGI systems are developed and deployed, they turn out to fundamentally favor authoritarian regimes because of their centralized control and ability to mitigate any adverse consequences associated with the rapid implementation of AGI.”

  • On the Chaotic Proliferation Scenario: “This spread of AGI systems results in a highly chaotic world... States find their resources increasingly stressed in the face of these challenges and are disempowered as AGI distributes power to a broader set of actors.”

  • On the Risk of an AGI Coup: “First, it assumes the legitimacy of the AI control problem: the idea that capable AIs can become goal-seeking in ways that are not in the interest of humanity … The resulting world is one in which an AGI-controlled coalition is the dominant geopolitical actor, while much of humanity struggles to deal with a world in which AGIs directly or indirectly determine much of global policy for AGIs’ own benefit.”

  • On the Risk of Preemptive War: “Such a scenario demonstrates how nations that risk falling behind in AGI development could take radical escalatory actions to prevent AGI leaders from attaining significant and potentially irreversible improvements in power, heightening the potential for a conflict that could spiral out of control.”

  • On the Policy Challenge: “Effective policy responses will require balancing technological competition with strategic stability while managing escalatory risks inherent in AGI development.”

OUR TAKE

  • AGI isn’t just a technology breakthrough; it’s a force that could drive a scientific renaissance, reshape global power, and transform how societies innovate and govern. RAND’s scenarios highlight constructive approaches as well as threats to avoid.
