Publisher: De Gruyter

Available: https://doi.org/10.1515/9783111674995

Human history has been shaped by forces it barely understood. Artificial General Intelligence belongs to this lineage, but it differs in one unsettling way: it does not merely extend human power; it mirrors human cognition itself. This book, ‘Global Governance of the Transition to Artificial General Intelligence’ by Jerome C. Glenn, confronts that reality with sobriety. It does not ask whether AGI will arrive, but whether humanity will be ready when it does.

Clarity is the first requirement of responsibility. The book begins there. By distinguishing artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI), it removes the haze that often clouds public debate. ANI is already embedded in daily life: diagnosing illness, driving vehicles, generating language. AGI, still absent but plausibly imminent, would learn across domains, rewrite its own code, and act autonomously in novel situations. ASI, more speculative yet no less consequential, would exceed not just individual human intelligence but the combined cognitive capacity of humanity itself.

This progression matters because governance is always a matter of timing. The book’s most effective metaphor—ANI as childhood, AGI as adolescence, ASI as adulthood—captures a simple truth: character is shaped during transition, not after independence is complete. If humanity waits to govern intelligence until it is fully autonomous, governance will be reduced to regret. Responsibility lies not in domination, but in guidance at the moment when guidance is still possible.

What distinguishes this work from alarmist writing is its refusal to indulge either fear or a faith in technological inevitability. The danger is not AGI alone, but ungoverned proliferation: many AGIs, created by competing states and corporations, rewriting themselves, interacting, and evolving beyond human comprehension. In such a world, human civilization would not necessarily collapse in flames; it would simply lose relevance. Displacement, the book reminds us, does not require destruction.

Yet catastrophe is not preordained. If managed wisely, AGI could transform medicine, education, climate mitigation, scientific discovery, and even peacebuilding. The moral problem, therefore, is not whether AGI is inherently good or bad, but whether humanity can coordinate restraint in a world structured by rivalry.

This question becomes sharper when viewed through the lens of contemporary geopolitics. As recent analysis in The Diplomat has noted, the United States and China—despite strategic competition—share remarkably similar concerns about advanced AI. Both fear loss of control, unintended escalation, malicious misuse, and systems whose behavior may exceed human prediction. These shared anxieties matter. They suggest that global governance of AGI is not an abstract ideal, but a practical necessity rooted in mutual vulnerability.

The proposed path forward is notably modest: cooperation on safety protocols, shared approaches to testing and evaluation, and mechanisms to verify claims about AI capabilities. These measures do not require trust in intentions or the surrender of strategic advantage. They require only recognition of shared risk. Intelligence that learns without borders cannot be governed within them. Treating AGI purely as a zero-sum race risks leaving all actors improvising in the dark, reacting rather than governing.

Equally important—and often overlooked—is the human condition in which this transition is unfolding. Long before AGI arrives, societies are already struggling to govern the information environment shaped by today’s digital systems. The new global information revolution has produced a world of relentless visual consumption. Across continents, children, youth, and adults have become voracious consumers of symbolic goods—images, short videos, algorithmically curated narratives—streaming endlessly through small and large screens. Addiction is not incidental; it is structural.

Behind these screens operate hidden persuaders: content producers, platform designers, and advertising systems that shape attention, desire, and behavior with remarkable precision. Many consumers lack the capacity to distinguish the nourishing from the corrosive. This is not a personal failure, but a cultural one. Large segments of society have not been equipped, through reading, critical education, or reflective habits, to choose discerningly. Influence arrives faster than judgment.

This vulnerability matters profoundly for the governance of AGI. A society already overwhelmed by algorithmic persuasion is poorly prepared to oversee systems far more powerful than those shaping attention today. If humans struggle to govern the symbolic environment of social media, how will they govern entities capable of autonomous reasoning, strategic planning, and self-modification? The transition from ANI to AGI is not only a technical challenge; it is a cognitive and cultural one.

Here, the book’s emphasis on managing transitions rather than endpoints becomes decisive. Just as adolescents require guidance before independence, societies require intellectual resilience before delegating authority to intelligent systems. Without a population capable of critical judgment, global governance risks becoming formal but hollow—rules written faster than they can be understood, norms agreed upon by elites but disconnected from public awareness.

The book’s global orientation strengthens its argument. Drawing on an international assessment by The Millennium Project, an institution long associated with Jerome C. Glenn’s pioneering work in futures research and global foresight, it integrates perspectives from futurists, diplomats, scientists, philosophers, and legal scholars across nearly fifty countries. Glenn’s decades-long contribution to anticipatory governance and participatory futures thinking is evident in the book’s methodical, inclusive approach to managing emerging global risks.

The book avoids the comfort of guarantees. It does not promise control over superintelligence, nor does it suggest that governance will be easy. Instead, it argues that responsibility lies precisely in acting before certainty arrives. The transition from ANI to AGI is not merely a technical phase; it is a moral interval. Decisions made now—about licensing, safety standards, and international coordination—will echo long after human oversight weakens.

In the end, this book is less about machines than about human maturity. It asks whether humanity, confronted with an intelligence of its own making, can respond with foresight rather than arrogance, cooperation rather than reflexive rivalry. Governing the transition to AGI is not an act of fear. It is an act of care.

At the edge of a greater mind, the question is not whether humanity can remain dominant. It is whether it can remain responsible.