Harnessing AI Risk: Transforming our Greatest Threat into Humanity's Triumph. (v.2)


Abstract*

Despite increasing demands from AI scientists and industry leaders for an all-encompassing global governance structure for AI, the proposals and constituent processes currently put forth by nation-states lack specificity, inclusivity, and participation, and foresee an outsized role for top AI firms.

In a reversal of roles, an AI industry leader, Sam Altman, CEO of OpenAI, has called for a global democratic constitutional convention for the global governance of AI akin to the U.S. Constitutional Convention of 1787, while Google DeepMind and prominent AI researchers have outlined a highly detailed feasibility analysis for the creation of four new intergovernmental organizations for AI.

This paper argues that, given what is at stake, the global governance of AI cannot be designed solely by leading AI superpowers and firms, but needs to be built via constituent processes that combine the maximization of competence and agility with that of global representation, participation, inclusivity, and multilateralism.

Furthermore, it argues that such an approach is crucial to ensure that the resulting institutions will be sufficiently trustworthy, and widely trusted, to encourage broad adoption and compliance; enhance safety through global diversity and transparency; achieve a fair distribution of power and wealth; and effectively mitigate the risks of global military instability.

In an annex, it describes in fine detail a comprehensive architecture of three IGOs to manage AI risks and opportunities for the global public good in their entirety, aiming to catalyze an efficient, concrete and timely deliberative discussion:

(1) A Global AI Lab IGO, aimed at achieving worldwide leadership or co-leadership in human-controllable AI, alignment research and AI safety. It pools the capabilities of member states, and distributes dividends and control to member states and directly to their citizens.

(2) An AI Safety IGO, tasked with enforcing a global prohibition on hazardous AI development outside this lab. This organization would coordinate with intelligence agencies to prevent the misuse of AI.

(3) An IT Security IGO, responsible for developing and certifying radically more secure and trusted IT systems, particularly for control subsystems for frontier AIs and other critical societal infrastructure such as social media, as well as for confidential and diplomatic communications.

*(includes slight grammar corrections, as opposed to the published paper original)


(This paper was published as a citable PDF on ResearchGate, and also as a LinkedIn article for comments and shares. A v.1 of this proposal was introduced on June 28th, 2023, at the UN to the Community of Democracies and its 40 member states.)


Rufo Guerreschi