Submittal to Future of Life Institute's "Call for proposed designs for global institutions governing AI"

The following 850-word text is a copy of the salient section of our Harnessing AI Risk Initiative submittal to the “Call for proposed designs for global institutions governing AI” issued by the Future of Life Institute.

Mechanism

While most realize that strong global coordination is key to managing the immense risks and opportunities of AI, nearly all conclude it is very unlikely to happen due to a lack of political will. Most heads of state perhaps understand the immense win-wins of exiting the global "semi-anarchic default state", as Nick Bostrom describes it, but simply do not see a credible, time-tested, actionable path to it.

Such skepticism is well justified, considering the very poor track record of treaties and treaty-making methods in recent decades in dealing with global challenges like nuclear weapons and climate change.

But if we look back further, an alternative may provide hope and guidance. In 1786, two US states convened three more at the Annapolis Convention, setting out a treaty-making process that led to the ratification of the US Constitution by nine of the thirteen US states.

A few globally diverse states and NGOs could design and jump-start a similar process - global in scope and limited to AI - to shape a constituent process, attract a critical mass of states, and jointly convene an Open Transnational Constituent Assembly for AI and Digital Communications, mandated to create a new federal intergovernmental organization ("IGO") to build and share the most capable safe AGI or AGIs, and to reliably ban unsafe ones.

The IGO will build and maintain a Global Public Benefit AI Lab ("Lab") and ecosystem: an open, partly decentralized, public-private and democratically governed joint venture aimed at achieving and sustaining solid global leadership or co-leadership in "safe AGI" capability, technical alignment research and AI safety measures.

Inspired also by the Baruch Plan, the Coalition for the International Criminal Court, the International Thermonuclear Experimental Reactor and Airbus, such an IGO will accrue only those powers that cannot reliably be left to states, communities or individual citizens.

Risks

By ensuring that the rules for such an Assembly are carefully pondered, democratic, expert, timely and participatory - and by using battle-tested methods and technologies to foster transparency, participation, and both actual and widely perceived trustworthiness - the resulting IGO may constitute a beneficial, federal, democratic and resilient singleton.

As such, it will likely constrain AI research and advances to a cautious timescale and maximize the preservation of a secure world, through constitutional checks and balances, novel and battle-tested socio-technical mechanisms, and other safeguards, such as prescribed periodic constitutional conventions.

One of the first tasks of the IGO will be to define, and regularly update, a measurable, if necessarily somewhat arbitrary, definition of "safe AGI". This definition will set the levels of catastrophic risk that are acceptable, since proving perfect safety is likely impossible, or would severely curtail AGI's future benefits to humanity. The IGO may also decide to keep exclusive direct or federated control over one or a few of the most capable "root safe AGIs". It will transparently build and update socio-technical, "governance support" and oversight systems to ensure that the acceptable levels of risk are not exceeded.

To mitigate the risk of conflicts with other superpowers, state AI alliances or AGIs, the Assembly - and the resulting IGO - will be designed to remain open for all states to join on equal terms, and open to convergence with them. Both will remain statutorily ready to compromise by giving substantially more voting weight to AI superpowers, which are invited to join concurrently.

To the same end, the initiative will position itself to fill the wide gaps in global representation and democratic participation left by the global AI governance and infrastructure initiatives of leading states, IGOs and firms - including those of the US, China, the EU, the UN and OpenAI's public-private "trillion AI plan" - and to become the platform for their convergence.

Benefits

The benefits of safe AGI or AGIs will flow from the decisions of the IGO, and from those of each state, community and citizen, within the constraints set by the current global definition of "safe AGI". While the risks cannot be completely eliminated, the potential benefits of AGI for humanity, under the proposed global governance, could be even greater than commonly described, resulting in dramatic increases in the happiness of humans and other sentient beings.

The Lab will accrue the capabilities and assets of member states and firms, and distribute dividends and control to member states and directly to their citizens, all the while stimulating and safeguarding private initiative for innovation and oversight.

The Lab will be primarily funded via project finance, buttressed by pre-licensing and pre-commercial procurement from participating states and firms. 

To balance supply chain security with incentives for cooperation vis-a-vis AI superpowers and other future AI consortia, the Lab will not seek full self-sufficiency across its wider AI supply chain. Instead, it will seek to achieve and sustain a resilient and balanced level of “mutual dependency” with those other powers through joint investments, diplomacy, trade relations, and the rare or unique industrial assets of member states.

Beyond the material and digital riches that AI may bring us, its greatest potential is by far that of radically increasing the average happiness of nearly all humans and the quality of their interpersonal relations.

The described initiative is ongoing, led by the Trustless Computing Association, headed by the author. For more information, refer to the web pages of the Harnessing AI Risk Initiative, its 1st Harnessing AI Risk Summit, slated for June in Geneva, and the document section at the bottom of the Initiative's page.

Rufo Guerreschi