Time to face defeat, renew and regroup, and come back stronger.

Over the last twelve months, we worked tirelessly to get our highly ambitious Harnessing AI Risk Initiative and its 1st Harnessing AI Risk Summit in Geneva off the ground.

Relying on part-time volunteer work by our advisors and a single full-time volunteer staff member, myself, we succeeded in building a compelling and detailed case and in attracting leading NGOs, experts and policymakers to participate in our Summit.

Yet, while we attracted varying degrees of interest and engagement from 7 states and 3 of the top 5 US AI labs, we were unable to convince any of them, except one small state, to participate in the Summit or join the Initiative.

Hence, we have to admit defeat, and accordingly retreat and regroup. We are forced to postpone the Summit planned for June 12th-13th to a date still to be determined. After all, how can you have a summit without states?

We need to draft a new strategy and secure serious funding and a paid team before proceeding. Fittingly for the spring season, we'll look to renew our advisors and participants: fewer of them, but ones clearly able and willing to publicly advocate for and contribute to an initiative as geopolitically disruptive as ours.

We remain convinced that once we fully onboard a few states, the others will follow easily, as nearly all states today remain completely powerless, on their own, in the face of a technology clearly slated to upend the economy, sovereignty and the very future of humanity.

Recent developments make our initiative ever more unique, needed and urgent. International cooperation initiatives to create a sane and democratic global governance for AI's immense risks and benefits have unequivocally revealed themselves to be extremely weak, undemocratic, slow and largely co-opted by a handful of states, their security agencies and leading firms, with the US and China on top.

The current global governance model emerging for AI is largely a replica of the one that has managed the risks and opportunities of nuclear power since Hiroshima. After UN Security Council veto-holders failed in 1946 to agree on a middle ground between the Baruch and Gromyko plans to centralize nuclear power in a global multilateral organization, a loose coordination of their national security agencies stepped in to fill the void left by that political failure.

The IAEA was created only in 1957 to complement them, after all UN veto-holders had built solid nuclear weapons capabilities and both superpowers had tested weapons 1,500 times more destructive than the Hiroshima bomb. We have to be hugely grateful to those agencies that the worst nuclear risks have not materialized, yet the risks are higher today than they have ever been. Dangerous proliferation will likely be much harder to prevent for AI than it was for nuclear weapons, so a true multilateral approach will be even more necessary to avert AI's immense safety risks.

The current treaty-making model has followed the failing one of unstructured summits leading to largely useless unanimous declarations or weak statements of intent, like those for climate change.

We propose instead the adoption of the most successful and democratic treaty-making model in history: the intergovernmental constituent assembly model that led to the democratic creation of the federal U.S. Constitution in 1787.

In 1786, two US states convened three more at the Annapolis Convention, setting in motion a treaty-making process that led to the ratification of the US Constitution by nine and eventually all thirteen US states.

A few globally-diverse states and NGOs could jump-start a similar process - globally and only for AI - to design a constituent process, attract a critical mass of states, and convene an Open Transnational Constituent Assembly for AI and Digital Communications, mandated to create a new federal intergovernmental organization ("IGO") to build and share the most capable safe AGI or AGIs, and to reliably ban unsafe ones.

Such an IGO will need to include at least an international AI Safety Agency, an IT Security Agency, and a Global Public Benefit AI Lab and ecosystem. Such a Lab would be an open, partly-decentralized, public-private and democratically-governed joint venture aimed at achieving and sustaining solid global leadership or co-leadership in "safe AGI" capability, technical alignment research and AI safety measures.

We have little time. When a sort of AI 9/11 happens, we'll be in emergency mode, which will make the case for a well-thought-out, sane and democratic constituent process for AI much more difficult to advance.

While we regroup to come back stronger than before, we welcome expressions of interest in the opportunities to contribute that we offer, from states, NGOs, leading AI firms, donors, and well-suited prospective team members, advisors, participants and partners.

Rufo Guerreschi