Harnessing AI Risk Initiative


The convergence of a shocking acceleration of AI innovation and unregulated digital communications has brought us to what may be the most critical juncture in human history.

We can still turn the dawn of this new era into the greatest opportunity for humanity, but only if we come together globally like never before to govern its immense risks and opportunities.


The Harnessing AI Risk Initiative is an ongoing effort by the Trustless Computing Association and an emerging Coalition for the Harnessing AI Risk Initiative.

The Initiative is aggregating a critical mass of globally diverse states to jump-start and design an open, expert and participatory treaty-making process for the creation of a new global intergovernmental organization for AI and digital communications, one capable of reliably managing their immense risks to human safety and of excessive concentration of power and wealth, and of realizing their potential to usher in an era of unimagined prosperity, safety and well-being.

The Initiative is calling on a few, and then a critical mass, of globally diverse states to join summits in Geneva to agree on the Scope and Rules for the Election of an Open Transnational Constituent Assembly for AI and Digital Communications.

Given the inherently global nature of AI’s primary threats and opportunities, the mandate of such an Assembly will need to include the following:

  • Setting global AI safety, security and privacy standards

  • Enforcing global bans on unsafe AI development and use

  • Developing world-leading or co-leading safe AI capabilities via a public-private $15+ billion Global Public Benefit AI Lab and supply chain

  • Developing globally-trusted governance-support systems

The design of such an Assembly will aim to balance expertise, timeliness and agility, on the one hand, with participation, democratic process, neutrality and inclusivity, on the other, to maximize the chances that the resulting organization will be sufficiently trustworthy, and widely trusted, to:

  • Encourage broad compliance with future bans and oversight

  • Enhance safety through diversity and transparency in setting standards

  • Ensure a fair and safe distribution of power and wealth

  • Mitigate destructive inter-state competition and global military instability

Key milestones will be the 1st Harnessing AI Risk Summit this November in Geneva and a Pre-Summit Virtual Conference on June 12th.

A Better Treaty-Making Method

Unfortunately, the prevailing treaty-making method - based on unstructured summits and unanimous declarations - has proven for decades to be deeply undemocratic and ineffective, as the processes for climate change and nuclear weapons demonstrate.

The Initiative will therefore largely replicate, on a global basis and only for AI, what is arguably history’s most successful and democratic intergovernmental treaty-making model: the one that began with two US states convening the Annapolis Convention in 1786, continued with the approval of a federal constitution by simple majority at the US Constitutional Convention in 1787, and concluded with its ratification by 9 states in 1788 and by all 13 by 1790.

Even Sam Altman suggested, in March 2023, that we should have a global equivalent of the US Constitutional Convention for AI.

Voting weight in the Assembly will be apportioned primarily according to population size and GDP, also in consideration of the current huge asymmetry in AI capabilities and world power, and of the fact that 3 billion people remain unconnected and/or illiterate. The emphasis on GDP will be bindingly reduced within a few years, once the organization has ensured that nearly everyone is literate and connected. States and superpowers that join early will enjoy substantial but temporary economic and voting-power advantages.
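As a purely illustrative sketch - the blend between the two factors, the normalization and the phase-out schedule below are hypothetical assumptions, not a formula published by the Initiative - a state's voting weight could combine its share of the members' total population with its share of their total GDP, with the GDP component shrinking over time:

```python
# Hypothetical illustration only: the Initiative has not published a voting formula.
# A state's weight blends its population share and GDP share, with the GDP
# emphasis phased down over a few years as literacy and connectivity targets are met.

def voting_weight(pop, gdp, total_pop, total_gdp, years_elapsed, phase_out_years=5):
    """Return an illustrative voting weight in [0, 1]."""
    pop_share = pop / total_pop
    gdp_share = gdp / total_gdp
    # Assumed schedule: GDP emphasis starts at 50% and declines linearly to 0.
    gdp_emphasis = 0.5 * max(0.0, 1 - years_elapsed / phase_out_years)
    return (1 - gdp_emphasis) * pop_share + gdp_emphasis * gdp_share

# Example: a 120-million-person, $1.4-trillion-GDP member in an assembly
# covering 8 billion people and $100 trillion of GDP, at year 0 and year 5.
print(voting_weight(120e6, 1.4e12, 8e9, 100e12, years_elapsed=0))  # ~0.0145
print(voting_weight(120e6, 1.4e12, 8e9, 100e12, years_elapsed=5))  # 0.015
```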

Momentum and Roadmap

So far, we have onboarded 32 world-class experts as advisors to the Association and the Initiative, and over 39 world-class experts and policymakers, plus 13 NGOs, as participants in its upcoming Summit.

In March, we held meetings with the missions to the UN in Geneva of 4 states - including 3 heads of mission (ambassadors) and 3 missions' AI and digital domain experts - and we are engaging 3 more. Together, those states, from Africa and South America, have a population of 120 million, a GDP of $1.4 trillion, and sovereign wealth funds of $130 billion.

In early April, we received written interest from the Ambassador to the UN in Geneva of one of the 3 largest regional intergovernmental organizations, aggregating dozens of states. Since December, we have been in extended talks with 3 of the 5 leading AI labs about their interest in participating in the Global Public Benefit AI Lab.

On April 23rd, we launched the Coalition for the Harnessing AI Risk Initiative around a 400-word Open Call for the Harnessing AI Risk Initiative v.3, open to all individuals and organizations to join (so far, 22 individuals and 6 organizations have joined).

In May and June, we’ll be hosting bilateral and multilateral meetings with states, IGOs and AI labs in Geneva during the UN AI for Good (May 25-29th) and the UN ITU WSIS (June 10-13th), in advance of our Pre-Summit Virtual Conference on June 12th and our 1st Summit this November in Geneva. We are also attracting donors to supercharge the Initiative.

Strategic Positioning

The Initiative seeks to fill the wide gaps in global representation and democratic participation left by the global AI governance and infrastructure initiatives of leading states, IGOs and firms - including the US, China, the EU, the UN and OpenAI's public-private "trillion AI plan" - and to become the platform for their convergence.

The Initiative aims to become the key enabler of the UN Secretary-General's call for an “IAEA for AI.” It aims to build a treaty-making vehicle with the global legitimacy and representativeness that is needed but that his office, agencies and boards lack - in line with his clarification that "only member states can create it, not the Secretariat of the United Nations.” The Initiative will eventually constitute a Caucus within the UN General Assembly, and later seek approval by the UN General Assembly to become part of the UN system while retaining full governance autonomy.

In 1946, via their Baruch and Gromyko Plans, the US and the Soviet Union proposed a new independent UN agency to manage all nuclear weapons stockpiles and all nuclear weapons and energy research, but failed to agree. With AI, we now have a second chance: we can harness AI’s risks to turn it into an unimagined blessing for humanity and set a governance model for other dangerous technologies and global challenges.

Preliminary Designs and Scope of the New IGO

The Initiative is also advancing - at a unique level of detail and comprehensiveness, and with the support of dozens of advisors and experts - a proof-of-concept proposal for the scope, functions and character of such a new intergovernmental organization, one that matches the scale and nature of the challenge.

We group the required functions in three agencies of a single IGO, subject to a federal, neutral, participatory, democratic, resilient, transparent and decentralized governance structure with effective checks and balances:

  • (1) An AI Safety Agency will set global safety standards and enforce a ban on all development, training, deployment and research of dangerous AI worldwide to sufficiently mitigate the risk of loss of control or severe abuse by irresponsible or malicious state or non-state entities.

  • (2) A Global Public Benefit AI Lab will be a $15+ billion, open, partly decentralized, democratically governed joint venture of states and suitable tech firms aimed at achieving and sustaining solid global leadership or co-leadership in human-controllable AI capability, technical alignment research and AI safety measures.

  • (3) An IT Security Agency will develop and certify radically more trustworthy and widely-trusted AI governance-support systems, particularly for confidential and diplomatic communications, for control subsystems for frontier AIs and other critical societal infrastructure, such as social media.

Far from being a fixed blueprint, the proposal aims to fill a glaring gap in the availability of detailed and comprehensive proposals. It aims to stimulate the production of other similarly comprehensive proposals, so as to foster concrete, cogent, transparent, efficient and timely negotiations among nations in the lead-up to such an Assembly, and to arrive promptly at single-text-procedure negotiations based on majority and supermajority rule, rather than unanimity.

The Global Public Benefit AI Lab

  • The Lab will be an open, partly-decentralized, democratically-governed joint venture of states and tech firms aimed at achieving and sustaining solid global leadership or co-leadership in human-controllable AI capability, technical alignment research and AI safety measures.

  • The Lab will pool the capabilities and resources of participant states and firms, and distribute dividends and control to member states and directly to their citizens, all the while stimulating and safeguarding private initiative for innovation and oversight.

  • The Lab will cost at least USD 15 billion and will be primarily funded via project finance, buttressed by pre-licensing and pre-commercial procurement from participating states and firms.

  • The Lab will seek to achieve and sustain a solid “mutual dependency” in its wider supply chain vis-à-vis superpowers and future public-private consortia - through joint investments, diplomacy, trade relations and the strategic industrial assets of participant states - while remaining open to merging with them on equal terms, as detailed in our recent article in the digital policy journal The Yuan.

For more information on the Lab, refer to the Global Public Benefit AI Lab page.

Learning from History’s Greatest Treaty-Making Success

Five years after the U.S. Articles of Confederation were enacted in 1781, many U.S. states realized they were far from enough to safeguard both their economy and their security.

Hence, two of them convened three others at the Annapolis Convention in 1786, where they decided to design and convene a U.S. Constitutional Convention for 1787, to build a true federation.

There, state delegations agreed by simple majority on a U.S. Constitution that would come into force once 9 of the 13 states had ratified it. In hindsight, it was an astounding success, even though only about 1 in 8 adults had voting rights at the time.

A similar process, and for the same reasons, can and should be replicated at the global level for AI - a history-defining technology with immense implications for the economy, safety, security and human nature.

Once we succeed in gathering 7 or more globally diverse states, it will become relatively easy to attract dozens more and hold a successful “global Annapolis Convention for AI”.

Special Terms for the US and China

The above participation terms differ for the US and China, as global and AI superpowers. Either is welcome to join at any stage, yet its participation will be held in suspension until the other also joins. Whichever of the two joins first will temporarily enjoy a 30% bonus in economic and voting power, reduced progressively to 0% over 5 years.
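As a purely illustrative reading of that schedule - the linear phase-out below is an assumption, since the exact decay curve is not specified - the early-joiner bonus could evolve as follows:

```python
# Hypothetical illustration: a linear phase-out of the 30% early-joiner bonus
# over 5 years; the Initiative does not specify the actual decay curve.

def early_joiner_bonus(years_since_joining: float) -> float:
    """Return the remaining bonus fraction (0.30 at joining, 0.0 after 5 years)."""
    return 0.30 * max(0.0, 1 - years_since_joining / 5)

for year in range(6):
    print(year, f"{early_joiner_bonus(year):.0%}")  # 30%, 24%, 18%, 12%, 6%, 0%
```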

Opportunities

Find below detailed opportunities to join, support or partner with the Harnessing AI Risk Initiative and/or its 1st Harnessing AI Risk Summit, to be held in November 2024 in Geneva.

About Us

The Trustless Computing Association is a Geneva-based non-profit with a mission to promote safe, secure and democratic IT and AI by fostering the creation of new intergovernmental organizations, socio-technical security paradigms and technologies. It does so via institution-building initiatives, supported by research, publications, the TRUSTLESS.AI spin-in (closed in September 2023), and 11 editions of the Free and Safe in Cyberspace conference series, held on 3 continents.

Until March 2023, its activities were centered on the Trustless Computing Certification Body (TCCB) and the Seevik Net Initiative. Since then, our focus has shifted to the Harnessing AI Risk Initiative - aimed at the creation of a new IGO with three agencies to manage AI, including the TCCB - and its Summit series. The Association is supported by 32 world-class advisors and over 25 partners. See the About Us and Team and Advisors pages for more.

Full Information on the Initiative in a single PDF

A 63-page Executive Summary of the Harnessing AI Risk Initiative and Summit (PDF, live-updated). It compiles the web pages of the Harnessing AI Risk Initiative, its 1st Summit and Pre-Summit, and the Opportunities pages for states, IGOs, donors, NGOs, AI labs and investors in the Lab. It also includes a 6-page chapter on the Global Public Benefit AI Lab.

Other Key Publications, Articles and Posts