At a Glance
Who: Representatives of a critical mass of globally-diverse states, IGOs and leading AI labs, as well as leading NGOs and experts that are members of the Coalition for the Harnessing AI Risk Initiative, led by the Trustless Computing Association, which is convening a globally-diverse set of states, IGOs and leading AI firms.
When & Where: The 1st Harnessing AI Risk Summit will be held in Geneva in November 2024 (date TBD), preceded by a Pre-Summit Virtual Conference on June 12th, 2024.
Aims of the Summit:
Achieve preliminary agreement among a critical mass of globally-diverse states on the design of a timely, expert, multilateral and participatory treaty-making process for the creation of a new open global treaty organization to jointly build and share the most capable safe AIs, and reliably ban unsafe ones. The Summits will largely replicate, globally and for AI only, history’s most successful and democratic intergovernmental treaty-making process: the one started by two US states convening the Annapolis Convention, and concluded with the US Constitution once 9 of the 13 states had ratified it.
More specifically, agree on the Scope and Rules for the Election of an Open Transnational Constituent Assembly for AI and Digital Communications that are sufficiently participatory, resilient, inclusive and expert to be expected to lead to an intergovernmental organization that will reliably and sustainably foster the safety, wellbeing and empowerment of all, for many generations to come.
Achieve preliminary agreement among states, AI labs, investors, funders and technical partners on their participation in a democratic, partly-decentralized public-private Global Public Benefit AI Lab and ecosystem.
Aims of the Pre-Summit Virtual Conference (June 12th, 2024):
Consolidate and expand a Coalition for the Harnessing AI Risk Initiative, made up of geographically-balanced or neutral NGOs, experts, personalities and former public officials, in order to build the momentum and credibility of the Initiative vis-a-vis states and regional IGOs.
Agree on a new version of the Open Call for the Harnessing AI Risk Initiative (v.4), and on other documents of the Initiative.
Produce and disseminate such calls, testimonials, articles, publications and videos to promote, explain and advocate for the Initiative.
Participants - Pre-Summit
Globally-diverse or neutral NGOs, experts and former officials and diplomats. (We expect that many of the 39 individuals and 13 organizations confirmed as speaking participants for the Summit's original date of June 12th will participate in the Pre-Summit.)
Participants - Summit
States' representatives from missions to the UN, foreign ministries or security agencies. (We have been engaging with 7 states' missions in Geneva and one IGO.)
Leading AI labs. (We have received initial interest from 4 of the top 5 AI labs.)
Globally-diverse or neutral NGOs, experts and former officials and diplomats. (We expect that many of the 39 individuals and 13 organizations confirmed as speaking participants for the Summit's original date of June 12th will participate in the Summit.)
Agenda - Summit:
Day 1 will mix 40-minute panels and 5-10 minute “lightning talks” by top experts and NGOs. Day 2 will host a wide mix of deliberative working sessions, one-way and two-way educational sessions, and multilateral and bilateral meetings. See the detailed agenda below.
Agenda - Pre-Summit:
15.30 - Online Panel:
AI Risks and opportunities: the prevailing science.
16.00 - Online Panel:
Treaty-making for technological risks: nuclear, bioweapons, encryption, climate
16.30 - Online Panel:
Treaty-making for AI: the open intergovernmental constituent assembly model
17.00 - Online Panel:
Mitigating the risks of competing AI coalitions, AIs and AI governance initiatives.
17.30 - Online Panel:
Foreseeing and navigating complex socio-technical future AI scenarios
18.00 - Online Panel:
Open Call for the Harnessing AI Risk Initiative (v.4)
Our Greatest Risk and Opportunity
The alarm has sounded for the immense risks posed by AI, along with its great opportunities.
Since hundreds of AI scientists, including two of the top three, stated last May that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war", awareness of AI safety risk has been skyrocketing.
Twenty-eight states, representing 80% of the world population, acknowledged such safety risks, including "loss of control", in the Bletchley Declaration. Over 55% of citizens surveyed in 12 developed countries were "fairly" or "very" worried about "loss of control over AI". At an invitation-only CEO Summit at Yale last June, 42% of CEOs surveyed said they believed AI has the potential to "destroy humanity within the next five to 10 years."
The risk of AI leading to an extreme and unaccountable concentration of power and wealth - including via misinformation, surveillance, manipulation, oligopolies and biases - is just as important and urgent, and awareness of it appears just as widespread among states and citizens.
Frontier AI capabilities are expected to keep expanding five- to ten-fold annually. And that’s based on growth in investments and computing power alone, without accounting for AI's increasing ability to self-improve and to multiply the productivity of its developers. A breakneck AI arms race among nations and firms is unfolding.
Meanwhile, seven years after the Cambridge Analytica scandal and ten after the Snowden revelations, social media and sensitive communications are ever more vulnerable to abuse and control by unaccountable entities, stifling fair and effective dialogue, within and among nations, at a time when it is most needed.
Investments in AI and AI infrastructure are exploding. If successful, OpenAI’s public-private $7 trillion AI plan to aggregate states, funders, chip makers and power providers will either (a) create an entrenched dominant global oligopoly under US control, or else (b) possibly become the seed of a safe and democratic global governance of AI that Altman has been consistently calling for - as we argue in this article in The Yuan.
If we manage to avert catastrophic risks for safety and concentration of power, by creating proper global AI governance institutions, the benefits of human-controllable and humanity-controlled AI will be astounding in terms of abundance, peace, safety and wellbeing.
The potential “AI pie”, if we avoid the immense risks, is so enormous that rich states and people can get richer while the poor can be much better off. But success inevitably requires a fair distribution of the power in shaping our collective future in this Digital and AI Age.
As in 1946, when the US and Russia, with their Baruch and Gromyko Plans, proposed a new independent UN agency to manage all nuclear weapons stockpiles and all nuclear weapons and energy research - but failed to agree - we now have a second chance with AI. We can harness the risk of AI to turn it into an unimagined blessing for humanity, and set a governance model for other dangerous technologies and global challenges.
The Need for Better Treaty-Making for AI
The agendas of states' tech diplomats are jammed with summits this year for the global governance of AI, as part of initiatives by states or IGOs. These include the 2nd and 3rd AI Safety Summits in Seoul and Paris, the UN Summit of the Future with its Global Digital Compact, the Council of Europe treaty on AI, and AI governance initiatives by the G7 and G20.
Other key multilateral meetings will likely be held behind the scenes around the Guidelines for Secure AI System Development, led by the US and UK national security agencies, and around OpenAI’s proposed “$7 trillion AI public-private consortium”.
Yet all of these severely lack representativeness, inclusion, participation and timeliness.
Leading digital and AI superpowers appear locked in a reckless arms race - economic, military and geopolitical - over AI and AI chips, seemingly intent on hegemonizing control, or at best eventually splitting it.
Meanwhile, most other nations individually lack the political strength and strategic autonomy to table a more democratic constituent process to safeguard their economies, sovereignty and safety in such all-important domains.
Existing intergovernmental organizations - such as the G7, G20, EU, UN, Council of Europe, OECD, GPAI and G77 - are structurally unable to lead a democratic global constituent process for AI governance, due to their lack of a mandate, lack of representativeness, closed membership and/or statutory over-reliance on unanimity in decision-making. Hence their initiatives severely lack multilateralism, detail, timeliness, breadth, transparency and global inclusivity, and most are controlled by a handful of states.
The prevailing treaty-making methods being utilized are bound to result in severely weak, fragile and undemocratic treaties - as they largely did in past decades - due to their reliance on loose, undefined, unstructured processes, over-reliant on unanimity, which have enabled a handful of states, or even a single state, to greatly and unduly influence, distort, water down or stop the process.
A Better Method for AI Treaty-Making
Hence, there is a historic opportunity for a small number of states and NGOs to lead the way by utilising - globally and for AI only - history’s most successful and democratic intergovernmental treaty-making process: the intergovernmental constituent assembly method that led from the initiative of two US states to the ratification of the US Constitution by all 13 in 1787 (as argued in this blog post).
The Summit aims to be a first key step in enabling a critical mass of globally-diverse states to design and jump-start an open and democratic global constituent process for AI, as sketched in the Harnessing AI Risk Initiative - starting even from a single small state, as Trinidad and Tobago did in the 1990s with the Coalition for the International Criminal Court.
Agenda - Summit
DAY 1
Each session will entail one main video-recorded track, and possibly two secondary ones:
08.30 - 09.00: Welcome and Introduction: by Trustless Computing Association & local, national and/or international authorities.
09.00-09.10. TBD Lightning Talk
09.10 - 09.45:
AI Risks: Extreme and Unaccountable Concentration of Power and Wealth (democracy, inequality, civil rights, biases and minorities, unemployment and loss of agency). Human Safety Risks (loss of control, misuse, accidents, war, dangerous science). The risks’ comparative importance and timelines, shared mitigations, win-wins and synergies.
10.00-10.10
TBD Lightning Talk
10.10 - 10.45:
AI Opportunities: Abundance, Health, Safety, Peace, Happiness. Can future AI not only bring amazing practical benefits, but also very significantly increase the happiness and wellbeing of the average human?
11.00-11.10
TBD Lightning Talk
11.10 - 11.45:
AI Scenarios 2030+: (a) Mostly Business as Usual; (b) Global autocracy or oligarchy; (c) Human Safety Catastrophes or Extinction; (d) AI Takeover: Bad and Good Cases; (e) Humanity's Federal Control of Advanced AI.
12.00-12.10
TBD Lightning Talk
12.10 - 12.45:
Preliminary Designs: Federalism & Subsidiarity (global, nation and citizen levels). Checks and Balances. Complexity, Urgency, Expertise, and Acceleration. Transparency, participation, trustlessness and decentralization. Political, technical and future-proof feasibility of bans of unsafe AI. Win-wins for oversight, public safety, civil liberties and democracy. Democracy & monopoly of violence. Role of superpowers, firms and security agencies.
14.00-14.10
TBD Lightning Talk
14.10 - 14.45
Scope and Functions: An AI Safety Agency to set and enforce AI safety regulations worldwide? A Global Public Interest AI Lab, to jointly develop, control and benefit from leading or co-leading capabilities in safe AI, and in digital communications/cloud infrastructure, according to the subsidiarity principle? An IT Security Agency, to develop and certify trustworthy and widely trusted “governance-support” systems for control, compliance and communications? Other?
15.00-15.10
TBD Lightning Talk
15.10 - 15.50
Constituent Process: Participation. Expertise. Inclusiveness. Weighted Voting. Global citizens’ assemblies. A Global Collective Constitutional AI? Scope and Rules for the Election of an Open Transnational Constituent Assembly. Interaction with other constituent initiatives.
16.00-16.10
TBD Lightning Talk
16.10 - 16.50
Global Public Interest AI Lab: Viability. Decentralization vs Safety. Subsidiarity principle. Initial funding: project finance, spin-in or other model? Role of private firms. Business models. Safety accords with other leading state private AI labs. The Superintelligence/AGI “option”.
17.00-17.10
TBD Lightning Talk
17.10 - 17.50
Setting AI Standards: Technical, socio-technical, ethical and governance standards for the most advanced AIs. Agile, measurable and enforceable methods to assess AI systems, services and components that are safe and compliant.
DAY 2
The second day of the Summit will entail:
To-be-determined closed-door and open workshops, working sessions and self-organized meetings, in which states and other participants will work to foster consensus on key documents detailing the constituent process and preliminary designs of the resulting IGO.
Several educational sessions on the technical and non-technical aspects of advanced AI safety, security, privacy and governance, mainly geared toward state representatives and run by leading expert NGO participants.
Summit Speaking Participants
Individuals
Confirmed:
To-be-confirmed: (The following were confirmed for June 12th. They will be asked to confirm for the new November 2024 date once it is set.)
Rufo Guerreschi, President of the Trustless Computing Association (TCA).
Ansgar Koen, Global AI Ethics and Regulatory Leader at Ernst & Young. TCA Advisor.
Robert Trager. Director, Oxford Martin AI Governance Initiative and International Governance Lead at the Centre for the Governance of AI.
Kenneth Cukier. Deputy Executive Editor of The Economist, and host of its weekly tech podcast.
Flynn Devine, researcher on participatory AI governance methods, including research with the Collective Intelligence Project and on 'The Recursive Public'. Co-Initiator of the Global Assembly for COP26.
Brando Benifei, Member of the European Parliament and Co-Rapporteur of the European Parliament for the EU AI Act.
Mohamed Farahat, member of UN High-Level Advisory Board on Artificial Intelligence. TCA advisor.
Kay Firth-Butterfield, CEO of Good Tech Advisory. Former Head of AI and Member of the Exec Comm at World Economic Forum.
Gordon Laforge. Senior Policy Analyst at New America Foundation. TCA Advisor.
Marco Landi, President of the EuropIA Institut. Former Group President and COO of APPLE Computers in Cupertino. TCA steering advisor.
Robert Whitfield, Chair of the Transnational Working Group on AI at the World Federalist Movement. Chair of One World Trust.
Paul Nemitz. Principal Advisor at the European Commission. Senior Privacy and AI policy expert. TCA advisor.
Axel Voss. Member of European Parliament and member of the Committee on Civil Liberties, Justice and Home Affairs (LIBE), and the Committee on Artificial Intelligence in a Digital Age (AIDA).
Akash Wasil, AI Policy Researcher at Control AI. Former senior researcher at the Center on Long-Term Risk and the Center for AI Safety.
Muhammadou M.O. Kah. Professor and Ambassador Extraordinary & Plenipotentiary of The Gambia to Switzerland & Permanent Representative to UN Organisations at Geneva, WTO & Other International Organisations in Switzerland. TCA Advisor.
Jan Camenisch, Chief Technology Officer of Dfinity, a blockchain-based internet computer. PhD researcher with 130 papers and 140 filed patents.
Aicha Jeridi, Vice President of the North African School and Forum of Internet Governance. Member of the African Union Multi-Stakeholder Advisory Group on Internet Governance.
Beatrice Erkers. Chief Operating Officer at the Foresight Institute.
Allison Duettmann. Chief Executive Officer at the Foresight Institute.
Lisa Thiergart. Research Manager at Machine Intelligence Research Institute (MIRI). AI Alignment Researcher.
David Wood, President of the London Futurists association.
Chase Cunningham. Vice President of Security Market Research at G2. Former Chief Cryptologic Technician at the US National Security Agency. Pioneer of Zero Trust. TCA advisor.
Darren McKee. Senior Advisor at Artificial Intelligence Governance & Safety Canada (AIGS). Author of “Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World”
Sebastian Hallensleben, Head of AI at VDE, Co-Chair of the OECD Expert Group on AI (AIGO), Chair, Joint Technical Committee 21 "Artificial Intelligence" at CEN and CENELEC.
John Havens. Exec. Dir. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
Philipp Amann. Group CISO at Austrian Post. Former Head of Strategy EUROPOL Cybercrime Centre.
Ayisha Piotti. Director of AI Policy at ETH Zurich Center for Law and Economics.
Jan Philipp Albrecht, President of the Heinrich Böll Foundation. Former Greens MEP. Former Minister of Digitization of the German state of Schleswig-Holstein. TCA steering advisor.
Alexander Kriebitz, Research Associate at the Institute for Ethics in Artificial Intelligence
David Evan Harris, Chancellor's Public Scholar at UC Berkeley. Senior researcher at Centre for International Governance Innovation (CIGI), Brennan Center for Justice, International Computer Science Institute.
Richard Falk, professor emeritus of international law at Princeton University. Renowned global democratization expert. Chairman of the Trustees of the Euro-Mediterranean Human Rights Monitor.
Peter Park, MIT AI Existential Safety Postdoctoral Fellow and Director of StakeOut.AI
Pavel Laskov, Head of the Hilti Chair of Data and Application Security University of Liechtenstein
Albert Efimov, Chair of Engineering Cybernetics at the Russian National University of Science and Technology. VP of Innovation and Research at Sberbank.
Joe Buccino, AI policy and geopolitics expert. US Defense Ret. Colonel. TCA Advisor.
Tjerk Timan, trustworthy and fair AI Researcher. TCA Advisor.
Roberto Savio, communications expert. Founder and Director of Inter Press Service. TCA advisor.
Organizations
NGOs
Confirmed:
To-be-confirmed: (The following were confirmed for June 12th. They will be asked to confirm for the new November 2024 date once it is set.)
States:
Confirmed: none
Engaged: Last March, we held meetings in Geneva with the missions of four states to the UN, including two heads of mission (ambassadors) and three missions' AI and digital domain experts.
AI Labs
Confirmed: none
Engaged: We have been in extended talks with mid-level executives of three of the top five US AI labs since November. We began reaching out to leading non-US AI labs and NGOs in March.
Pre-Summit Speaking Participants
Individuals
Confirmed:
Ansgar Koen, Global AI Ethics and Regulatory Leader at Ernst & Young. TCA Advisor.
Jan Philipp Albrecht, President of the Heinrich Böll Foundation. Former Greens MEP. Former Minister of Digitization of the German state of Schleswig-Holstein. TCA steering advisor.
(moderator) David Wood, President of the London Futurists association.
Organizations
Confirmed:
Building Trust in Global Cooperation
The Trust Imperative. Given the urgency of AI proliferation and safety risks, it is perhaps fitting to sacrifice inclusiveness for efficiency in the immediate term, as the US/UK initiative seems to imply. Yet, ultimate success in globally enforcing the inevitable pervasive AI controls - without severely fostering military confrontations or expanding global injustices - will be impossible, we believe, unless a large majority of nations and citizens trust such initiatives to be genuine rather than instrumental to a further concentration of power and wealth in a handful of nations and companies.
The Risks of Global Governance. While the aim of adequately designed democratic global governance organizations is to decentralize the current and accelerating de-facto concentrations of power, there are concerns that these could excessively centralize power, be captured by some entity, or otherwise degenerate.
Adequate Constituent Processes and “Trustlessness”. Key to countering those concerns is, we believe, to enact constituent processes that are highly participatory, effective, expert, neutral and resilient, and then to enact federal statutes with highly effective safeguards against those risks. To enable these, we need control and compliance sub-systems for advanced AIs and human digital communications that are much more trustworthy, and widely trusted, in their safety, security, privacy and democratic accountability, through uncompromising trustless approaches to their design, governance and certification, such as those applied to proper democratic election processes. These are needed to:
(1) sufficiently reduce the unaccountable power of state and non-state actors to sway public opinion, political leaders and diplomats worldwide via their control or hacking of digital communications; and
(2) ensure indisputable mechanisms for the assessment of breaches of new global AI regulations, the lack of which contributed to the failure of many nuclear treaties.
Contacts
Logistics: info@trustlesscomputing.org
Participate, partner, donate: rufo@trustlesscomputing.org