At a Glance
We are honoured to invite esteemed representatives of state institutions, intergovernmental organizations (IGOs), artificial intelligence laboratories, and distinguished non-governmental organizations (NGOs), as well as experts affiliated with the Coalition for the Harnessing AI Risk Initiative. This initiative, spearheaded by the Trustless Computing Association (TCA), aims to assemble a globally diverse group of stakeholders from various sectors to decisively address the multifaceted challenges of AI through the democratic method.
Location:
Geneva, Switzerland
Event Date:
TBD, November 2024
Purpose of the Summit
Achieve preliminary agreement among a number of diverse states to design a timely, expert-led, multilateral and participatory treaty-making process for the creation of an open global treaty organization. This organization will collectively develop and exploit the safest and most advanced AI technologies, and reliably ban unsafe ones. We draw inspiration from the successful and democratic intergovernmental treaty-making process initiated by two U.S. states, advanced at the Annapolis Convention, and culminating in the ratification of the U.S. Constitution by nine of the thirteen states. Our summit aims to replicate this historical model on a global scale, focusing solely on AI.
Agree on the Scope and Rules for the Election of an Open Transnational Constituent Assembly for AI and Digital Communications. These guidelines should embody robust participation, inclusivity, expertise, and resilience principles. The objective is to facilitate the formation of an intergovernmental body poised to consistently and effectively promote the safety, welfare, and empowerment of all individuals for many generations to come.
Achieve preliminary agreement among states, AI labs, investors, funders and technical partners on their participation in a democratic, partly decentralized public-private Global Public Benefit AI Lab and ecosystem.
Agenda
Day 1 will feature a combination of 40-minute panel discussions and 5-10 minute "lightning talks" presented by leading experts and NGOs.
Day 2 will include a variety of deliberative working sessions, educational sessions (both one-way and interactive), and multilateral and bilateral meetings.
DAY 1
Each session will include one primary video-recorded track, and may feature up to two additional secondary tracks:
08.30 - 09.00: Welcome and Introduction: Hosted by the Trustless Computing Association in collaboration with local, national, and/or international authorities.
09.00 - 09.10: TBD Lightning Talk
09.10 - 09.45: AI Risks: Extreme and Unaccountable Concentration of Power and Wealth (democracy, inequality, civil rights, bias and minorities, unemployment, and loss of agency). Human Safety Risks (loss of control, misuse, accidents, war, dangerous science). The risks' comparative importance and timelines, shared mitigations, win-wins, and synergies.
10.00 - 10.10: TBD Lightning Talk
10.10 - 10.45: AI Opportunities: Abundance, Health, Safety, Peace, Happiness. Can future AI not only deliver remarkable practical benefits but also very significantly increase average human happiness and wellbeing?
11.00 - 11.10: TBD Lightning Talk
11.10 - 11.45: AI Scenarios 2030+: (a) Mostly Business as Usual; (b) Global Autocracy or Oligarchy; (c) Human Safety Catastrophes or Extinction; (d) AI Takeover: Bad and Good Cases; (e) Humanity's Federal Control of Advanced AI.
12.00 - 12.10: TBD Lightning Talk
12.10 - 12.45: Preliminary Designs: Federalism & Subsidiarity (global, national and citizen levels). Checks and Balances. Complexity, Urgency, Expertise, and Acceleration. Transparency, participation, trustlessness and decentralization. Political, technical and future-proof feasibility of bans on unsafe AI. Win-wins for oversight, public safety, civil liberties and democracy. Democracy & the monopoly of violence. Role of superpowers, firms and security agencies.
14.00 - 14.10: TBD Lightning Talk
14.10 - 14.45: Scope and Functions: An AI Safety Agency, to set and enforce AI safety regulations worldwide? A Global Public Interest AI Lab, to jointly develop, control and benefit from leading or co-leading capabilities in safe AI and digital communications/cloud infrastructure, according to the subsidiarity principle? An IT Security Agency, to develop and certify trustworthy and widely trusted "governance-support" systems for control, compliance and communications? Other?
15.00 - 15.10: TBD Lightning Talk
15.10 - 15.50: Constituent Process: Participation. Expertise. Inclusiveness. Weighted Voting. Global citizens' assemblies. A Global Collective Constitutional AI? Scope and Rules for the Election of an Open Transnational Constituent Assembly. Interaction with other constituent initiatives.
16.00 - 16.10: TBD Lightning Talk
16.10 - 16.50: Global Public Interest AI Lab: Viability. Decentralization vs. Safety. The subsidiarity principle. Initial funding: project finance, spin-in, or another model? Role of private firms. Business models. Safety accords with other leading state and private AI labs. The Superintelligence/AGI "option".
17.00 - 17.10: TBD Lightning Talk
17.10 - 17.50: Setting AI Standards: Technical, socio-technical, ethical and governance standards for the most advanced AIs. Agile, measurable and enforceable methods to assess whether AI systems, services and components are safe and compliant.
DAY 2
The second day of the Summit will entail:
To-be-determined closed-door and open workshops, working sessions, and self-organized meetings, in which states and other participants will work to build consensus on key documents detailing the constituent process and the preliminary design of the resulting IGO.
Several educational sessions on the technical and non-technical aspects of advanced AI safety, security, privacy, and governance, mainly geared towards state representatives and run by leading expert NGO participants.
Speaking Participants
Confirmed (subject to availability for our TBD November date)
Rufo Guerreschi, President of the Trustless Computing Association (TCA).
Ansgar Koene, Global AI Ethics and Regulatory Leader at Ernst & Young. TCA Advisor.
Robert Trager. Director, Oxford Martin AI Governance Initiative and International Governance Lead at the Centre for the Governance of AI.
Kenneth Cukier. Deputy Executive Editor of The Economist, and host of its weekly tech podcast.
Flynn Devine, researcher on participatory AI governance methods, including research with the Collective Intelligence Project and on 'The Recursive Public'. Co-Initiator of the Global Assembly for COP26.
Brando Benifei, Member of European Parliament and Co-Rapporteur of the European Parliament for the EU AI Act.
Mohamed Farahat, Member of the UN High-Level Advisory Body on Artificial Intelligence. TCA advisor.
Kay Firth-Butterfield, CEO of Good Tech Advisory. Former Head of AI and Member of the Executive Committee at the World Economic Forum.
Gordon LaForge. Senior Policy Analyst at New America Foundation. TCA Advisor.
Marco Landi, President of the EuropIA Institut. Former Group President and COO of Apple Computer in Cupertino. TCA steering advisor.
Robert Whitfield, Chair of the Transnational Working Group on AI at the World Federalist Movement. Chair of One World Trust.
Paul Nemitz. Principal Advisor at the European Commission. Senior Privacy and AI policy expert. TCA advisor.
Axel Voss. Member of European Parliament and member of the Committee on Civil Liberties, Justice and Home Affairs (LIBE), and the Committee on Artificial Intelligence in a Digital Age (AIDA).
Akash Wasil, AI Policy Researcher at Control AI. Former senior researcher at the Center on Long-Term Risk and the Center for AI Safety.
Muhammadou M.O. Kah. Professor and Ambassador Extraordinary & Plenipotentiary of The Gambia to Switzerland & Permanent Representative to UN Organisations at Geneva, WTO & Other International Organisations in Switzerland. TCA Advisor.
Jan Camenisch, Chief Technology Officer of Dfinity, a blockchain-based internet computer. PhD researcher with 130 papers and 140 filed patents.
Aicha Jeridi, Vice President of the North African School and Forum of Internet Governance. Member of the African Union Multi-Stakeholder Advisory Group on Internet Governance.
Beatrice Erkers. Chief Operating Officer at the Foresight Institute.
Allison Duettmann. Chief Executive Officer at the Foresight Institute.
Lisa Thiergart. Research Manager at Machine Intelligence Research Institute (MIRI). AI Alignment Researcher.
David Wood, President of the London Futurists association.
Chase Cunningham. Vice President of Security Market Research at G2. Former Chief Cryptologic Technician at the US National Security Agency. Pioneer of Zero Trust. TCA advisor.
Darren McKee. Senior Advisor at Artificial Intelligence Governance & Safety Canada (AIGS). Author of "Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World".
Sebastian Hallensleben, Head of AI at VDE, Co-Chair of the OECD Expert Group on AI (AIGO), Chair, Joint Technical Committee 21 "Artificial Intelligence" at CEN and CENELEC.
John Havens. Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
Philipp Amann. Group CISO at Austrian Post. Former Head of Strategy at the EUROPOL Cybercrime Centre.
Ayisha Piotti. Director of AI Policy at ETH Zurich Center for Law and Economics.
Jan Philipp Albrecht, President of the Heinrich Böll Foundation. Former Greens MEP. Former Minister of Digitization of the German state of Schleswig-Holstein. TCA steering advisor.
Alexander Kriebitz, Research Associate at the Institute for Ethics in Artificial Intelligence.
David Evan Harris, Chancellor's Public Scholar at UC Berkeley. Senior researcher at Centre for International Governance Innovation (CIGI), Brennan Center for Justice, International Computer Science Institute.
Richard Falk, Professor Emeritus of International Law at Princeton University. Renowned global democratization expert. Chairman of the Trustees of the Euro-Mediterranean Human Rights Monitor.
Peter Park, MIT AI Existential Safety Postdoctoral Fellow and Director of StakeOut.AI.
Pavel Laskov, Head of the Hilti Chair of Data and Application Security at the University of Liechtenstein.
Albert Efimov, Chair of Engineering Cybernetics at the Russian National University of Science and Technology. VP of Innovation and Research at Sberbank.
Joe Buccino, AI policy and geopolitics expert. US Defense Ret. Colonel. TCA Advisor.
Tjerk Timan, researcher in trustworthy and fair AI. TCA Advisor.
Roberto Savio, communications expert. Founder and Director of Inter Press Service. TCA advisor.
Organizations
NGOs
Confirmed (subject to availability for our TBD November date)
States and IGOs:
Confirmed:
The Mission of Gambia to the UN in Geneva
Engaged:
In March, we conducted meetings with the United Nations missions in Geneva of four states, including three heads of mission (ambassadors) and three experts in AI and digital domains. We are currently engaging with three additional missions. Collectively, these states, primarily from Africa and South America, represent a combined population of 120 million and a combined GDP of $1.4 trillion, and manage sovereign funds totaling $130 billion. In early April, we received a written expression of interest from the ambassador to the United Nations in Geneva representing one of the three largest regional intergovernmental organizations, which encompasses dozens of member states.
AI Labs
Engaged:
Since December, we have been in extended talks with three of the top five AI labs about their interest in participating in the Global Public Interest AI Lab.
Pre-Summit Virtual Conference (June 12th, 2024)
We are honoured to invite esteemed representatives from distinguished non-governmental organizations (NGOs) and experts affiliated with the Coalition for the Harnessing AI Risk Initiative.
Online Event Date:
June 12th, 2024
Pre-Summit Purpose
Consolidate and expand a Coalition for the Harnessing AI Risk Initiative, composed of geographically diverse and unbiased non-governmental organizations (NGOs), experts, influential figures, and former public officials. This coalition aims to enhance the initiative's momentum and credibility with states and regional intergovernmental organizations (IGOs).
Secure agreement on Version 4 of the Open Call for the Harnessing AI Risk Initiative and finalize other related documents.
Produce and disseminate testimonials, articles, publications, and videos to promote, explain, and advocate for the Initiative.
Pre-Summit Agenda
15.30 - Online Panel: AI Risks and opportunities: the prevailing science
16.00 - Online Panel: Treaty-making for technological risks: nuclear, bioweapons, encryption, climate
16.30 - Online Panel: Treaty-making for AI: the open intergovernmental constituent assembly model
17.00 - Online Panel: Mitigating the risks of competing AI coalitions, AIs and AI governance initiatives
17.30 - Online Panel: Foreseeing and navigating complex socio-technical future AI scenarios
18.00 - Online Panel: Open Call for the Harnessing AI Risk Initiative (v.4)
Pre-Summit Speakers
A globally diverse set of NGOs and experts in AI and global governance:
Ansgar Koene, Global AI Ethics and Regulatory Leader at Ernst & Young. TCA Advisor.
Jan Philipp Albrecht, President of the Heinrich Böll Foundation. Former Greens MEP. Former Minister of Digitization of the German state of Schleswig-Holstein. TCA steering advisor.
(moderator) David Wood, President of the London Futurists association. TCA Advisor.
Contacts
Logistics: info@trustlesscomputing.org
Participate, partner, donate: partnerships@trustlesscomputing.org