Harnessing AI Risk Proposal (v.3):
Transforming our Greatest Threat into Humanity's Triumph

Trustless Computing Association

January 15th, 2024


ABSTRACT

This proposal examines the immense risks and opportunities posed by AI. It then reviews current global governance proposals to manage them, observing their fragmentation, lack of detail and inclusivity, and overemphasis on the role of a handful of nations and firms.

It argues that an approach to the design and constituent process for such governance systems that reconciles expertise, timeliness and agility with participation, inclusivity and neutrality is best suited to promote both shared benefits and shared control, as well as broad compliance with the global safety requirements that will be needed. It then details a participatory constituent process for a new global intergovernmental organization for AI and digital communications.

To stimulate concrete and effective treaty negotiations among states, it explores and details a uniquely comprehensive design for a new open federal intergovernmental organization comprising three agencies: an AI Safety Agency to set and enforce AI safety regulations worldwide; a Global Public Benefit AI Lab Agency to jointly develop, control and benefit from the most capable safe AIs, according to the subsidiarity principle; and an IT Security Agency to develop and certify trustworthy and widely trusted governance-support systems. 

Finally, it explores how its success could produce a governance model extensible to other dangerous technologies and global challenges. 


Author: Rufo Guerreschi, President of the Trustless Computing Association (TCA)
Versions: A v.1 of this paper was published and introduced to the Community of Democracies at the UN on June 28th, 2023, while a v.2 was published as a pre-print on October 3rd, 2023. A v.4 is planned for March 2024.
Acknowledgments: We owe a debt of gratitude to the following people for their support, feedback and conversations that contributed to this text: Marco Landi (President at EuropIA Institut. Former President of Apple. TCA steering advisor), Paul Nemitz (Principal Adviser for Justice of the EU Commission. TCA advisor), Roman Yampolskiy (Associate Professor at the University of Louisville), Ansgar Koene (Global AI Ethics and Regulatory Leader at Ernst & Young. TCA advisor), Tjerk Timan (Principal Consultant at Technopolis. Former Trustworthy AI lead at TNO. TCA advisor), Jan Camenisch (IT security scientist. CTO at DFinity. Ph.D., ETH Zurich), Akash Wasil (Former Researcher at the Center for AI Safety and the Stanford Existential Risks Initiative. TCA advisor).

* CORRECTION: The paper contains an erroneous link to Anthropic CEO Amodei’s 7-minute video segment from a recent interview; here is the correct one.