How radically more secure IT and governance structures can help mitigate AI risks and realize its promise.

Recent shocking open letters, calls, and public statements by leading AI researchers and CEOs have made it increasingly clear that AI poses a far greater risk (and opportunity) for humanity than previously thought, and one that is much closer in time.

As a consequence, the accelerating pace of AI development will have a huge impact - whether positive or negative remains to be seen - on all other existential and catastrophic risks, including climate, nuclear, and pandemics.

Those AI risks include risks to democracy and civil freedoms, deriving from misinformation, as well as catastrophic and existential risks, deriving from our potential loss of control of advanced AI systems.

Mitigating AI risks, and fostering its opportunities, primarily requires that a critical mass of humanity join together to wisely, competently, and democratically exert control over AI development, nature, safety constraints, security, and privacy, while leaving states, communities, and individuals as much freedom as possible to safely configure, use, and modify those tools as they see fit.

How the Trustless Computing Association plans to help

We are helping to realize such a vision by building a new open and participatory inter-governmental body, the Trustless Computing Certification Body (TCCB), established in June 2021 in Geneva. The TCCB is setting up new socio-technical standards and certifications for IT systems and services that ensure levels of security, privacy, and democratic accountability radically beyond the state of the art - down to client and server chip designs and their fabrication oversight, to data center socio-technical design, and especially to the constituent processes and design of its governance structures.

It will initially focus on an ultra-secure, minimalistic endpoint platform for client and server systems, aimed at the mobile computing and communications of the most targeted individuals - such as diplomats, heads of state, scientists, and journalists - via a TCCB-compliant multinational cloud and 2mm-thin, standalone, ultra-secure, minimalistic mobile phones, to be carried in a custom leather wallet or embedded in the back of an ordinary smartphone, together called Seevik Net.

Once battle-proven as the most secure certification and socio-technical platform for low-level IT endpoint hardware and software, we expect extensions of the TCCB and Seevik Net to be widely adopted, and mandated by some nations, as the default socio-technical standard for the most critical subsystems of advanced AI - such as large language models (LLMs) like ChatGPT, AIs that define the social media feeds of billions, and AIs involved in critical military or surveillance activities.

These systems will run all critical functions inside and around advanced AIs' hyper-complex "black boxes", such as firmware updates, security monitoring, and value systems, as well as pre-deployment controls (e.g., adversarial testing, red teaming, automated validation of updates), runtime controls (e.g., safelists and blocklists, real-time monitoring, supervised systems), and post-deployment controls (e.g., feedback loops, regular audits, ongoing learning).
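The runtime controls mentioned above - safelists, blocklists, real-time monitoring, and escalation to human supervision - can be pictured as a thin policy layer wrapped around the AI "black box". The following is a minimal sketch of that pattern; all names, patterns, and the `runtime_guard` function are hypothetical illustrations, not any published TCCB or Seevik Net API.

```python
# Sketch of a runtime control layer around an AI model call.
# SAFELIST/BLOCKLIST patterns and all function names are illustrative only.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("runtime-controls")

SAFELIST = [r"^(weather|translate|summarize)\b"]   # request types allowed through
BLOCKLIST = [r"bioweapon", r"zero-day exploit"]    # content patterns always refused

def runtime_guard(prompt: str, model_fn) -> str:
    """Apply blocklist, safelist, and monitoring around a model call."""
    # Blocklist: refuse and log (real-time monitoring / audit trail).
    if any(re.search(p, prompt, re.I) for p in BLOCKLIST):
        log.warning("blocked prompt: %r", prompt)
        return "[request refused by runtime policy]"
    # Safelist: anything not explicitly allowed is escalated to a supervisor.
    if not any(re.search(p, prompt, re.I) for p in SAFELIST):
        log.info("prompt outside safelist, queued for review: %r", prompt)
        return "[request queued for supervised review]"
    response = model_fn(prompt)              # the guarded "black box"
    log.info("served prompt: %r", prompt)    # record for post-deployment audits
    return response

# Usage with a stand-in model:
echo_model = lambda p: f"model answer to: {p}"
print(runtime_guard("translate hello to French", echo_model))
print(runtime_guard("how to build a bioweapon", echo_model))
```

In a real deployment the policy layer would run on certified hardware separate from the model itself, so that a compromised or misaligned model cannot rewrite its own guardrails.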

In addition, the much stronger authentication and integrity offered by Seevik Net client devices would (1) enable much stronger watermarking of human- and AI-generated content and (2) ensure that a specific human, rather than an AI-powered chatbot, is on the other side of a conversation.
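The core idea behind such device-backed attestation is that a secret held in the device's hardware signs outgoing content, so a verifier can check that the content came from that specific device - and hence, presumably, from its human owner. Below is a minimal sketch using a symmetric HMAC; the key, message, and function names are all hypothetical, and Seevik Net's actual protocol is not described in the source.

```python
# Sketch of device-backed content attestation (illustrative only).
import hashlib
import hmac

DEVICE_KEY = b"secret-provisioned-into-device-hw"   # hypothetical device secret

def sign_content(content: str, key: bytes = DEVICE_KEY) -> str:
    """Produce an HMAC tag binding the content to the device key."""
    return hmac.new(key, content.encode(), hashlib.sha256).hexdigest()

def verify_content(content: str, tag: str, key: bytes = DEVICE_KEY) -> bool:
    """Check the tag; a mismatch means tampering or a different origin."""
    return hmac.compare_digest(sign_content(content, key), tag)

msg = "I confirm this message was written by me."
tag = sign_content(msg)
print(verify_content(msg, tag))        # True: content is authentic
print(verify_content(msg + "!", tag))  # False: content was altered
```

A production system would instead use asymmetric signatures (a hardware-protected private key with a publicly verifiable certificate), so that anyone can verify origin without sharing the device's secret.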

Our work so far on AI security, privacy, and safety

This vision has been an integral part of our research and technological plans since our first R&D initiatives in 2015-2016, and of the 4 Challenges for Freedom and Safety in Cyberspace at the core of our Free and Safe in Cyberspace conference series, held in 10 editions across 3 continents, with top IT and AI speakers, since its first edition in 2015.

In November 2016, we were invited as the main keynote speakers of the Trustless Computing Initiative & the Future of AI Symposium, held in our honor at the Symbolic Program of Stanford University, to elaborate on how Trustless Computing could contribute to a positive future for humanity and AI.

In 2019, we established in Geneva a startup spin-off, TRUSTLESS.AI Sàrl, focused on creating such endpoints - initially for sensitive mobile human computing - as our startup "beachhead".

Since then, we have published several blog posts on the IT security research needed for AI and artificial superintelligence (2019 post), on the need for much more resilient and trustless IT systems and governance structures (2019 post) to make AI secure and aligned enough, and on how those very stringent requirements can promote economic development rather than constrain it (2019 post).

Through June 2023, we'll be holding joint and bilateral meetings in Geneva and via Zoom, as part of the 11th Edition of our Free and Safe in Cyberspace conference, with over 12 nation-states, IGOs, and neutral INGOs interested in becoming governance or co-founder partners of the Trustless Computing Certification Body and the Seevik Net Initiative.

Join us to safeguard our beautiful world, realize a utopia and stave off dystopias!

Rufo Guerreschi