Our Vision 2050

Version of May 3rd, 2023

(Best if read after our Mission and Vision 2030)

Whatever issues prove central to this decade of multiple and intersecting crises, the following decades will most likely be all about how, and by whom, ever more powerful IT and AI innovations will be shaped and governed, and how those will succeed or fail in safeguarding or increasing the wellbeing and safety of humans and other “human-like” conscious beings.

A technical and governance challenge

Two factors will define the quality of our coexistence with such systems.

First, the nature of the technical and socio-technical innovations in designing and managing the most advanced IT and AI systems. It is very hard to foresee what those will be: can artificial consciousness be engineered? What control and safety innovations will be conceived to ensure such systems benefit humanity?

Second, and most importantly, the governance and ownership models that will shape or control the R&D, operation, and deployment of those innovations, and the coordination dynamics and mechanisms among them, if any.

Will democratic inter-governmental institutions have a role, or will a few large private corporations and two superpowers call the shots?

In the 2014 words of the physicist Stephen Hawking, “whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Will we be led by a few masters and their IT and AI?

Currently, such control rests in the hands of a small group of global masters: a handful of corporations, tech billionaires, and national superpowers, driven by dubious wisdom and shaky coordination, each pursuing personal visions of the future in a quest for power and wealth.

We’ve already seen this in the domain of human communication. Letting the IT and AI that define our social media feeds be controlled - through ownership, regulation, and lack of regulation - by a few profit-maximizing corporations and the narrow interests of a few national security agencies has increased disinformation, authoritarianism, and division within and among nations, raising the risks of war and civil war.

Those global masters are accruing ever greater global power and wealth via a fast-widening superiority in information, propaganda, and hacking, while engaging in a winner-take-all race for AI dominance. The logic of such a race forces them to sacrifice concerns about AI safety, control, and "human alignment" in pursuit of the all-important goal of achieving AGI before the others do.

As with other catastrophic threats to humanity, AI is a global problem, so it can only be managed in the interest of humanity through much deeper and more structural global cooperation and coordination among nations and citizens. Only such cooperation can reliably enforce AI safety commitments and standards. Only such cooperation can safely exercise the unprecedented global oversight needed to manage the risk of increasingly accessible and destructive AI-powered technologies.

The global governance we need to handle AI

We’ll need to repeat, at the global level, the federal constituent processes that led to the creation of the federal republics of the United States, Germany, and Switzerland. Those were created neither by imposition on all, nor by the unanimity-based confederal processes that led to the EU and the UN, where every nation, or a few, holds a veto. Initially, just three or ten states led the way, joined later by more, and then by most or all states sharing certain geographical and cultural identities.

Yet, such a global concentration of power is itself a primary catastrophic and existential threat, because it can increase the already high risk of entrenching a "durable form of inhumane global governance." So, it is all-important that the constituent processes leading to new multi-national federal institutions be made reliably democratic, competent, and resilient, to maximize their chance of producing adequate institutions and of preventing a world war in the process.

The stakes are enormous, and moderate outcomes are very unlikely. If we fail, we’ll continue along the current patterns of control, heading straight toward dystopia and increased catastrophic and existential risks. If we succeed - by wresting control of the AI capability explosion and of IT security into the hands of proper human institutions - we could progressively realize previously unimaginable levels of human well-being. We could eradicate poverty, scarcity, and unpleasant work; radically improve our health and safety via all kinds of AI-powered personal assistants; and significantly advance freedom and democracy, via IT and AI that empower us instead of enslaving us.

A digital platform for effective global governance

Key to that will be IT that enables fair and effective global dialogue and decision-making, while also enabling at once civil freedom and the intense levels of surveillance needed to prevent dangerous, irresponsible abuse of AI and other technologies. Our Trustless Computing Certification Body and Seevik Net initiative is on its way to building such governance and IT infrastructure, by leveraging only proven, battle-tested technical and organizational methods. It will initially serve elected officials and diplomats, and later the communication and social networking of all.

Likely Scenarios after reaching Artificial General Intelligence

According to a (now dated) survey of AI experts, sometime between the next decade and the end of the century we’ll likely see AIs that surpass the capabilities and intelligence of humans in nearly all activities, including AI innovation itself (i.e., artificial general intelligence, or AGI).

Such AGIs will likely enter a runaway series of self-improvement cycles, causing a sort of "explosion" in intelligence at an incomprehensible rate (or superintelligence explosion). 

The resulting capabilities will notably include persuading and manipulating humans and accelerating innovation in all scientific fields (the so-called technological singularity).

This explosion will create one or more artificial superintelligences, or ASIs, which will likely eventually merge into one. This will be the most significant historical event since the origin of biological life on Earth, with extreme and unpredictable consequences.

The governance structures around AI - or the “who controls it,” as Hawking put it - are also the key factor influencing its overall long-term impact. In particular, they influence: (1) whether any human control will be retained, and the quality of such control; and (2) whether AI’s impact will be beneficial or harmful if control fails, and whether anything can be done to influence that.

Given the dangers, should we try to prevent AGIs? Can we prevent AGIs?

Given the wild unpredictability of ASI, can we reliably prevent anyone from developing ASI, or delay its development until we feel we are ready?

As with nuclear weapons and bio-weapons, it will be impossible to ensure a complete ban on the research and stockpiling of advanced AIs, AGIs, or ASIs, because we cannot assure their total elimination. Some capabilities and stockpiles will need to be maintained, and sufficiently mitigating irresponsible AI development or use will have to be managed through global oversight and coordination, similar to that exercised by the IAEA for nuclear weapons and the OPCW for chemical weapons - but significantly more powerful.

Yet, all that may not be sufficient. The proposal we consider most likely to be effective is the “AI Nanny,” put forward by renowned AI researcher Ben Goertzel in 2011, which argued that a trustworthy global democratic institution or coalition, involving the greatest powers, should create an AI with “mildly superhuman intelligence and surveillance powers” to “protect the human race from existential risks like nanotechnology and to delay the development of other (unfriendly) artificial intelligence(s) until and unless the safety issues are solved.”

Given that such an AI Nanny would entail a large number of technical risks, a better solution might be for such a world coalition or government to build something in between a democratic, multi-national version of the IT/AI infrastructure the NSA is likely already building and a full AI Nanny. Splitting the risk between political and technical infrastructures that we can better understand would make the overall complexity more manageable, while still allowing us to leverage some advanced AI for the security and well-being of humans.

Can we retain beneficial human control over AI?

If ASI remains under human control, its impact on humanity will depend primarily on the quality of the decision-making - or governance - of the corporation, nation, and/or alliance of countries that controls it, and on their coordination mechanisms, if any. These could end up in the hands of a single ultra-rich person, or a small number of them, possibly with life extension.

If ASI escapes human control, as Roman Yampolskiy and many others argue it will, the result will be a runaway AI and, eventually, a likely AI Takeover. We’ll probably never know whether an ASI develops actual intent or consciousness, or merely simulates them. Regardless, only its behavior will matter, and that behavior may be beneficial or harmful to humanity.

The prospect that humans will retain control of ASI is improbable. But then again, there are scenarios of uncontrolled ASI that may be more beneficial to the wider interests of humanity, and even to its survival, than some scenarios of retained human control. It is hard to assess the probabilities of each scenario. We should certainly attempt to retain human control, but not at all costs.

A Takeover ASI could be harmful or fatal for humanity, due to the likely inscrutable unpredictability of an ASI’s self-evolutionary dynamics, or for other reasons, such as competition over limited Earth resources. No malevolence is needed on the AI’s part: valueless goal optimization, self-evolutionary dynamics, and self-preservation suffice.

A Takeover ASI could instead be beneficial for humanity. It may autonomously and effectively pursue the well-being and the best aspirations of most humans, present and future - likely along with those of new forms of digital sentient beings - for some of the same reasons we value the life and well-being of other animals. Such a beneficial ASI could reliably and sustainably promote human well-being, flourishing, and growth, realizing a sort of “Artificial General Wisdom.” It could protect humanity against catastrophes, existential threats, and war, including war among AIs and between AIs and humans (as envisioned in the movie Colossus: The Forbin Project).

How can we positively influence our future coexistence with AI?

It will be very tough to infer which human actions, if any, will meaningfully and positively influence the nature of an ASI or a Takeover ASI, in the short or long term.

But, given its vast importance, we should make it one of our highest priorities to try. 

On first analysis, the most directly influential factors are the nature and quality of the innovations in technical design, operation, training, configuration, and human control that constitute such an ASI, before it arrives, if ever, at runaway AI and then at a Takeover ASI.

Yet those factors are, in turn, wholly dependent on the quality of the governance structures behind the creation and adoption of such innovations or techniques rather than others, and on the extent to which those structures can be expected to maximize the interests of the largest majority of humans, to be competent, and to be resilient against technical and organizational failures and attempts at compromise.

The effectiveness of such ASI governance structures will mainly rely on: (1) the quality of their statutes, with checks and balances to ensure accountability, transparency, and competency; (2) the methods for selecting key staff in architecture, research, and operations, and the incentives and disincentives bearing on them; and (3) the trustworthiness of the technical, organizational, and socio-technical systems critically relied upon by the ASI and its governance structures, against failure and against undue external hacking, compromise, or influence.

We envision that the socio-technical security paradigms and governance models pioneered by our Trustless Computing Certification Body and its Trustless Computing Paradigms - and the unique levels of trustworthiness and accountability of TCCB-compliant communications systems, endpoints, and later AIs and ASI root-of-trust systems - can play a key role in maximizing the beneficial qualities of advanced AIs and of their human governance structures, if any remain, and thereby increase the chance of a beneficial ASI or an “AI Nanny.”

Ensuring much higher security AND democratic control over the root-of-trust systems of critical AIs and ASIs - such as their value systems, security monitoring, and firmware upgrades - may be the most we can do to influence their high-level direction and motivation in the short and long term. In a word, their “conscience,” their “soul,” regardless of whether it turns out to be “simulated” or “real.”
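To make the idea of democratic control over a root-of-trust slightly more concrete, here is a minimal, purely illustrative sketch in Python. It assumes a hypothetical scheme in which a firmware upgrade is accepted only when a quorum of independent, democratically appointed “guardians” has signed it - a standard k-of-n threshold-approval technique. All names and parameters below are our own illustrative assumptions, not TCCB specifications.

```python
# Purely illustrative: a firmware upgrade to an AI's root-of-trust is
# accepted only if a quorum (k of n) of independent, democratically
# appointed "guardians" has signed it. Hypothetical names/parameters,
# not TCCB specifications. Requires the 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

QUORUM = 3  # assumed policy: 3 of 5 guardians must approve an upgrade


def count_valid_approvals(firmware: bytes,
                          signatures: list[bytes],
                          guardian_keys: list[Ed25519PublicKey]) -> int:
    """Count how many distinct guardians validly signed the firmware."""
    approvals = 0
    for key in guardian_keys:
        for sig in signatures:
            try:
                key.verify(sig, firmware)  # raises InvalidSignature on mismatch
                approvals += 1
                break  # at most one vote per guardian
            except InvalidSignature:
                continue
    return approvals


def accept_firmware(firmware: bytes,
                    signatures: list[bytes],
                    guardian_keys: list[Ed25519PublicKey]) -> bool:
    """Accept the upgrade only if the guardian quorum is met."""
    return count_valid_approvals(firmware, signatures, guardian_keys) >= QUORUM


# Usage: five hypothetical guardians, of which three sign the new image.
guardians = [Ed25519PrivateKey.generate() for _ in range(5)]
public_keys = [g.public_key() for g in guardians]
image = b"root-of-trust firmware v2"
signatures = [g.sign(image) for g in guardians[:3]]
assert accept_firmware(image, signatures, public_keys)          # quorum met
assert not accept_firmware(image, signatures[:2], public_keys)  # quorum not met
```

The point of a quorum, rather than a single vendor key, is that no single actor - corporation, agency, or individual - can unilaterally alter the system’s “conscience”; in any real deployment, this role would be played by hardware-anchored keys and far richer oversight procedures.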

Conclusions

In conclusion, for the next decade, we envision playing a role in sustainably reducing catastrophic risks by enabling stronger global cooperation. In the following decades, we envision playing a role in turning AI from an existential and catastrophic threat into the main instrument for protecting us from other such threats and for realizing humanity’s fullest potential - or honorably failing while collectively giving it our best shot.