Can OpenAI's "$7 trillion plan" become the seed of a global, federal, safe and democratic ecosystem, lab and safety agency for AI?

The reported $7 trillion plan by OpenAI to build a consortium of firms and states to globally boost and dominate the supply chain of AI chips and systems could be an extremely beneficial initiative. But that will be the case only if its control is credibly on its way to being "democratized to all of humanity" in a highly multilateral and participatory way - within months, not within the years pledged by Altman last March - and only if it is done very carefully. An initiative by a critical mass of NGOs and diverse states is needed to make that happen.



Last week, the Wall Street Journal reported on a plan by OpenAI to expand the "global infrastructure and supply chains for chips, energy and data centers" in support of the needs of AI and other industries, a project that "could require raising as much as $5 trillion to $7 trillion", quoting several unnamed sources and an OpenAI spokeswoman.

The plan seeks to aggregate funders, chip makers, power providers and governments into a consortium of sorts that would build "chip foundries that would then be run by existing chip makers." OpenAI would commit to being its client (along with many other firms and states?). Five days later, the report had not been denied by OpenAI and was largely confirmed by Altman in a tweet. So we should assume it to be largely true.

In a seemingly unconnected development, today at the World Government Summit in the UAE - a state mentioned in that article as a key interested funder of the plan - Altman was asked by the UAE Minister for AI what he would do if he were in the minister's shoes for a day.

He replied that he would host a one-day conference for the creation of an "IAEA for AI", an idea the interviewer accepted. A few minutes later, questioned about how to go about creating a wider global governance of AI, he said, "it is not up to us, but we have ideas".

$7 trillion sounds outlandish, but it makes sense when seen in context.

First, estimates of capital requirements for a Silicon Valley venture can well refer to foreseen successive "investment rounds" and include funding commitments tied to performance objectives. So the project could very well start out with a few hundred billion as "startup cash."

The semiconductor industry is expected to grow 17% to $600bn in 2024. If growth were to accelerate to 25% per year over the next 10 years, the industry would exceed $5 trillion.
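
As a rough sanity check on that figure, here is the back-of-the-envelope compounding behind it - a minimal sketch in Python, assuming the hypothetical constant 25% annual growth rate from the paragraph above:

```python
# Back-of-the-envelope check of the compounding claim above.
# Assumes the industry's expected 2024 size ($600bn) grows at a
# hypothetical constant 25% per year for 10 years.
base_2024_bn = 600    # expected 2024 industry size, in $bn
growth_rate = 0.25    # hypothetical annual growth rate
years = 10

projected_bn = base_2024_bn * (1 + growth_rate) ** years
print(f"Projected industry size after {years} years: ${projected_bn / 1000:.1f} trillion")
# -> Projected industry size after 10 years: $5.6 trillion
```

At a constant 25%, the industry would in fact slightly exceed $5.5 trillion by 2034, so the $5 trillion figure is, if anything, conservative.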

The valuation of NVIDIA, which holds an 80% share of the market for AI (training) chips, grew 233% in 2023 and a further 41% in January alone, reaching $1.75 trillion. It is further entrenching its position via fast AI-driven innovation, the lock-in power of its CUDA software, the internal training of its own state-of-the-art LLMs, and investments in its main clients.
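
Those two growth figures hang together arithmetically; here is a quick implied-valuation check - again a sketch, using only the numbers quoted above:

```python
# Consistency check on the NVIDIA figures quoted above: a 233% gain
# over 2023, followed by a 41% gain in January, ending near $1.75tn.
end_value_tn = 1.75          # valuation after January, in $tn
jan_multiplier = 1.41        # +41% in January
y2023_multiplier = 1 + 2.33  # +233% over 2023

implied_start_2023_bn = end_value_tn / (jan_multiplier * y2023_multiplier) * 1000
print(f"Implied valuation at the start of 2023: ~${implied_start_2023_bn:.0f}bn")
# -> ~$373bn, in line with NVIDIA's actual market capitalization of
#    roughly $360bn in early January 2023
```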

Its CEO Jensen Huang stated last week that he has no concerns about AI safety risks, not even about the wide availability of open-source advanced AI technologies, which he advocates without restraint, as Meta does. Its main investors are a handful of giant private financial institutions, largely the same ones that dominate Meta and other Big Tech firms.

The "scaling laws" of LLMs have proven consistent, with a constant increase of about 10x in the computing power dedicated to training leading LLM models by several companies over the last 9 years, all based on mostly similar transformer technology since 2017. Meta announced it would seek to outspend its competitors by buying $10 billion in AI GPUs to train its LLM and pursue "AGI." 

Hence, the race for AI dominance has largely shifted to securing the greatest available hardware capabilities. NVIDIA is in a credible position to "corner the market" of AI - and of the wider economy - if LLMs remain the leading advanced AI technique for a few more years. Hence its skyrocketing market valuation.

OpenAI's $7 Trillion Plan would counter this concerning status quo by joining together a few key firms and states to dominate AI, accruing into a single consortium of sorts a large majority of AI chip designers, foundries and investing entities - such as TSMC, ASML and Samsung, together with the United States, Taiwan, South Korea, the Netherlands and oil-rich Arab states like the UAE - while seemingly excluding the likes of NVIDIA and China. Yoshua Bengio suggested a similar decentralized consortium approach, for a network of coordinated leading AI labs, last September.

While it is very possible that new hardware and/or software architectures will arise, perhaps even soon, that prove far more efficient at increasing the capabilities of the most advanced general-purpose AIs, none has emerged yet, even after a huge increase in research investment in recent years. It is therefore plausible that none will emerge in the next 5 or possibly even 10 years. Moreover, the proposed consortium would likely invest heavily in such new architectures itself, ready to adapt to their emergence.

Over the last year, nearly all major governments have initiated projects to build their own sovereign AI capabilities by creating their own AI hardware infrastructures and/or fostering and subsidizing private national firms.

Hence, OpenAI's $7 Trillion Plan makes sense economically: such a sum is what would be needed over several years to seriously take on the hyper-fast-growing NVIDIA, and many states would potentially be willing to fund it via their sovereign funds to ensure future access to state-of-the-art AI capabilities - and sovereign control over them - without hopelessly trying to build such capabilities on their own.

Assuming it makes economic sense, would it be good for humanity?

The plan's impact on the environment could be very negative. Yet, if a large part of those funds were dedicated to clean energy research, like nuclear fusion, and that research paid off in time - a big question mark - and powerful AI fast-tracked innovations to counter climate change, the impact on the climate could be much alleviated or even net positive.

Competition for rare earth materials would likely become an even bigger geopolitical issue, one that can only be mitigated or solved by sweeping and appropriate treaties, while the massive water use required would demand substantial innovation in, and incentives for, water recycling.

But its potential impact on humanity goes way beyond the environment.

The "AI pause" and many other warnings by a majority of AI researchers, leading labs and scientists went completely unheard of for practical purposes. While very partial, late and fragmented initiatives have been tabled, the reality on the ground is that a few leading AI labs and superpowers are in a break-neck winner-take-all race to achieve the highest capabilities before the other does, inevitably sacrificing safety precautions.

It has become clear to many that only unprecedented global coordination can prevent the two biggest risks - extreme concentration of power and wealth, on the one hand, and immense human safety risks due to misuse, accidents and loss of control, on the other - even though such coordination poses risks of its own.

As it stands, the $7 Trillion Plan would not be good news for those two immense risks of AI. It would embolden those left out of it, such as China and the shareholders of NVIDIA, to redouble their efforts in the break-neck race. And it would make it even clearer to the remaining 190-plus states and their citizens that they will be left out, hoping for steady and sizable "hand-outs" from their future digital masters.

Their only hope would be that such an entrenched global AI oligopoly proves stable enough for safety to be jointly tackled - while they have no say in shaping the future of humanity and AI. But those hopes would struggle to find a rational basis: the many states and people left out would be very hesitant to abide by global safety rules, set unilaterally, that they perceive as a means to entrench power in the hands of a few.

Yet, in the context of OpenAI's wider stated strategy, the plan could be great news for all - if its planned governance transition timelines are adjusted to today's reality.

Let's start by acknowledging that Sam Altman has gained overwhelming de-facto control over OpenAI since the firm's governance crisis last November, when he was fired and then reinstated after a letter in which 95% of its employees threatened resignation.

Having read and watched everything published by Sam Altman over the last year, I have found at least six video interview segments in which he firmly pledges that governance power over OpenAI, and over advanced AI in general, should not accrue to himself (implicitly, jointly with his host country), nor to the current OpenAI board, nor (implicitly) to the few firms or states joining the $7 Trillion Plan, but should instead be "democratized to all of humanity." But later.

Largely overlooked by mainstream media, last March Sam Altman called for a global constituent assembly, akin to the U.S. Constitutional Convention of 1787, to establish a federal intergovernmental organization to manage AI in a decentralized and participatory way, according to the subsidiarity principle.

Far from an extemporaneous statement, Altman later repeated that power and control over OpenAI and advanced AI should eventually be distributed among all citizens of the world. In a later video interview, he stated that "we shouldn't trust" OpenAI and its board if, "years down the road," they "will not have sort of figured out how to start" transferring their power to "all of humanity." He repeated much the same after OpenAI's governance crisis, and again this January he insisted that it should be all of humanity shaping OpenAI and AI.

Given how eager he has appeared to push ahead with "AGI" and superintelligence, while loudly sounding safety alarms, would his company ever really stop building AGI if the public told it to? In a November Time interview he replied unequivocally, "We'd respect that" - which is comforting.

So, if Altman is true to his words - as he has largely been to date - he'll "democratize to all of humanity" the $7 Trillion Plan, via a global federal constituent assembly and via the offline and online citizen participation techniques that OpenAI is reviewing and developing with partners, albeit "years down the road."

If that happens properly and in time, it could be very good - indeed crucial - for humanity: AI would be democratized to all of humanity in a reliably and durably beneficial way, while avoiding clashes between his consortium and the states, groups of states, non-state entities, and their AIs or AGIs, that may be left out or may decide to stay out.

With the hyper-acceleration of AI innovation and investment since last March - when Altman first spoke of "years down the road" for starting OpenAI's planned governance transition - and with the $7 Trillion Plan now launched, that time frame cries out to be shortened if the transition is to be credible or successful.

Building a global governance of AI that is participatory, democratic and expert will take time, and will require highly effective, high-bandwidth international negotiations to do properly.

The best guarantee that such global governance will have those characteristics is to ensure that the constituent process leading up to it is imbued with them as well, for instance via a well-designed international constituent assembly. While it is true that the post-WW2 democracies of Japan and Germany were externally designed by the war's winners, and succeeded, those examples do not apply at the global level, where no external designer exists.

Lenin thought a vanguard intelligentsia should conquer power first and only then establish real democracy (i.e. the "dictatorship of the proletariat"). It was attempted in many countries, and it never worked.

Martin Buber explained why with masterly poetry: "One cannot in the nature of things expect a little tree that has been turned into a club to put forth leaves."

The time for Altman to "pass on the baton" of control of OpenAI (and of advanced AI in general, given the scope of his $7 Trillion Plan) may be coming ever nearer, considering the hyper-acceleration of recent months in investment and technical progress.

If Altman decides to start "passing the baton" sooner, how could he proceed?

While ingenious and very well-intentioned, OpenAI's group structure was - and still is, as far as we know - highly novel and untested. It resulted in the risk of ousting Altman and killing the company, and it ended up producing a de-facto "one man" governance.

While some of the novel online democratic participation techniques and tools introduced in recent decades have been successful, many others have had adverse effects on democracy, such as online voting, voting machines, and many online participatory experiences.

One of the key challenges will be to gather the right group of entities and experts that can credibly and legitimately advance plans for the constituent processes of such global governance mechanisms and institutions.

Just as OpenAI's research on Democratic Inputs to AI relied on battle-tested and state-of-the-art participatory methods, it should also rely on (and advocate for) the science of battle-tested and state-of-the-art constituent assembly processes developed in the nearly 240 years since the 1787 US Constitutional Convention that Altman himself mentioned.

Being based in the US, OpenAI would be highly pressured to align with US national security and competition policies that aggressively seek to prevent China, and states working with China on AI, from having access to technologies related to advanced AI chips. OpenAI would have to align with the Guidelines for Secure AI System Development issued by the US and UK national security agencies, as well as with the US AI Safety Institute Consortium, which is expanding aggressively abroad and evidently overlaps with OpenAI's just-proposed consortium.

But then again, just as the US announced last week that it wants to work with China on AI safety, it is likely - or at least possible - that it will soon come to realize it needs to work with the rest of the world in a highly multilateral way, given the specific nature of AI and how hard it may prove to limit its dangerous proliferation.

How can OpenAI, a US-based firm, pursue a plan to "democratize AI to all of humanity" when - in the current US geopolitical rhetoric - its host country sees maximizing its unilateral control or hegemony over AI as critical to its economic competitiveness and national security?

Well, it can't, directly. That's why Altman is calling on the UAE, and implicitly other states, to move ahead and do so. But it wouldn't be fair for one state to lead this alone, as there would be the risk, and the perceived risk, that it may try to "put its hat" on the effort, as the UK largely did with its UK AI Safety Summit last November.

Yes, we need a first mover, like the UAE stated today it wants to be. But ultimately we need a coalition of a critical mass of globally diverse states and NGOs to drive it forward.

We are doing just that with our Harnessing AI Risk Initiative, starting with a Harnessing AI Risk Summit in Geneva in June 2024. The initiative, open to all states, is aggregating a critical mass of globally diverse states and non-state entities to build an international AI Safety Agency and a partly-decentralized Global Public Interest AI Lab that plans to secure its own supply chain.

Such an initiative could be the means for Altman to realize his clearly stated vision, and for all of humanity to benefit enormously from this incredibly powerful and consequential technology.

Should Altman not follow through on his stated vision for the governance of AI, or not do so in a short enough time frame, the success of such an initiative would still constitute a truly multilateral, neutral, large and credible alternative, open to all states to join on equal terms at any point in the future.

It would radically increase the leverage of participating states and firms in relation to dominant state and non-state entities, consortia and international AI governance initiatives; it would seek to merge with large initiatives, act as the most neutral platform for their convergence, and compromise where it must.

Rufo Guerreschi, President of the Trustless Computing Association
