Global Public Interest AI Lab

At A Glance

The Global Public Interest AI Lab will be a $15+ billion, open, partly-decentralized, democratically-governed joint venture of states and suitable tech firms that aims to achieve and sustain solid global leadership or co-leadership in human-controllable AI capability, technical alignment research and AI safety measures.

The Lab is one of three agencies of a new intergovernmental organization being built by the Harnessing AI Risk Initiative, a venture to catalyze a critical mass of globally-diverse states in a global constituent process to build a new democratic IGO and a joint venture to build the most capable safe AI and reliably ban unsafe ones - open to all states and firms to join on equal terms.

  • The Lab will pool the capabilities and resources of member states and private partners, and distribute dividends and control among member states and directly to their citizens, all the while stimulating and safeguarding private initiative for innovation and oversight.

  • The Lab will be primarily funded via project finance, buttressed by pre-licensing and pre-commercial procurement from participating states and client firms.

  • The Lab will seek to achieve and sustain a resilient “mutual dependency” in its wider supply chain vis-a-vis superpowers and future public-private consortia, through joint investments, diplomacy, trade relations and strategic industrial assets of participant states - while remaining open to merge with them on equal terms, as detailed in our recent article on The Yuan.

Financial Viability and the Project Finance Model

The Lab will generate revenue from governments, firms and citizens via licensing of enabling back-end services and IP, leasing of infrastructure, direct services, and issuance of compliance certifications. 

Given that the proven scalability, value-added and profit potential of current open-source LLM technologies - together with the possibility of extensive pre-commercial procurement contracts with states - could buttress its financial viability, the initial funding could primarily follow the project finance model, via sovereign and pension funds, intergovernmental financial institutions such as the EIB and AIB, sovereign private equity and private international finance.

Undue influence of private funding sources on the Lab's governance will be limited via various mechanisms, including non-voting shares.

Precedents and Model

The initiative could take inspiration from the current governance of CERN, a joint venture for nuclear research capability-building that was started in 1954 by European states and only later opened to non-European ones, with a current yearly budget of about $1.2 billion. The $20 billion international consortium ITER for nuclear fusion energy is also an inspiration.

Size of Initial Funding

Since the cost of state-of-the-art LLM "training runs" is expected to grow by 500-1000% per year, and many top US AI labs have announced billion-dollar LLM training runs for next year, the Lab would need an initial endowment of at least $15 billion to have a solid chance of achieving its capability and safety goals, and then financial self-sustenance within 3-4 years. If such an amount seems high, consider that it would likely increase by about 5-10 times for every year this initiative is delayed.
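
As a rough, purely illustrative sketch of this arithmetic - the $15 billion baseline and the 5-10x yearly cost multiplier are assumptions taken from the estimates above, not forecasts - the compounding cost of delay can be laid out as follows:

```python
# Back-of-the-envelope sketch (not a forecast): how the required initial
# endowment compounds with delay, under the assumptions stated above.

BASELINE_ENDOWMENT_B = 15.0           # assumed endowment needed today, in $ billions
YEARLY_COST_MULTIPLIER = (5.0, 10.0)  # assumed 5-10x yearly cost growth (i.e. 500-1000%)

def required_endowment(years_of_delay: int) -> tuple[float, float]:
    """Return a (low, high) estimate in $ billions after a given delay."""
    low = BASELINE_ENDOWMENT_B * YEARLY_COST_MULTIPLIER[0] ** years_of_delay
    high = BASELINE_ENDOWMENT_B * YEARLY_COST_MULTIPLIER[1] ** years_of_delay
    return low, high

for delay in range(3):
    low, high = required_endowment(delay)
    print(f"{delay} year(s) of delay: roughly ${low:,.0f}B to ${high:,.0f}B")
```

Under these assumptions, even a single year of delay would push the required endowment into roughly the $75-150 billion range, and two years of delay well beyond that.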

Supply-Chain Viability and Control

Acquiring and maintaining access to the specialized AI chips needed to efficiently run leading-edge LLM training runs will be challenging, given the foreseen steep increase in global demand and export controls.

This risk can likely be sufficiently reduced via joint diplomatic dialogue appealing to the open and democratic nature of the initiative; by attracting participating states that host firms owning suitable AI chip designs; and possibly by pursuing the Lab's own AI chip designs and chip manufacturing capabilities, and investing in new, safer and more powerful AI software and hardware architectures beyond large language models.

Ensuring sufficient energy sources, suitable data centers, and a resilient network architecture among the member states would require timely and coordinated action for the short term and careful planning for the long term.

Hence, the Lab will seek to achieve and sustain a resilient “mutual dependency” in its wider supply chain vis-a-vis superpowers and future public-private consortia, through joint investments, diplomacy, trade relations and strategic industrial assets of participant states - while remaining open to merge with them on equal terms, as detailed in our recent article on The Yuan.

Talent Attraction Feasibility

Key to achieving and retaining a decisive superiority in advanced AI capability and safety - especially while AI superpowers and their firms have not yet joined - is the ability to attract and retain top AI talent and experts. Talent attraction in AI is driven by compensation, social recognition and mission alignment; the Lab would also need to ensure very high security and confidentiality.

Staff will be paid at their current global market value, and their social importance will be highlighted. Member states will be mandated to support top-level recruitment and to enact laws that ensure that knowledge gained is not leaked. Staff selection and oversight procedures will exceed in sophistication those of the most critical nuclear and bio-lab facilities.

The unique mission and democratic nature of the Lab would likely be perceived by most top global AI researchers, even in non-member states, as ethically superior to others, akin to how OpenAI originally, and Meta more recently, have attracted top talent to work with them, or for them, via claims of their "open-source" ethos.

Just as OpenAI attracted top talent from DeepMind thanks to a mission and approach perceived as superior, and top talent from OpenAI went on to create Anthropic for the same reasons, the Lab should be able to attract top talent as the next "most ethical" AI project. Substantial risks of authoritarian political shifts in some AI superpowers, as warned (1.5 min video clip) by Yoshua Bengio, could entice top talent to join the Global AI Lab to avoid their work becoming instrumental to an authoritarian regime.

Public-Private Partnership Model

Participant AI labs would join as innovation and go-to-market partners, in a joint-venture or consortium controlled by the participant states. 

They will contribute their skills, workforce and part of their IP in such a way as to advance both their mission to benefit humanity and their stock valuations, while retaining their agency to innovate at the base and application layers, within safety bounds:

  • As innovation partners and IP providers, they would be compensated via revenue share, secured via long-term pre-licensing and pre-commercial procurement agreements from participating states and firms.

  • As go-to-market partners, they would gain permanent access to the core AI/AGI capabilities, infrastructure, services and IP developed by the Lab.

    • These will almost certainly far outcompete all others in capabilities and safety, and be unique in the actual and perceived trustworthiness of their safety and accountability.

    • They would maintain the freedom to innovate at both the base and application layers, and retain their ability to offer their services to states, firms and consumers, within some limits.

  • Participant AI labs' partnership terms will be designed so as to maximize the chances of a steady increase in their market valuation, in order to attract the participation of AI labs - such as Big Tech firms - that are governed by conventional US for-profit vehicles that legally mandate their CEOs to maximize shareholder value.

This setup will enable such labs to continue to innovate in capabilities and safety at the base and application layers but outside a “Wild West" race to the bottom among states and labs, advancing both mission and market valuation.

The Superintelligence Option

It is highly significant that nearly all leading US AI firms - while acknowledging the real and enormous human safety risks of AGI or superintelligence - have publicly committed to pressing on to build it, each asserting that their specific approach will maintain human control over these systems, and/or that their emergence is unlikely to be stopped. This raises the legitimate question of whether some of these AI labs might be comfortable with a significant risk of humanity losing control over AI, or even covertly rooting for it.

Their rationale is hinted at in recent interviews and publications. Some of them believe that it is too hard to stop all advanced private and state entities from pursuing it, and that they should therefore try to influence its nature as best they can, if at all possible. Perhaps they also consider it probable or plausible that an AI takeover could result in a good or even great outcome for humanity or for valuable future life forms.

Calls for a global AGI lab and global democratic governance by top US labs and NGOs

Participation could be extended to AI labs from states that may initially not be member states of the new organization, such as one or more AI superpowers. 

While governments have shown reluctance, leading AI labs - which stand to lose the most from stringent or global regulations - have been the most outspoken about both the "catastrophic safety risks" and the necessity for global governance. Some have even presented highly detailed proposals, while others have made explicit calls for democratic and participatory frameworks.

Perhaps due to their substantial global lead over their competitors in other states, several leading US AI labs have called for one or more of the following: (1) enforcing a global cap and ban on dangerous AI developments; (2) reducing the "race to the bottom on safety"; (3) ensuring more time, resources and coordination for tackling the technical alignment problem; and (4) solving the governance alignment problem via globally democratic governance.

There is a strong awareness of the catastrophic safety risks of a global AI arms race among half of the top US AI labs and among top Chinese and US AI scientists. The CEOs of DeepMind, OpenAI and Anthropic, and two of the three "godfathers of AI", signed, along with many others, a statement in May 2023 declaring: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

In a development largely overlooked by mainstream media, Sam Altman, the CEO of OpenAI, which developed ChatGPT, called last March for a global democratic constitutional convention similar to the 1787 U.S. Constitutional Convention. The aim would be to establish a global federal governance system for AI based on the principle of subsidiarity. He repeated his calls for highly participatory, federal and empowered global governance of AI in one interview, and then another. He even pledged to transform OpenAI's current governance structure into a globally democratic and participatory body, stating that OpenAI should not be trusted if its 7-person board fails to enact such a transfer of power soon.

On November 17th, 2023, Altman was fired by the OpenAI board. After 747 of its 770 employees requested his reinstatement as CEO, the board was forced to resign. While many read the events as a sign that OpenAI and Altman are beholden to profit motives and to pressure from Microsoft and investors, we believe they arguably resulted in much-increased de-facto power for Altman to shape OpenAI's future governance.

In a December 7th, 2023, audio interview with Trevor Noah, Altman acknowledged as much and restated those same intentions for the future governance of OpenAI, with an even more explicit call for a role for governments. Again, on December 13th, he reiterated in an interview (1-minute video clip) that people should not trust OpenAI unless its governance democratizes.

Similarly, Dario Amodei, the CEO of Anthropic, one of the top five AI firms in the world, has loudly warned of significant and near-term catastrophic risks, called for strong democratic global governance of AI, and suggested that Anthropic's planned controlling non-profit steering board should be absorbed or replaced at some point by a global democratic body.

OpenAI's Chief Scientist Ilya Sutskever stated in minutes 9:51-10:43 of this documentary: "If you have an arms race dynamic between multiple teams trying to build the AGI first, they will have less time to make sure that the AGI that they will build will care deeply for humans." ... "Given these kinds of concerns it will be important that AGI is somehow built as a cooperation between multiple countries. The future is going to be good for AI regardless. Would be nice if it were good for humans as well." Last October, he told an MIT Technology Review interviewer that his main professional priority had changed to figuring out "how to stop an artificial superintelligence from going rogue".

Anthropic's CEO Dario Amodei also suggested in a recent interview (from 1:46:07 to 1:49:00) that, to reduce safety risks and the ongoing arms race, advanced AI might have been better developed by large, open intergovernmental consortia, like those that have pooled tens of billions of dollars to build shared large telescopes or shared particle accelerators.

Last July 11th, Google DeepMind took a significant step by publishing a paper in collaboration with top AI scientists, introduced via a blog post, offering a very detailed "exploration" of the feasibility of creating four new IGOs, one of which is the Frontier AI Collaborative, an "international public-private partnership" to "develop and distribute cutting-edge AI systems, or to ensure such technologies are accessible to a broad international coalition." While the depth and scope of this paper deserve immense praise for elevating the level of discourse, it allocates a disproportionately large role to the U.S., the U.K. and leading AI companies, justifying this on the basis of their expertise, the need for quick action, and the urgency of certain risks.

Researchers at research and advocacy NGOs such as the Future of Humanity Institute, its spinoff the Centre for the Governance of AI, and the Future of Life Institute have published numerous papers exploring the feasibility, complexities, and historical precedents of ambitious global governance of AI and other dangerous technologies. However, none have yet produced detailed proposals for such governance and the processes leading up to it.

Two weeks before DeepMind's proposal, the Trustless Computing Association presented version 1.0 of this paper, the Harnessing AI Risk Proposal v.1, detailing a proposal for the establishment of three new IGOs for globally managed AI, including the Trustless Computing Certification Body and the Seevik Net Initiative for more trustworthy and widely trusted governance-support systems. It unveiled this proposal at a formal public event on June 28th at the UN in Geneva, organized by the Community of Democracies and attended by its 40 member states.

Other labs have been less inclined toward participatory global governance. Mustafa Suleyman, CEO of Inflection AI, has been extremely vocal about the need for some form of worldwide regulation. Still, he has de-emphasized the need for inclusivity, and described in an influential article with Ian Bremmer a key role for the US government and the top AI firms in designing and running such global institutions. While Microsoft's Chief Scientific Officer, Eric Horvitz, signed a statement on AI risks, its president, Brad Smith, stated that AI does not pose an existential threat, though he warned of grave risks to safety and called for more regulation, without specific reference to global regulation or new global institutions. Meanwhile, Meta - the parent company of Facebook, WhatsApp, and Instagram - has taken a very skeptical stance so far, with its Chief AI Scientist calling warnings of AI existential risk "preposterously ridiculous".

Hence, not only could the Lab attract many top AI talents based on its superior mission, but it could also attract close collaboration or full participation by some leading US AI labs and other states.

In addition, substantial risks of near-term authoritarian political shifts in AI superpowers, as warned (1.5 min video clip) by Yoshua Bengio, could further entice top US AI labs to "internationalize" their ventures to avoid the risk of falling largely or wholly under the control of an unreliable, undemocratic or authoritarian power in the near future. 

Opportunities

Find below detailed opportunities related to the Global Public Interest AI Lab and the Initiative for various entities: