The AI Act and Beyond: EU's Ambitions and Obstacles in the AI Race

Last Friday, the EU approved the AI Act, through a great effort by the three institutions of its co-legislative process and an exemplary demonstration of participatory transnational democratic negotiation.

The EU institutions did all they could, considering the EU's lack of a mandate for national security matters and its structural over-reliance on unanimity in decision-making, which leaves it highly exposed to undue internal and external pressures.

But all they could do falls far short of what is needed, and of what EU member states could still pursue.

The negotiations on the EU AI Act could not even discuss or acknowledge the primary challenges posed by AI: the immense human safety risks arising from misuse, accidents and "loss of human control," and the extreme concentration of power and wealth in a handful of globally unaccountable states and firms. Nor could they squarely confront the enormous risks to sovereignty and civil rights, or ensure that the EU stands a fair chance to lead or compete globally and to benefit economically.

In fact, given the general-purpose and dual-use nature of advanced AIs, the entities that will set, update and enforce global safety standards - on the basis of national security - will accrue lock-in advantages in the most capable "safe" AI systems. As in the domain of encrypted communications, and unlike that of nuclear technology, the dominance and regulation of civilian AI use cannot be separated from those of national security.

The top four US AI labs - Google DeepMind, OpenAI, Meta and Anthropic - have stated that they are pursuing so-called AGI or superintelligence, each assuring us that its specific approach to safety and control will (likely) be safe, while the others' would (more likely) lead to great human safety risks. And while the US government implicitly supports those initiatives, it is implausible that safety and security oversight of those activities would be entrusted to anyone other than US national security agencies.

The same three EU member states that shifted the AI Act discussion on the most advanced AIs (i.e., "foundation models") from "how to regulate" them to "whether to regulate" them at all were the only ones that - together with "embattled" Estonia, Poland and Norway - joined another twelve US allies in underwriting the Guidelines for Secure AI System Development, published two weeks ago by the US and UK national security agencies, NSA and GCHQ.

This situation, combined with the EU's inability to foster globally competitive or sovereign capabilities in AI labs, AI chips and AI chip manufacturing - as it failed to do for cloud and social media - makes it very unlikely that the AI Act, together with the EuroHPC and AI4EU initiatives, will be sufficient to realize the "launch pad for EU start-ups and researchers to lead the global AI race" proclaimed by EU Commissioner Thierry Breton.

Meanwhile, leading EU states have failed to create truly competitive, home-grown global industrial capabilities for AI - as they once did for civil aviation with Airbus or for nuclear research with CERN - while France and Germany press ahead with radically under-funded initiatives that rely heavily on US capital, shareholders and critical suppliers.

How do we exit this very concerning situation? A critical mass of EU member states could join a critical mass of non-EU states to build both an International AI Safety Agency and a Global Public Benefit AI Lab, open to all states and AI labs - including both AI superpowers - as proposed by our Harnessing AI Risk Initiative.

By aggregating resources and markets - outside the constraints of unanimous decision-making and the exclusion of national security matters - participating states would be able to truly promote and safeguard the safety, sovereignty, and economic well-being of their own citizens and of world citizens.

The initiative, with a cost of at least $20 billion, could be self-sustaining: funded via project finance and backed by participating states' pre-licensing and pre-commercial procurement contracts.

This way, pioneering EU member states, and eventually the EU as a whole, could play a key role in creating the global institutions we need to stop the mad, break-neck AI arms race among a few states and firms - a race with a great chance of leading humanity to catastrophe or global dictatorship - and to realize instead the incredible positive potential of AI to usher humanity into an era of unprecedented abundance, health, safety, and well-being.

The initiative positions itself as a platform for the convergence and widening of the governance initiatives put forth by the AI superpowers, so as to realize the huge win-win potential of AI to benefit all enormously - if we can build a shared, participatory global governance that turns today's destructive AI arms races among competing states and AI labs into healthy co-opetition.

Rufo Guerreschi