From a Mad Race to Superintelligence to a Global Treaty for Safe AI for All

Earlier this week, OpenAI, Oracle, SoftBank, and the United Arab Emirates announced, on stage with President Trump beside them, a new venture that will invest $500 billion to build Superintelligence.

This is not a one-off statement by any means. Recent statements by Elon Musk, by OpenAI's CEO Sam Altman, and by Ilya Sutskever's Safe Superintelligence have also made clear that they are aiming for Superintelligence. NVIDIA's CEO aims to do essentially the same, using AI ever more extensively to improve its AIs, stating that he seeks to "turn NVIDIA into one giant AI", while Microsoft and Meta are less explicit participants in the same race. This year alone, around $200 billion has been committed to expanding and advancing AI capabilities towards ever more powerful AGI and Superintelligence.

Unlike the term Artificial General Intelligence (or AGI), whose legitimate definitions range from AIs we already have today to AIs that match or exceed all human cognitive abilities - a threshold that may never be reached - Superintelligence (or ASI) is a very precisely defined term.

Superintelligence describes an AI capable of improving itself at an ever-accelerating rate. Almost by definition, such an AI would be far beyond any human's control, improving and expanding at a mind-boggling pace, leading inexorably to an AI takeover of the world.

Nearly all mainstream media clearly do not understand the term's meaning, confusing it with AGI, while many industry leaders - who know its meaning very well - increasingly feed that confusion, possibly to forestall public pushback.

This race to Superintelligence, as is evident even to non-experts once they grasp its definition, constitutes a direct commitment to the riskiest endeavor and greatest gamble humanity has ever conceived. It is hard to overstate how crazy and dangerous a predicament we are living in.

How can those CEOs be so irresponsible? Are they crazy or evil? Not necessarily. While in 2023 and early 2024 they were the earliest and loudest in warning of AI's immense safety risks and calling for regulation, the shocking inaction of states and the pace of AI advances have made them so skeptical that they now believe ASI can no longer be stopped. Musk stated as much, claiming that superintelligence cannot be prevented anymore and that his "recipe" is to make it "maximally truth-seeking."

Sure, it is possible that an ASI would, for some reason, act durably beneficially towards us (possibly owing in part to its initial design), even though it is likely to regard us as we regard ants. It is possible that it will turn out to be fantastic for humans, even if we would be ruled by a new, more powerful digital species. ASI could even find technical and governance solutions to other existential challenges, both man-made and natural.

Yet the risks of dystopia and human extinction by AI are so great, and so uncertain, that this gamble is crazy.

Given the immense geopolitical and market forces fueling this mad race, and the short timelines, many have concluded that there is nothing we can do anymore - that we can only hope for the best, or hope that the initial design of an ASI can durably maximize our chances of a good outcome.

Yet we may still have time for China and the US - necessarily together with a good number of the most powerful states - to agree on and enforce, in a very short time, an extraordinarily bold treaty that can reliably implement global control of all dangerous AI research, development and use, and share, at least somewhat equitably, among states and world citizens the benefits and power that advanced safe AI will generate.

But we cannot rely on treaty-making as usual: failure would be guaranteed, as seen in treaty negotiations for climate change and for nuclear weapons ever since 1946 - and time is extremely short. If China and the US negotiated an agreement behind closed doors, other states would know nothing of the process until it was finalized and made public, and would then refuse to comply with the extremely strict bans and oversight regime that will be required.

We must rely instead on the historically proven model of the intergovernmental constituent assembly (as used to create the US and Swiss federations), adapted for "geopolitical realism" by giving China, the US and other powerful states more decision-making power in the treaty-making process and a greater share of the power and wealth that advanced safe AI will generate.

Trump would make himself and America much greater than they are today, while averting an immense risk to the lives of all Americans, including his own and his kids'. Meanwhile, President Xi would vastly advance China's well-being, progress, safety and social harmony.

If they succeed, both leaders will go down in history as the greatest statesmen ever, not only for having prevented the greatest risk humanity has ever faced, but also for harnessing it to establish a durable global governance organization that guarantees expanding well-being, safety and abundance via ever more advanced, safe AI.

We have very little time to try to convince the closest advisors and public officials of President Trump and President Xi Jinping, other leading heads of state, and the leaders of AI labs to work decisively towards a bold and skillful global agreement that reliably turns AI into humanity's greatest invention.

Such is the vision behind the Coalition for a Baruch Plan for AI, which was launched last December. If you agree with it, please consider joining us in some form - or supporting us with a donation, as we have a hard $30,000 fundraising deadline coming up on February 10th, 2025.


Rufo Guerreschi

I am a lifetime activist, entrepreneur, and researcher in the areas of digital civil rights and leading-edge IT security and privacy, living between Zurich and Rome.

https://www.rufoguerreschi.com/