Only by Coming Together Like Never Before Can Humanity Ensure AI Will Turn Out to Be Its Greatest Invention Rather Than Its Worst

Coalition for a Baruch Plan for AI

Executive Summary


We stand on the brink of an irreversible slide towards AIs that are capable enough to subjugate or annihilate humanity, or enable a state or firm to do so.

Though the hour is late, we can still prevail and realize AI's astounding opportunities, but only if we come together skillfully like never before.


While an immense, AI-driven concentration of power and wealth unfolds before our eyes, leading US and Chinese AI scientists recently warned that catastrophic AI risks to human safety could materialize within a few years or even "at any time".

While nearly all states are powerless on their own in the face of these risks, the United States and its leading AI firms are increasingly aligned in a winner-take-all, ideological AI arms race against China, pulling along allies and client states. China has been calling for equitable global AI coordination but, having taken no significant action to advance an alternative to the arms race, is in effect pursuing the same race and hegemonic dynamic.

Several firms and states may be just one or two algorithmic innovations away from AI systems capable of scalable recursive self-improvement - a so-called "Seed AI" that would lead almost inexorably to Superintelligence and loss of human control - if such an innovation proves as disruptive as the 2017 Transformer breakthrough that produced our current AI paradigm. In fact, at least 100 times more research resources are dedicated today to finding algorithmic innovations as impactful as that one, and many lesser ones have already been produced.

On the other hand, if we stave off these risks, the opportunities for humanity are astounding. Although the window of opportunity to steer the current trajectory is rapidly closing, we may still have a chance to transform AI into Humanity's greatest invention rather than its worst. 

Given the United States' substantial lead over China in AI and the US political and institutional landscape after the last US presidential election, Donald Trump and his ally Elon Musk, along with President Xi, have accrued enormous power to shape the future of AI – and of humanity.

The uncomfortable truth is that nearly all other states are either (1) watching idly from the sidelines, (2) pretending to build their own state-of-the-art sovereign AI with relatively minuscule investments, or, at best, (3) positioning and posturing as preferential digital/AI vassal client states of one of the two superpowers, receiving small temporary political hand-outs under an empty pretense of an "alliance among equals". The international network of AI Safety Institutes, centered wholly on "interoperability" and knowledge exchange rather than on creating a proportionate Global AI Safety Institute (not even a Western one), is useful but is in essence an instrument of co-option and "kicking the can" tactics.

The Coalition for a Baruch Plan for AI believes that this shocking AI predicament requires that we build a new global democratic organization to bring all potentially dangerous AI research, assets, arsenals, facilities, and supply chains under exclusive international control, while increasing the freedom of states, the private sector, and individuals, as well as their roles in innovation and oversight. This is the scope of what President Truman - stirred by a looming awareness of the immense risks of nuclear weapons proliferation - proposed to the United Nations for nuclear technologies via his Baruch Plan on June 14th, 1946.

In a remarkable coincidence that may turn out to be much more than that, Bernard Baruch presented the Plan on President Truman's behalf just about one hour after the birth of Donald Trump.

Some of the most influential AI experts and leaders have called for, or referred to, the Baruch Plan as a model for the global governance of AI - including Yoshua Bengio (the most-cited AI scientist and a Turing Award winner), Ian Hogarth (UK AI Safety Institute), Allan Dafoe (Global Head of AGI Strategy at Google DeepMind), Jack Clark (Global Head of Policy at Anthropic), Jaan Tallinn (member of the Board of Directors of the Future of Life Institute), and Nick Bostrom (the leading AI philosopher and futurist).

We believe nothing less is adequate to the immensity of the challenge. Yet it will never happen unless we convince China, the US, and a very large, open coalition of diverse states to design and jump-start a treaty-making process that is radically more effective and larger in scale than the predominant model, so as to avoid the utter failures of the processes for climate change and nuclear weapons, including the original Baruch Plan itself.

We call on AI superpowers to engage in a highly effective, time-bound, whole-of-government, and "all tracks" treaty-making process to build such a Baruch Plan for AI in the shortest possible time, while taking every precaution - ideally preceded by bilateral emergency agreements to mitigate the most immediate risks as a temporary stop-gap solution. 

We call on non-superpower states to come together in a critical mass to (1) co-design and jump-start such a treaty-making process, acting as a catalyst and a bridge between the superpowers, while concurrently (2) jump-starting with urgency a global "Airbus for AI": a public-private global AI lab and ecosystem, costing at least several hundred billion dollars, to develop, regulate, and exploit the most capable safe and controllable AI technologies without advancing the frontier of unsafe AI, open to all states and AI labs to join on equal terms.

Convenor and Founders

The Coalition was convened by the Trustless Computing Association in June 2024, and announced on September 10th, 2024, by its six founding NGO partners.

8-minute Coalition Video Explainer

Is it still possible or even plausible?

Awareness of AI's immense safety risks has been mounting, and now is the time to turn it into action. Hundreds of AI scientists, including two of the three "godfathers of AI", stated in May 2023 that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Twenty-eight nation states, representing 80% of the world population, recognized AI safety risks, including "loss of control", in the Bletchley Declaration. Over 55% of citizens surveyed in 12 countries were "fairly" or "very" worried about "loss of control over AI". At an invitation-only Yale CEO Summit in June 2023, 42% of CEOs surveyed said they believed AI has the potential to "destroy humanity within the next five to 10 years". According to surveys of thousands of AI scientists, continuing with unregulated AI carries a 1 in 6 chance of causing human extinction – the same odds of you, me, and our kids being killed in a round of Russian roulette.

According to a recent survey, 77% of US voters support a comprehensive, strong international treaty for AI. Yet, so far, the United States and its leading AI firms are increasingly aligned in a winner-take-all ideological AI arms race against China, pulling along allies and client states. 

Meanwhile, China announced its Global AI Governance Initiative, which calls for a "United Nations framework to establish an international institution to govern AI" that will "ensure equal rights, equal opportunities, and equal rules for all countries in AI development and governance". However, no substantial action has followed in that direction, while China has mostly pursued the same race and hegemonic dynamics.

While many are concerned about Donald Trump's unpredictability, instinctive attitude, strong economic nationalism, and strongman style, his recognition of the great risks of AI and his non-ideological, transactional, and realist approach to foreign policy may turn out to be a great improvement for the prospects of sane global governance of AI over those of his predecessor Biden, who mostly framed US global AI governance policy around an ideological race to the brink by a Western alliance against China.

Most leading US AI labs, which last year loudly warned of safety risks and called for strong international governance of AI, have grown highly skeptical of states' actions and are ever more pressured by investors and the US government to align with its AI arms race. Hence, most labs, including Elon Musk's xAI, are more or less overtly racing full speed ahead to be the ones who define the values of a god-like AI (Superintelligence) that they see as inevitable and unstoppable - sacrificing most or all precautions along the way for fear that others will beat them in the race.

Elon Musk has been among the earliest and loudest voices warning of immense AI safety risks since 2015. He recently stated that "we'll have AI smarter than any one human probably by the end of next year" (AI that could go on to scalably improve itself), and in October he said that Superintelligence stands a 10-20% chance of killing everyone and that it could not be prevented. He may, however, come to think that we can still prevent it, while securing most of the benefits of AI, by leveraging his uniquely empowered role under the Trump presidency.

While an urgent China-US bilateral treaty that could reliably turn this mad AI race into a safe, constructive one is highly desirable, it would not suffice to ensure worldwide compliance with the stringent AI bans that will be required. Many other powerful states - including the other three permanent members of the UN Security Council - would almost surely oppose it, because it would likely result in an immense concentration of power in those two states, unless it were very clearly and reliably framed as a temporary, stop-gap solution while a much more inclusive and effective treaty is built on a fast track.

From Consensus to a Suitable Global Treaty on AI

Given the immense challenge of reliably enforcing a global AI anti-proliferation ban in the years to come, even if the US and China were fully committed, the treaty-making process would need to include at least a sizable number of states.

Consensus among state leaders on the scale and urgency of the risks, and on the need for strong global coordination, is very widespread and rapidly growing. Yet, as occurred in 1945-46 for nuclear technologies, turning even a vast consensus into a suitable and timely treaty has extremely high chances of failure.

That is because the predominant treaty-making model has proven to be utterly ineffective, as demonstrated by decades of failures in treaty negotiations on climate change, nuclear technologies, and the Baruch Plan itself.

Therefore, while we should do all we can to foster an urgent, suitable treaty for AI in any way possible, including via China-US bilateral or "plurilateral" efforts, we should pursue in parallel a much more effective, resilient, high-bandwidth, time-bound, and inclusive treaty-making process. A treaty-making process to build such a powerful global organization must be enacted with the greatest urgency and utmost care, while preserving freedom of initiative at the lowest possible level: in states, citizens, and communities.

We need something extreme, given the circumstances, but also something historically proven, as we only have one shot. We need a treaty-making process with a clear mandate, clear expectations, and supermajority rules, to prevent unanimity-driven negotiation from ending in a deadlock of intersecting vetoes. We also need set start and end dates to ensure the process does not drag on for years, which Humanity cannot afford.
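As a back-of-the-envelope illustration of why unanimity rules tend to deadlock while supermajority rules do not, consider this minimal Python sketch. The assembly size, the per-state veto probability, and the assumption that states decide independently are all hypothetical, chosen purely for illustration:

from math import comb, ceil

def pass_probability(n: int, p_veto: float, threshold: float) -> float:
    """P(at least ceil(threshold * n) of n states approve), approvals i.i.d."""
    need = ceil(threshold * n)
    q = 1.0 - p_veto  # probability that a given state approves
    return sum(comb(n, k) * q**k * p_veto**(n - k) for k in range(need, n + 1))

n_states = 50    # hypothetical assembly size (assumption)
p_veto = 0.05    # each state balks with 5% probability (assumption)
print(f"unanimity:           {pass_probability(n_states, p_veto, 1.0):.3f}")  # ~0.077
print(f"two-thirds majority: {pass_probability(n_states, p_veto, 2/3):.3f}")  # ~1.000

Even if each state objects with only a 5% probability, a 50-state unanimity rule passes a treaty less than 8% of the time, while a two-thirds supermajority passes it virtually always.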

Fortunately, there is a proven treaty-making model that meets those requirements. The intergovernmental constituent assembly model was pioneered with astounding success in the creation of the US federal constitution in 1787, and has since underpinned many other successful initiatives. Security agencies and top scientists should have key roles in this process, as the treaty's details and enforcement hinge hugely on their unique expertise.

Applied globally to AI, this model would foster a more inclusive and faster process than traditional treaty-making - one suited to both urgent short-term treaties and long-term ones.

As a critical mass of states joins, an assembly date would be set. All states could join on an equal basis, even a few weeks prior, or else join via a treaty "charter review" held as soon as one year later. China and the US are, of course, most welcome to join together, yet a critical mass of diverse states may lead the way first. A vote weighting based on population and GDP would ensure that the superpowers and the largest and richest nations – in other words, the nations most capable of developing advanced AI – would have a moderately larger share of power and economic upside, as sketched below. Sure, this is not perfect democracy, but "the perfect is the enemy of the much better": even though the first US constitution gave a vote to only 1 in 8 adults, it carried within itself the means to eventually extend that vote as education and other factors progressed.
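To make such a weighting concrete, here is a minimal illustrative sketch in Python. The 50/50 blend of population and GDP shares, the dampening exponent that keeps the advantage of the largest states "moderate", and the rough example figures are all hypothetical assumptions, not a formula the Coalition has specified:

def vote_weights(states, alpha=0.5, damp=0.5):
    """states maps name -> (population, gdp); returns normalized vote weights.
    alpha blends population vs. GDP shares; damp < 1 moderates the advantage
    of the largest states. Both values are illustrative assumptions."""
    total_pop = sum(p for p, _ in states.values())
    total_gdp = sum(g for _, g in states.values())
    raw = {name: (alpha * pop / total_pop + (1 - alpha) * gdp / total_gdp) ** damp
           for name, (pop, gdp) in states.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

# Rough 2024 figures: population in millions, GDP in trillions of US dollars.
weights = vote_weights({
    "USA":   (335, 27.0),
    "China": (1410, 17.8),
    "India": (1430, 3.7),
    "Kenya": (55, 0.11),
})
for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name:6s} {w:.3f}")

With these illustrative numbers, the largest and richest states receive a larger but dampened share of the vote, while a small state such as Kenya still retains a meaningful one, reflecting the "moderately larger share" described above.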

An Urgent Call to Action

While China and the US, with their leading firms, are increasingly locked into an AI "race to the bottom" by the seemingly unavoidable logic of arms-race dynamics, they can still escape this race and compete in a safe, responsible, and equitable "race to the top" by co-leading or joining an open Coalition for an open global treaty-making and engineering initiative for AI - of radically unprecedented scope, urgency, and effectiveness - with the full-time involvement of many thousands of diplomats, national security officials, citizens' assemblies, leading firms' representatives, and independent experts, for months on end.

We call on superpowers' heads of state and their key advisors to make the co-design, initiation, and funding of such a treaty-making process their highest priority while also possibly advancing a stop-gap, dual-use, temporary treaty to cap the most urgent risks. 

We call on all other states to take their destiny into their own hands by gathering in a critical mass with the utmost urgency and commitment to co-design and launch such a treaty-making process, acting as a stimulus, a catalyst, and a bridge between the superpowers. We concurrently urge them to build, without hesitation, a Global Public Benefit AI Lab and ecosystem to develop, regulate, and exploit the most capable safe AI technologies, providing superpowers and firms a safe and profitable exit ramp from the ongoing reckless AI arms race. Funded via sovereign funds at a level beyond any other lab, it will be open to all states, the superpowers, and private AI labs to join on equal terms.

We call on NGOs, experts, former diplomats and heads of state, IGOs, citizens, and philanthropists to join our open Coalition for a Baruch Plan for AI and to try to steer the public discourse and influence the top decision-makers and their key advisors as much as possible. 

Given that the superpowers are deadlocked and other states are hesitant to take the lead, each locked into complex geopolitical constraints, enlightened philanthropists and NGOs can and should play a unique, vital, and historic role.

Such an open coalition should focus primarily on convincing the key advisors of the superpowers' heads of state to pursue the above. In addition, it will (1) directly promote constructive, project-based dialogue among governmental and non-governmental entities, so-called "track 1.5 diplomacy", and (2) indirectly foster treaty-making dialogue among states, so-called "track 1 diplomacy".

The focus will be less on dialogues among civilian experts and NGOs, so-called "track 2 diplomacy", because: (1) many NGOs are already doing a great job there; (2) the knowledge crucial for advancing an enforceable treaty lies largely with current and former security agency officials - and with a few top technical staff of leading AI labs - and is largely classified; and (3) it will be increasingly unknowable to what extent participants in such dialogues are independent, given the stakes.

From Human to Human

As citizens, NGOs, and state leaders, we must do everything we can to help build such an organization. We must do it for ourselves, our children, and future generations.

As daunting and improbable as this task may be, if we manage to face it head-on, with clarity, fearlessness, and resolve despite the heart-stopping enormity of these challenges, we have a chance to realize a radical and durable improvement in human well-being for generations to come.

If we succeed, ever more advanced safe AI will bring extraordinary and unprecedented benefits to humanity. The precedent of a successful, sweeping democratic and intergovernmental AI treaty will also establish an extensible model to combat other civilizational risks, such as nuclear war and climate change, without the risk of authoritarian lock-in.

The challenge is enormous, and success is highly uncertain. It may be tempting to succumb to powerlessness, to stick our heads in the sand, or to watch doom unfold from the sidelines.

But how can we find true peace or look our children in the eyes if we do not at least try to do something to save their future? We have the unique privilege of having agency in the most consequential years of human history. 

After all, is anything more exhilarating and fulfilling than striving with good spirits and firm resolve to steer the course of history toward a positive future?

Join us at the Coalition for a Baruch Plan for AI to strive with joy, enthusiasm, and gratitude in Humanity's greatest challenge: turning AI into humanity's greatest invention!

Read More and Join Us

If you agree with the above, please sign our Open Call for a Coalition for a Baruch Plan for AI and read our 90-page Case for a Coalition for a Baruch Plan for AI (v.1), both published on December 18th, 2024.

We invite you to join us as: