How do we avoid the failure of the Baruch Plan?
Why the Baruch Plan failed is the subject of much academic analysis. Only five days after Bernard Baruch presented it to the UN, the Soviet Union counter-proposed its Gromyko Plan, which called for full nuclear disarmament but retained the veto. Perhaps the presentation of the Baruch Plan was itself a sign of a failure to agree in the months prior: political posturing by the US, and then by the Soviet Union, following that failure. The Soviet Union was likely afraid that, without a veto, it would find itself in a minority against liberal capitalist democracies in the proposed governance body.
Perhaps it was simply too late. As WW2 was ending in the Allies' favour in 1944, the leading atomic scientist Niels Bohr, encouraged by Robert Oppenheimer, proposed in a letter to US President Roosevelt that the US and Russia unify their nuclear weapons programs, both to avoid a post-war arms race and to improve their chances of beating Hitler to the bomb. Roosevelt agreed and suggested that the UK-based Bohr convince Churchill as well. Had Bohr and Roosevelt succeeded then, international control of atomic energy would have been far more likely.
Whatever the reasons for the failure, it is a fact that both superpowers held exclusive international control of dangerous nuclear technologies as their official foreign policy for years - while the rest of the world eagerly awaited their agreement - yet the result was failure: an arms race, a cold war, a nuclear risk higher today than it has ever been, and a failure to establish a safe global governance model for catastrophically dangerous technologies.
The only certainty is that it was a treaty-making failure: the superpowers, the permanent members of the Security Council, and the rest of the world's states failed to agree on a sane, safe and democratic global governance of an extremely dangerous technology. If we are alive today, it is by luck: even as atomic weapons reached 3,000 times the power of the Hiroshima bomb only five years later, the security agencies of powerful nations coordinated well enough to save us, to date, from disasters caused by accidents, accidental confrontations, or proliferation.
For these reasons, we believe that realizing a “Baruch Plan for AI”, properly and in a timely manner, while avoiding the failures of the Baruch Plan, requires two priorities which should be advanced in parallel:
Ensure a much more global, inclusive, timely, participatory and transparent joint scientific and strategic analysis of advanced AI - including its risks, solutions, preparedness, and verification mechanisms - a sort of new version of the 1946 Acheson-Lilienthal Report, but this time globally democratic and for AI.
Ensure a treaty-making model that is much more effective, timely and democratic than the current ones, by adopting the intergovernmental constituent assembly model, inspired by great successes of the past.
1) A more Global, Timely, Participatory and Transparent Scientific and Strategic Analysis.
Unfortunately, the veto structure of the newly formed UN Security Council, and the inability of US nuclear scientists and state officials to coordinate deeply with their Russian counterparts ahead of time on the many complex scientific and geo-strategic matters, resulted in a formal US proposal made without proper coordination with the Russians, which in turn produced the Gromyko Plan counterproposal and stalled the negotiations.
Top AI scientists and diplomats should engage more deeply, faster, more transparently and more globally than their counterparts did in 1945-46 for nuclear technology, drafting a sort of “global Acheson-Lilienthal Report for AI” - including with China and Russia.
Such a document would be key to providing a globally trusted joint assessment of the risks and opportunities and of workable solutions to the highly complex technical, governance and treaty-making issues and mechanisms required. This should include deep national and global preparedness studies and plans to ensure suitable global coordination in the case of a sudden acceleration of capabilities or a major accident.
Great work has been done in this regard by the International Scientific Report on the Safety of Advanced AI, led by the UK and chaired by Yoshua Bengio, the Guidelines for Secure AI System Development, led by the NSA and GCHQ, and similar initiatives by China.
Recently, the Carnegie Endowment for International Peace and the Oxford Martin AI Governance Initiative, in their report The Future of International Scientific Assessments of AI’s Risks, have offered well-articulated suggestions on how to make this work more global.
Yet many of the most authoritative AI scientists - such as Yoshua Bengio, Paul Christiano, and Ian Hogarth - have assumed crucial roles in the work of key AI states. They are hardly in a position to push hard for a more global approach to AI governance if it conflicts with their current agendas.
The participation of national security agencies is paramount, given their unique expertise and authority in public safety and national security risks. We must be immensely thankful to them for filling in for their political leaders’ failure to agree on a sane global governance of atomic energy by averting major nuclear catastrophes to date - albeit at a great cost to civil liberties worldwide. Their role this time should be central and, as much as possible, transparent and public, to ensure democratic accountability.
2) An Intergovernmental Constituent Assembly Model of Treaty-Making
Current dominant treaty-making models and processes have been broken for a very long time: repeated failures, unending negotiations and, at best, very weak treaties that failed to deliver on their promise and were often later abandoned by states without accountability.
To avoid the failure of the Baruch Plan, avert a veto-based deadlock, and ensure timeliness, effectiveness and participation, the treaty-making process we envision is based on the intergovernmental constituent assembly model, pioneered in 1786 when one US state convened delegates from five more at the Annapolis Convention, a process that ultimately culminated in the drafting of the US federal Constitution in 1787 and its subsequent ratification.
We believe that by pursuing a more effective, timely and democratic diplomatic negotiation and treaty-making process—via the open intergovernmental constituent assembly model—without any state's veto, we could eventually bring a wide majority of states and superpowers to agree on such a treaty.
That model succeeded spectacularly: the process begun at the 1786 Annapolis Convention culminated in the approval of the US federal Constitution by a simple majority and its ratification by the required nine, and eventually all thirteen, US states.
An open coalition of a critical mass of states, supported by an open Coalition for a Baruch Plan for AI made up of NGOs, could advance such a treaty-making process to create an intergovernmental organization, the International AI Development Authority. An open, multinational consortium could also be designed to develop and share the most advanced safe AI, to further incentivize nations to join early. As in 1946, we believe the current superpowers will soon join such an initiative.
This plan is fraught with risks: it could be co-opted by some actors, resulting in strong but undemocratic global governance, or it could generate conflicts, so it must be carefully designed and managed. Yet now is the time to bring it forward and start building it, before it is too late, as the immense risks to human safety and of unaccountable concentration of power grow at a blistering pace.
While his ideas seem to have changed since, it is noteworthy that Sam Altman suggested in a March 2023 interview that the intergovernmental constituent assembly process that led to the US Constitutional Convention of 1787 should be the "platonic ideal" of the treaty-making process needed to build proper global governance for AI.
Read More
To read more about our rationale and plans, refer to
our late-draft Case for a Coalition for a Baruch Plan for AI (v.1.0).
Join Us
We invite you to join us: