How do we avoid the failure of the Baruch Plan?

That is a very hard and crucial question whose answer will be a core focus of the early phases of our Coalition.

At this stage, we believe that realizing a “Baruch Plan for AI” properly and in a timely manner, while avoiding the original plan’s failures, requires two priorities advanced in parallel:

  1. Advance a much more global, inclusive, timely, participatory, and transparent joint scientific and strategic analysis of advanced AI—covering its risks, candidate solutions, preparedness, and verification mechanisms—a sort of new version of the 1946 Acheson-Lilienthal Report, but this time globally democratic and for AI.

  2. Ensure a treaty-making model that is much more effective, timely, and democratic than the current ones by adopting the intergovernmental constituent assembly model, inspired by great successes of the past.

1) A More Global, Timely, Participatory and Transparent Scientific and Strategic Analysis

Unfortunately, the veto structure of the newly formed UN Security Council, together with the inability of US nuclear scientists and state officials to coordinate deeply with their Russian counterparts ahead of time on the many complex scientific and geostrategic matters, meant that the US tabled a formal proposal without proper coordination with the Russians. This prompted the counterproposal of the Gromyko Plan and the stalling of the negotiations.

Top AI scientists and diplomats should engage in a deeper, faster, more transparent, and more global effort than their counterparts did in 1945-46 for nuclear energy, drafting a sort of “global Acheson-Lilienthal Report for AI,” including with China and Russia.

Such a document would be key to providing a globally trusted joint assessment of the risks and opportunities and of workable solutions to the highly complex technical, governance and treaty-making issues and mechanisms required. This should include deep national and global preparedness studies and plans to ensure suitable global coordination in the case of a sudden acceleration of capabilities or a major accident.

Great work has been done in this regard by the International Scientific Report on the Safety of Advanced AI, led by the UK and chaired by Yoshua Bengio; the Guidelines for Secure AI System Development, led by the UK’s NCSC and the US’s CISA; and similar initiatives by China.

Recently, the Carnegie Endowment for International Peace and the Oxford Martin AI Governance Initiative, in their report The Future of International Scientific Assessments of AI’s Risks, have offered well-articulated suggestions on how to make this work more global.

Yet many of the most authoritative AI scientists, such as Yoshua Bengio, Paul Christiano, and Ian Hogarth, have assumed crucial roles in the work of key AI states. They are hardly in a position to push hard for a more global approach to AI governance if that conflicts with their current mandates.

The participation of national security agencies is paramount, given their unique expertise and authority regarding public safety and national security risks. We must be immensely thankful to them for compensating for their political leaders’ failure to agree on a sane global governance of atomic energy by averting major nuclear catastrophes to date, albeit at a great cost to civil liberties worldwide. Their role this time should be central and, as much as possible, transparent and public, to ensure democratic accountability.

2) The Intergovernmental Constituent Assembly Model of Treaty-Making

The currently dominant treaty-making models and processes have been broken for a very long time, producing repeated failures, unending negotiations and, at best, very weak treaties that failed to deliver on their promises and were often later abandoned by states without accountability.

To avoid the failure of the Baruch Plan, avert a veto-based deadlock, and ensure timeliness, effectiveness, and participation, the treaty-making process we envision is based on the intergovernmental constituent assembly model, such as the one pioneered in 1786 when one US state convened five more in the Annapolis Convention, a process that culminated in the drafting of the US federal constitution in 1787 and its ratification in 1788.

We believe that by pursuing a more effective, timely and democratic diplomatic negotiation and treaty-making process—via the open intergovernmental constituent assembly model—without any state's veto, we could eventually bring a wide majority of states and superpowers to agree on such a treaty. 

Crucially, that process required neither unanimity to begin nor unanimity to conclude: the constitution was approved by the states’ delegates and entered into force once ratified by the required nine of the 13 US states, with the remainder joining later.

An open coalition of a critical mass of states, supported by an open Coalition for a Baruch Plan for AI made up of NGOs, could advance such a treaty-making process to create an intergovernmental organization, an International AI Development Authority. An open, multinational consortium could also be designed to develop and share the most advanced, safe AI, to further incentivize nations to join early. As in 1946, we believe the current superpowers would soon join such an initiative.

This plan is fraught with risks of co-option by some actors, which could result in a strong but undemocratic global governance or generate conflict, so it must be carefully designed and managed. Yet now is the time to bring it forward and start building it, before it is too late, as the immense risks to human safety and the unaccountable concentration of power increase at a blistering pace.

While his ideas seem to have changed since, it is noteworthy that Sam Altman suggested in a March 2023 interview that the intergovernmental constituent assembly process that led to the US Constitutional Convention of 1787 should be the “platonic ideal” of the treaty-making process we need to build proper global governance for AI.