Goals, Objectives, Activities and Roadmap

Goals

The Coalition for a Baruch Plan for AI seeks to bring together top NGOs, experts, former public officials, and philanthropists to do as much as we can to convince heads of state, security agencies, and key advisors of superpowers, powerful states, and ultimately all states of the urgent need to do the following:

  1. Build a "Baruch Plan for AI": a global federal intergovernmental organization ("IGO") for AI that is powerful, effective, democratic, and timely enough to prevail over the immense risks of AI - as the 1946 Baruch Plan envisioned for nuclear technology - using whatever treaty-making process will succeed.

  2. Agree upon and jump-start a suitable treaty-making process to build such a global IGO for AI, one that can be expected to be effective, timely, and inclusive enough to achieve its goals. It must begin much earlier and be broader, more inclusive, and higher-bandwidth than the process pursued for the Baruch Plan in 1946 - across military, scientific, and diplomatic domains, and using all diplomatic tracks - inspired by the intergovernmental constituent assembly model.

We are aware that the decisive and timely pursuit of such efforts is not only very difficult but also fraught with great risks of co-option by some actors - resulting in strongly undemocratic global governance or generating conflicts - so it must be carefully designed and managed.

Yet now is the time to bring it forward and start building it, before it is too late: the immense risks to human safety and of unaccountable concentration of power are increasing at a blistering pace.

By coming together skillfully like never before, we can shape a safe and equitable AI future, and affirm a global governance model for other dangerous technologies, our global digital sphere, and other global challenges, for generations to come.

Objectives

Hence, the Coalition's effort will center on two objectives of equal priority:

Firstly, do all we can to convince heads of state, security agencies, and/or leading AI firms of the AI superpower states, China and the US - especially Donald Trump and Xi Jinping and their closest advisors - of the following:

  1. The inexorable need to pursue and co-lead the creation of a global intergovernmental organization for AI - a Baruch Plan for AI - similar in scale, scope, and democratic governance to the one the US proposed to the UN for nuclear technology in 1946, and the need to pursue it with the highest urgency, care, and dedication, utilizing an effective treaty-making process.

  2. The inescapable need to substantively and formally involve at least a good number of other powerful states in such treaty-making, to ensure the ability to enforce AI non-proliferation bans, and the need to utilize a much more effective treaty-making model, perhaps based on the intergovernmental constituent assembly model.

  3. For more, refer below to the chapter Open Letter to the Presidents of China and the United States

Secondly, try to convince heads of state, security agencies, and/or leading AI firms of all other, non-superpower states to concurrently, decisively, and with the highest urgency:

  1. Co-lead and foster a Baruch Plan for AI by designing and jump-starting its treaty-making process, and by starting to design and build its AI safety, security, and compliance agencies, mechanisms, and technologies - if and while the AI superpowers do not;

  2. Speedily build a global shared AI capability via an open, public-private, Global Public Benefit AI Lab, costing at least several hundred billion dollars, to build leading-edge safe AI capabilities - a sort of open, global "Airbus of AI". 

    1. The Lab will aim to achieve supply chain autonomy or, better, a sustainable "mutual dependency" with the US and/or China. 

    2. The Lab will be open to all states, as well as to China and the US - but only to the two of them together.

    3. The Lab will ideally be jump-started by a critical mass of states: a coherent combination of initial states that together contribute rare or unique AI industrial assets, AI talent, strategic autonomy, AI expertise, GDP, sovereign-fund investment capability, and population. For example, the initial cohort could include several EU member states and the EU itself (seeking to build digital autonomy from the US), including at least Germany and the Netherlands, given their unique AI supply chain assets, plus France, Italy, and/or Spain. At least one state should have advanced AI chip manufacturing capabilities, like South Korea. Regional intergovernmental organizations like the African Union could enable the onboarding of a large number of populous states that could become equal shareholders and clients of the Lab;

  3. A critical mass of non-superpower states can lead the way toward a Baruch Plan for AI, if and while the superpowers do not, serving as a bridge among them and a stimulus - while standing to gain massive economic benefits and political leverage from participating in such an open Global Public Benefit AI Lab.

  4. For more, refer below to the chapter Open Letter to the Heads of State of Non-Superpower States.

Activities

To advance our goals, we'll seek to "onboard" a good number of globally diverse leading experts and NGOs - having them join our Coalition in some form or participate in our activities - and energize them to contribute decisively. These will be experts and/or widely renowned figures in AI safety, global governance, or IT security.

They will include top former diplomats, heads of state, and security officials, and, progressively over the coming months, current officials from relevant agencies.

We'll engage and onboard them via one or more of the following:

  1. Sign up for our Open Call for a Coalition for a Baruch Plan for AI;

  2. Join as an Advisor, Volunteer, or NGO member of the Coalition;

  3. Leave a Testimonial;

  4. Contribute to reports and analyses, including future versions of this Case for a Coalition for a Baruch Plan for AI v.1 and its appendixes, expected every 4-6 months;

  5. Join us as a speaker at our virtual and hybrid events, in Geneva and online, including the Harnessing AI Risk Summit in April 2025, small virtual pre-summits, and future editions of the Summit;

  6. Donate to the Coalition;

  7. Contribute to drafting and evolving ad-hoc cases for specific states, and for specific offices or individuals in those states (also in the form of open letters and detailed custom documents);

  8. Directly reach out to entities that can reliably bring our Cases to the attention of the most powerful decision-makers in states, as identified above;

  9. Help to indirectly bring our proposal to the attention of decision-makers via:

    • Coverage in mainstream media channels;

    • Effective short videos;

    • Social media and podcast presence and visibility.

Once we are funded, and in proportion to the funding we receive, the onboarding, coordination, and activation of onboarded entities will be supercharged by paid full-time or part-time staff.

Our activities as an NGO will primarily be aimed at directly promoting dialogue among governmental entities (i.e., track 1.5 diplomacy) and indirectly fostering treaty-making dialogue among governmental entities (i.e., track 1 diplomacy). We'll focus much less on fostering dialogue among civilian experts from different states (track 2 diplomacy), which is the focus of several other NGOs and which we believe to be relatively less effective.

The reason is that global AI safety governance has, and will continue to have, such huge importance for national security and competitiveness that most of the knowledge needed to advance an enforceable treaty resides with current and former officials of the relevant security agencies; it is largely classified and simply not available to civilian AI experts, with the partial exception of top former technical officials of leading AI labs.

Roadmap

First Stage

By December 2024:

  • Launched the Coalition on December 10th, with the concurrent publication of an Open Call for NGOs, experts, and citizens; a Commitment by States for a Baruch Plan for AI; an extensive 70-plus-page paper, Case for a Coalition for a Baruch Plan for AI (v.1), including detailed cases for states; and a short video;

  • Onboarded more suitable persons and organizations to raise our profile, authority, and capability to reach out to states in the next months;

  • Reached out and engaged extensively with more potential donors, on the basis of our revised strategy and most recent Coalition documentation and media.

By January-February 2025:

  • Presented detailed, ad-hoc cases for a Baruch Plan for AI to the highest and most relevant officials and departments of the governments of China and the US, and to the heads of the relevant security agencies, and engaged with top officials close to the decision-makers;

  • Engaged more extensively with leading representatives of China, the US, and other states, via calls or meetings, or their participation in our events;

  • Onboarded more suitable persons and organizations to raise our profile, authority, and capability to reach out to states in the next months;

  • Reached out and engaged extensively with more potential donors, on the basis of our revised strategy and most recent Coalition documentation and media;

  • Made the coalition more widely known and supported, possibly with mainstream media articles, interviews, podcasts, and videos of events.

By April 2025:

  • Engaged more extensively with leading representatives of China, the US, and other states, via calls or meetings, or their participation in our events;

  • The 1st Harnessing AI Risk Summit is held over 2 days in a neutral or participating state (Geneva or New York). At least 3-7 globally diverse states or regional IGOs - including at least one key AI state or IGO, at least one medium-sized state, and at least one with rare or unique AI supply chain assets - have participated in the Summit in some form:

    • Ideally, during or within 2 weeks after the Summit, they have stated their intention to join the next Summit and pre-summits and to actively participate in ongoing State Working Groups, including on evolving drafts of the Mandate and Rules of the Open Transnational Constituent Assembly for AI, and have possibly signed increasingly committal LoIs or MoUs;

  • Version 2 of the Case for a Coalition for a Baruch Plan for AI has been published;

  • Work is well underway by our Working Groups and state representatives on new documents, and/or on extensions and appendixes of future versions of the Case, aimed at analyzing in depth the socio-technical and geopolitical challenges of successful global governance of the development and use of the most powerful AIs. This work is meant to play a role similar to that of the Acheson-Lilienthal Report, largely written by Oppenheimer, which provided the scientific basis for the proposal of the Baruch Plan in 1946. It is pursued via new documents and via collaborations with leading NGOs, states, and national security agencies on international initiatives - such as the International Scientific Report on the Safety of Advanced AI and the Guidelines for Secure AI System Development led by the NSA and GCHQ;

  • States and possibly superpowers are increasingly engaged, developing a complete understanding of the AI situation, the risks and opportunities, a sense of ownership of the initiative, and getting down to the technical and treaty-making details.

By September 2025:

  • The 2nd Harnessing AI Risk Summit is held over 3 days in a neutral or participating state (or Geneva or New York). At least 9-12 globally diverse states or regional IGOs - including at least two key AI states or IGOs, at least two medium-sized states, and at least two with rare or unique AI supply chain assets - have participated in the Summit in some form;

    • Ideally, during or within 2 weeks after the Summit, they have stated their intention to join the next Summit and pre-summits and to actively participate in ongoing State Working Groups, including on evolving drafts of the Mandate and Rules of the Open Transnational Constituent Assembly for AI, and have possibly signed increasingly committal LoIs or MoUs;

    • A Caucus for the Coalition has been established at the UN General Assembly; 

  • Version 3 of the Case for a Coalition for a Baruch Plan for AI has been published a few weeks in advance of the 2nd Summit.

Second Stage

The Second Stage will start when a highly globally diverse group of states - making up at least 30% of world GDP and 30% of the world population, and including at least 2 states that are veto-holders on the UN Security Council or possess nuclear weapons - have agreed on Mandate and Rules v.1 for the Assembly. At that stage, they will convene the Assembly, setting a date, place, and budget 6, 9, or 12 months ahead, and inviting all states to participate.

Such Mandate and Rules should, we believe:

  • Be approved via simple majority based on weighted voting, after extensively striving for consensus. Considering the vast disparity in power between states, particularly in AI and more broadly, and recognizing that three billion people are illiterate or lack internet access, we foresee the voting power in such an assembly being initially weighted by population and GDP;

  • Have been widely perceived as fair, resilient, neutral, expert, and participatory, including via transnational citizens' assemblies, and a neutral and balanced mix of experts;  

  • Ensure statutory safeguards are in place to maximize the chances that the resulting IGO will not degenerate, unduly centralize power, or be captured by one or a few nations, and that it will improve over time - such as via checks and balances, a solid federal structure, regular mandatory re-constituent assemblies, and other measures;

  • Ensure that the resulting charter is sent for ratification to the signing states and becomes valid once 9/13ths of them approve it - the same ratio used after the US Constitutional Convention of 1787;

  • Have established open dialogue and convergence paths with competing or complementary international governance initiatives, especially those led by digital superpowers, positioning the Assembly as a medium-term complement to more urgent, albeit less multilateral, initiatives aimed at tackling pressing global safety risks;

  • By this time, or sooner, it is hoped that the US and China, other powerful states like Russia and Israel, and the permanent members of the UN Security Council will all have joined. If China, the US, and most of those other states have joined by then, participating states can exert enough positive and negative incentives on non-joining states to keep their AI capabilities under reliable control (provided that risks from the AI supply chain, and from the public availability or hackability of dangerous AI technologies, have been mitigated early on via great-power coordination);

As decided, 6, 9, or 12 months later, the Open Transnational Constituent Assembly for AI and Digital Communications is held in Geneva. Once it is clear the new organization will be created, we believe at least 60 states will participate - highly globally diverse, making up at least 50% of world GDP and 50% of the world population, and including at least 3 states that are veto-holders on the UN Security Council or possess nuclear weapons:

  • States not participating in the Assembly can ratify its charter as well;

  • The charter's review provisions enable non-participating states to join charter review conferences, initially held every 2 years, with equal rights;

Early ratifying states commit sufficiently large funds to begin setting up the three agencies, including the Global Public Benefit AI Lab.