Goals, Objectives, Planned Activities and Roadmap

(Updated on January 16th. This page is under review until February 20th, following the funding awarded to us on February 5th by the Survival and Flourishing Fund.)

Goals

The Coalition seeks to bring together top NGOs, experts, former public officials, and philanthropists to do all we can to convince the heads of state, security agencies, and key advisors of the superpowers, of powerful states, and of all other states of the urgent need to do the following:

  1. Build a "Baruch Plan for AI": a global federal intergovernmental organization ("IGO") for AI that is powerful, effective, democratic, and timely enough to prevail over the immense risks of AI - inspired in scope by what the 1946 Baruch Plan envisioned for nuclear technology - using whatever treaty-making process will succeed.

  2. Agree upon and jump-start a suitable treaty-making process to build such a global IGO for AI, one that can be expected to be effective, timely, and inclusive enough to achieve its goals. That process must start much earlier, and be broader, more inclusive, and higher-bandwidth than the one pursued for the Baruch Plan back in 1946 - across military, scientific, and diplomatic domains, and using all diplomatic tracks - inspired by the intergovernmental constituent assembly model.

We are aware that the decisive and timely pursuit of such efforts is not only very difficult but also fraught with great risks: co-option by some actors could result in strongly undemocratic global governance or generate conflicts. It must therefore be carefully designed and managed.

Yet now is the time to bring it forward and start building it, on a fast track of one to two years at most - it may already be too late, as the immense risks to human safety and the unaccountable concentration of power increase at a blistering pace.

By coming together skillfully like never before, we can shape a safe and equitable AI future and affirm a global governance model for other dangerous technologies, our global digital sphere, and other global challenges for generations to come.

Objectives

Hence, the Coalition's efforts will center on two objectives of equal priority:

Firstly, do all we can to convince the heads of state, security agencies, and leading AI firms of the AI superpowers, China and the US - especially Donald Trump and Xi Jinping and their closest advisors - of the following:

  1. The inexorable need to pursue and co-lead the creation of a global intergovernmental organization for AI - a Baruch Plan for AI - similar in scale, scope, and democratic governance to the one the US proposed to the UN for nuclear technology in 1946, and the need to pursue it with the highest urgency, care, and dedication, utilizing an effective treaty-making process.

  2. The inescapable need to substantively and formally involve at least a good number of other powerful states in such treaty-making, both to ensure the ability to enforce AI non-proliferation bans and to utilize a much more effective treaty-making model, perhaps based on the intergovernmental constituent assembly model.

  3. For more, refer to our Open Letter to the Presidents of China and the United States.

Secondly, try to convince the heads of state, security agencies, and/or leading AI firms of all other, non-superpower states to concurrently, decisively, and with the highest urgency:

  1. Co-lead and foster a Baruch Plan for AI by designing and jump-starting its treaty-making process, and by starting to design and build its AI safety, security, and compliance agencies, mechanisms, and technologies - if and while the AI superpowers do not;

  2. Speedily build a global shared AI capability via an open, public-private Global Public Benefit AI Lab, costing at least several hundred billion dollars, to build leading-edge safe AI capabilities - a sort of open, global "Airbus of AI".

    1. The Lab will aim to achieve supply chain autonomy or, better, a sustainable "mutual dependency" with the US and/or China. 

    2. The Lab will be open to all states, including China and the US - but only if they join together.

    3. The Lab will ideally be jump-started by a critical mass of states: a coherent combination of initial states that together contribute rare or unique AI industrial assets, AI talent, strategic autonomy, AI expertise, GDP, sovereign-fund investment capability, and population.

      1. For example, the initial cohort could include the EU and several EU member states (seeking to build digital autonomy from the US) - such as Germany and/or the Netherlands, given their unique AI supply chain assets, plus France, Italy, and/or Spain.

      2. At least one state, such as South Korea, should have advanced AI chip manufacturing capabilities. Regional Intergovernmental Organizations like the African Union could enable the onboarding of a large number of states with large populations that can become equal shareholders and clients of the Lab;  

  3. A critical mass of non-superpower states can lead the way toward a Baruch Plan for AI if and while the superpowers do not - serving as a bridge among them and a stimulus to them, while standing to gain massive economic benefits and political leverage from participating in such an open Global Public Benefit AI Lab.

  4. For more information, refer to our Open Letter to the Heads of State of Non-Superpower States.

Planned Activities

To advance our aims, we'll seek to "onboard" a good number of globally diverse leading experts and NGOs - having them join our Coalition in some form or participate in our activities - and energize them to contribute decisively. These will be experts in, and/or widely renowned for, AI safety, global governance, or IT security.

They will include top former diplomats, heads of state, and security officials; over the next few months, we will progressively add current officials from relevant agencies to the list.

We'll engage and onboard them via one or more of the following:

  1. Sign up for our Open Call for a Coalition for a Baruch Plan for AI;

  2. Join as an Advisor, Volunteer, or NGO member of the Coalition;

  3. Leave a Testimonial;

  4. Contribute to reports and analyses, including future versions of this Case for a Coalition for a Baruch Plan for AI (v.1) PDF and its appendixes, expected every 4-6 months;

  5. Join us as a speaker at our virtual and hybrid events in Geneva and online, including the Harnessing AI Risk Summit in Spring 2025, small virtual pre-summits, and future editions of the Summit;

  6. Donate to the Coalition;

  7. Contribute to drafting and evolving ad-hoc cases for specific states and for specific offices or individuals in those states (including in the form of open letters and detailed custom documents);

  8. Directly reach out to entities that can reliably bring our Cases to the attention of the most powerful decision-makers in states, as identified above;

  9. Help to indirectly bring our proposal to the attention of decision-makers via:

    • Coverage in mainstream media channels;

    • Effective short videos;

    • Social media and podcast presence and visibility.

Once we are funded, and in proportion to the funding we receive, the coordination of onboarding and the activation of onboarded entities will be supercharged by paid full-time or part-time staff.

Our activities as an NGO will be aimed primarily at directly promoting dialogue among governmental entities (i.e., track 1.5 diplomacy) and at indirectly fostering treaty-making dialogue among governmental entities (i.e., track 1 diplomacy). We'll focus much less on fostering dialogue among civilian experts of different states (track 2 diplomacy), which is the focus of several other NGOs and which we believe to be relatively less effective.

The reason is that global AI safety governance has, and will continue to have, such huge importance for national security and competitiveness that most of the knowledge necessary to advance an enforceable treaty is held by current and former officials of the relevant security agencies, is largely classified, and is simply not available to civilian AI experts - with the partial exception of top former technical officials of leading AI labs.

Roadmap

(The Roadmap below was extensively updated on January 16th.)

First Stage

Until February 10th, 2025:

  • Focus 90%+ on fundraising. To attract funding, we'll leverage the exceptional team, advisors, and partners we have accrued, the Open Call for a Coalition for a Baruch Plan for AI, and the 90-page Case for a Coalition for a Baruch Plan for AI (v.1) PDF that they produced.

    • Secure at least $3,000/month in recurring donations or $30,000 in cumulative one-time donations. (Given the inability of key team members and volunteers to otherwise sustain their work, if this target is not reached, the Coalition will be put on standby until and unless it is reached.)

  • Make the Coalition more widely known and validated, to attract more experts, NGOs, and states - and especially donors - via mainstream and specialized media articles and op-eds, and by appearing as a guest on relevant podcasts and at events.

By March 2025:

Assuming minimum funding has been secured:

  • Presented detailed, ad-hoc cases for a Baruch Plan for AI to the highest and most relevant advisors of the presidents of China, the US, and selected other states and IGOs, and to the relevant department heads of their security agencies - and sought to engage them via email, calls, open letters, video calls, one-to-one meetings, and hybrid or virtual events.

  • Explored the possibility of using the South African presidency of the 2025 G20 as a key platform for advancing our Coalition, by holding our 1st Harnessing AI Risk Summit with them in hybrid form in Spring 2025 in Geneva (or possibly in South Africa), and explored other intergovernmental forums as well. Here is a Jan 8th blog post making the case for South Africa, including the interest we already have from South African entities.

  • Reached out to key current and former officials of the NSA - including Mr. Luber, Director of the NSA Cybersecurity Directorate (in charge of AI safety), his predecessors Sager and Plunkett, and former NSA Director Mike Rogers - and to the newly nominated Director of National Intelligence, Tulsi Gabbard, via two Coalition advisors who are former NSA officials. In April 2024, the NSA Cybersecurity Directorate published a document called Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems, which "builds upon the previously released Guidelines for Secure AI System Development". We referred prominently to those Guidelines (mostly ignored by the media) as early as our Coalition Announcement of last September 10th, and in all our follow-up documents since. We will seek to do the same on the Chinese side.

  • Onboarded more suitable persons and organizations, raising our profile, authority, and capability to engage decision-makers in states over the following months;

  • Continued to make the Coalition more widely known and validated, to attract more experts, NGOs, and states - and especially donors - via mainstream and specialized media articles and op-eds, and by appearing as a guest on relevant podcasts and at events.

By May 2025:

  • Engaged more extensively with leading representatives of China, the US, and other states via email, calls, open letters, video calls, one-to-one meetings, and hybrid or virtual events.

  • The 1st Harnessing AI Risk Summit has been held over 2 days in a neutral or participating state (Geneva or New York). Between 3 and 7 globally diverse states or regional IGOs - including 1 key AI state or IGO, at least one medium-sized state, and at least one with rare or unique AI supply chain assets - have participated in the Summit in some form:

    • Ideally, during or within 2 weeks after the Summit, they have stated the intention to join the next Summit and pre-summits and to actively participate in ongoing State Working Groups - including work on evolving drafts of the Mandate and Rules of the Open Transnational Constituent Assembly for AI - and have possibly undersigned progressively more committing LoIs or MoUs;

  • Version 2 of the 90-page Case for a Coalition for a Baruch Plan for AI PDF has been published;

By July 2025:

  • States and possibly superpowers are increasingly engaged: developing a complete understanding of the AI situation and its risks and opportunities, gaining a sense of ownership of the initiative, and getting down to the technical and treaty-making details.

  • Work involving our advisors and relevant state representatives is widely underway on new documents and/or on extensions and appendixes of future versions of the Case, aimed at analyzing in depth the socio-technical and geopolitical challenges of successful global governance of the research and use of the most advanced AI.

By September 2025:

  • The 2nd Harnessing AI Risk Summit has been held over 3 days in a neutral or participating state (Geneva or New York). Between 9 and 12 globally diverse states or regional IGOs - including 2 key AI states or IGOs, at least two medium-sized states, and two with rare or unique AI supply chain assets - have participated in the Summit in some form.

    • Ideally, during or within 2 weeks after the Summit, they have stated the intention to join the next Summit and pre-summits and to actively participate in ongoing State Working Groups - including work on evolving drafts of the Mandate and Rules of the Open Transnational Constituent Assembly for AI - and have possibly undersigned progressively more committing LoIs or MoUs;

  • A Caucus of participating states for the Coalition, possibly under a different name, has been established at the UN General Assembly; 

  • Version 3 of the Case for a Coalition for a Baruch Plan for AI has been published a few weeks in advance of the 2nd Summit.

Second Stage

The Second Stage will start when a highly globally diverse group of states - making up at least 30% of world GDP and 30% of the world population, and including at least 2 states that are veto-holders of the UN Security Council or have nuclear weapons - has agreed on a Mandate and Rules v.1 for the Assembly. At that stage, they will convene the Assembly - with a date, place, and budget - for 6, 9, or 12 months later, inviting all states to participate.

Such Mandate and Rules should, we believe:

  • Be approved via a simple majority based on weighted voting, after consensus has been extensively striven for. Considering the vast disparity in power between states, in AI and more broadly, and recognizing that three billion people are illiterate or lack internet access, we foresee the voting power in such an assembly being initially weighted by population and GDP (see the illustrative formula after this list);

  • Be widely perceived as fair, resilient, neutral, expert, and participatory, including via transnational citizens' assemblies and a neutral, balanced mix of experts;

  • Ensure statutory safeguards are in place to maximize the chances that the resulting IGO will not degenerate, unduly centralize power, or be captured by one or a few nations, and that it will improve over time - via checks and balances, a solid federal structure, regular mandatory re-constituent assemblies, and other measures;

  • Ensure that the resulting charter is sent for ratification by the signing states and becomes valid if at least 9/13ths of them approve it - the same ratio (nine of thirteen states) used to ratify the US Constitution after the Constitutional Convention of 1787;

  • Establish an open dialogue and convergence paths with competing or complementary international governance initiatives, especially those led by digital superpowers, positioning the Assembly as a medium-term complement to possible more urgent, albeit less multilateral, initiatives aimed at tackling urgent global safety risks;

  • By this time, or sooner, it is hoped that the US and China, other powerful states like Russia and Israel, and the other permanent members of the UN Security Council will all have joined. If China, the US, and most of those other states have joined by then, participant states can exert enough positive and negative incentives on non-joining states to keep their AI capabilities under reliable control (provided that control of the AI supply chain - and the public availability or hackability of dangerous AI technologies - have been addressed early on via great-power coordination).
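As an illustration of the weighted-voting idea above - our own sketch, not an agreed formula; the actual weights and their balance would be for the participating states to decide - a state's voting weight could be a blend of its shares of world population and world GDP:

$$ w_i = \alpha \cdot \frac{\mathrm{pop}_i}{\sum_j \mathrm{pop}_j} + (1 - \alpha) \cdot \frac{\mathrm{GDP}_i}{\sum_j \mathrm{GDP}_j}, \qquad 0 \le \alpha \le 1 $$

Here $\alpha$ sets the balance between the two factors (e.g., $\alpha = 0.5$ for an even split), and a motion passes under simple majority when the weights of the states voting in favor sum to more than half of the total.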

As decided, 6, 9, or 12 months later, the Open Transnational Constituent Assembly for AI and Digital Communications is held in Geneva. Once it is clear that the new organization will be created, we believe it will be participated in by at least 60 states - highly globally diverse, making up at least 50% of world GDP and 50% of the world population, and including at least 3 states that are veto-holders of the UN Security Council or have nuclear weapons:

  • States not participating in the Assembly can ratify the resulting charter as well;

  • Charter review provisions enable non-participating states to join charter review conferences, initially held every 2 years, with equal rights;

Early ratifying states commit large enough funds to begin setting up the three agencies, including the Global Public Benefit AI Lab.