Open Call for a Coalition for a Baruch Plan for AI

December 18th, 2024


Abstract

We call on all heads of state, President-Elect Trump and President Xi, and their advisors and security agencies, to engage in an open global treaty-making process for safe and fair AI of radically unprecedented scope, urgency and effectiveness.


We stand on the brink of an irreversible slide toward AIs that are capable enough to subjugate or annihilate humanity or enable a state or firm to do so. 

While leading US and Chinese AI scientists have warned that the catastrophic risks of AI to human safety could materialize within a few years, or even "at any time," an immense AI-driven concentration of power and wealth is unfolding.

We are not indulging in hyperbole here: the risks are far beyond anything humanity has ever seen. So the needed global organization and treaty-making will have to be of a similar scale: radical, far-reaching, and far outside business as usual.

While it is quite late, we can still prevail and realize AI's astounding opportunities - but only if we come together skillfully like never before.

This requires nothing less than a powerful and democratic global federal organization with exclusive control over all potentially dangerous AI research, facilities, and source materials - akin in scale to the one proposed by the US to the UN in 1946 to manage nuclear weapons and energy via the Baruch Plan.

The United States and its leading AI firms are increasingly aligned in a winner-take-all ideological AI arms race against China, pulling along allies and client states. China has consistently called for strong, global and equitable AI coordination, yet it has taken little action in that direction and appears to be pursuing the same race and hegemonic dynamics.

While some dialogues have occurred, the situation is overwhelmingly one of an accelerating race. Meanwhile, all other states are completely powerless on their own to face the primary risks to their citizens.

While consensus on the risks and the need for strong global coordination among states and citizens is already widespread and rapidly increasing, initiatives by intergovernmental organizations and the United Nations are severely lacking in scope, urgency, mandate and/or inclusivity. Such initiatives are largely victims of, and instruments in, a China-US AI arms race, and they rely on the same treaty-making model that has consistently failed for climate change and nuclear weapons - starting with the Baruch Plan itself.

Hope largely rests on enacting a treaty-making model for AI that is radically better: much more effective, high-bandwidth, resilient, faster and inclusive. We need something radical, as the circumstances demand, yet based on proven models, since we may only get one chance. 

Such a model must balance extreme urgency with the utmost care, to ensure that the resulting global organization preserves and expands the freedoms of states, communities, and citizens.

Such a model must be highly participatory and high-bandwidth to succeed: engaging tens of thousands of diplomats and experts, security officials, independent experts, leading firms, and citizens' assemblies - all working intensely, in person, for months on end, in a structured process.

Such a model needs a clear mandate to align expectations, reasonable supermajority rules to prevent veto deadlock, and fixed start and end dates to ensure timely results.

Fortunately, a proven treaty-making model fits all those criteria: the intergovernmental constituent assembly. This model was pioneered with great success in the constituent process that led to the US federal constitution in 1787, followed by several other successes in Asia and Europe. 

While such a model has historically proven to be faster than any other model for multilateral treaties and able to achieve vastly more resilient and impactful treaties, it would benefit from a fast-tracked, temporary bilateral China-US AI treaty to enact controls for the most urgent and immediate risks.

This diplomatic surge must be paired with a global public-private AI consortium - funded at hundreds of billions of dollars - to build the most advanced, safe and controllable AI and a global data and communications platform for all to share, to be started immediately by even a small open coalition of states, firms and funders.

If we succeed, the benefits of AI will be astounding. Ever-more-advanced, safe and controllable AIs will bring tremendous benefits to humanity. Success would establish a governance model extensible to other inherently global challenges and public goods, like climate, nuclear, bioweapons, and our digital public sphere.

Hence, we call on all heads of state, President-Elect Trump and President Xi, their advisors and security agencies, to engage in an open global treaty-making process for safe and fair AI of radically unprecedented scope, urgency and effectiveness.

Given that the superpowers are deadlocked, and other states - each locked into complex geopolitical constraints - are hesitant to take the lead, enlightened philanthropists and NGOs can and should play a unique, vital and historic role.

The challenge is enormous. Success is uncertain. It may be tempting to succumb to powerlessness by sticking our heads in the sand or watching doom unfold from the sidelines.

But how can we find true peace or look our children in the eyes if we do not at least try to do something to save their future? We have the unique privilege of having agency in the most consequential years of human history. 

After all, is there anything more exhilarating and fulfilling than striving with good spirits and firm resolve to steer the course of history toward a positive future?

Let’s strive together with joy in doing the best we can to solve humanity's greatest challenge, ensuring that AI turns out to be humanity's greatest invention rather than its worst - and its last.

- - -


Inspirations: This Open Call builds upon similar calls and proposals, including the Open Call for the Harnessing AI Risk Initiative by the Trustless Computing Association, the Open Call Urging an International AI Treaty (AItreaty.org) by Tolga Bilge, and the Proposal for an International AI Treaty by Pause AI.

Undersign the Open Call

Please state below your personal support for the Open Call, either in its entirety or for its Abstract alone. By doing either, you do not necessarily endorse other content on this website. You will soon receive a confirmation email with the entire text of this webpage.


Author

(Italy) Rufo Guerreschi
Coordinator of the Coalition for a Baruch Plan for AI and
Executive Director at the Trustless Computing Association.

Contributing Undersigners

(USA) Felix De Simone*
US Outreach Lead of Pause AI; Member of the Executive Committee of CBPAI.

(USA) Wendell Wallach*
Emeritus Chair of Technology and Ethics Research Group at Yale Interdisciplinary Center for Bioethics, and Carnegie Fellow.

(China) Xiaohu Zhu
Founder at Center for Safe AGI; AI Existential Safety Researcher at Future of Life Institute; Foresight Fellow at Foresight Institute. CBPAI Advisor.

(UK/Canada/Russia) Boris Taratine
IT security expert. Former Chief Cyber Security Architect at Lloyds Banking Group and Principal Security Architect at VISA Europe. CBPAI and TCA Advisor.

(Switzerland/US) Mark Barwinski*
Former Group Head of Cybersecurity Operations at UBS, the largest wealth management bank in the world. Previously at Siemens and the TAO unit of the US National Security Agency.

(Netherlands) Joep Meindertsma*
Spokesperson and Information Infrastructure Lead; Member of the Executive Committee of CBPAI.

(Norway) Tolga Bilge*
Policy Researcher at ControlAI; Member of Secretariat at CBPAI.

(The Gambia) Amb. Muhammadou Kah
Chair of the UN Commission on Science and Technology for Development; Ambassador of The Gambia to Switzerland and the UN in Geneva. CBPAI Advisor.

(UK) Robert Whitfield
Chair of the Working Group on AI of the World Federalist Movement. CBPAI Advisor.

Organizational Undersigners

Trustless Computing Association is a Geneva-based international non-profit dedicated since 2015 to facilitating a democratic, timely and efficient treaty-making process to create proper global federal governance of AI and digital communication - primarily through its Harnessing AI Risk Initiative.

Pause AI Global is an international non-profit promoting a proposal for a robust global democratic organization to regulate AI. It has gained extensive mainstream media coverage and is widely known in AI safety circles.

International Congress for the Governance of Artificial Intelligence (ICGAI) is an international initiative (now dormant; active from 2018 to 2021) that brought together impressive and globally diverse advisors and speakers to advance participatory global governance of AI.

AITreaty.org is an initiative to advance the development and ratification of an international AI treaty to reduce the catastrophic risks of AI and ensure the benefits of AI for all.

European Center for Peace and Development and University of Peace of the United Nations (ECPD). Established in 1979 by the UN General Assembly, it is a leading educational, research and advocacy institution for the promotion of global peace, reconciliation, socio-economic development and international cooperation.

Center for Existential Safety, a US-based global organization whose aim is to galvanize collective action to ensure humanity survives this decade. If we can achieve that, then we are likely to create an unimaginably good future for all.

Association for Long Term Existence and Resilience. Works to investigate, demonstrate, and foster useful ways to improve and safeguard humanity's future in the short and long term.

Pause AI USA. A grassroots organization advocating for an international treaty to pause advanced AI training until we know how to proceed safely.

Individual Undersigners

(Undersigning is exclusively in personal capacity)

(USA) Richard Falk
Emeritus Professor at Princeton University; Advisor of the TCA and Expert in Global Democratic Governance. CBPAI Advisor.

(Italy) Yernur Kairly
AI Existential Risk Researcher and Advocate; Research Associate at the Consortium for AI and Existential Risks.

(USA) Holly Elmore
Executive Director of PauseAI US, a grassroots organization advocating for an international treaty to pause advanced AI training until we know how to proceed safely.

(USA/Russia) Nikola Danaylov
Futurist Keynote Speaker on Tech, Story, and Ethics.

(USA) Chase Cunningham
Co-founder of the Zero Trust IT security paradigm. Former Chief Cryptologist at the US National Security Agency.

(UK) David Dalrymple
Director of ARIA’s Safeguarded AI Program; Lead Author of the Towards Guaranteed Safe AI paper. CBPAI Advisor.

(Italy) Roberto Savio
Director of External Relations at the European Center for Peace and Development and University of Peace of the United Nations (ECPD).

(Brazil) Edson Prestes
Member of the Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM). Head of the Robotics Research Group, Informatics Institute, Federal University of Rio Grande do Sul.

(Switzerland) Alexandre Horvath
Chief Information Security Officer & Data Protection Officer at Cryptix AG. CBPAI and TCA Advisor.

(Israel) David Manheim
Director and Founder of the Association for Long Term Existence and Resilience.

(Brazil) Flavio S Correa da Silva
Former Director of the Research Center on Open Source Software at the University of São Paulo. CBPAI Advisor.

(France) Maxime Fournes
CEO of Pause AI France.

(Switzerland) Jan Camenisch
Chief Technology Officer at DFINITY. Advisor at TCA and CBPAI. Holder of over 140 patents.

(USA) Mehmet Sencan
Hardware security and Guaranteed Safe AI Expert; Hardware Research Consultant at Atlas Consulting. CBPAI Advisor.

(USA) John Havens
Founding Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Member, World Economic Forum Global Future Council on Human Rights and Technology.

(Italy) Davide Cova
International Human Rights Expert and Buddhist Teacher; Director of TCA.

Undersigners of the Abstract of the Open Call

(Undersigners of the 37-word Abstract only, i.e. without endorsing the entire text of the Open Call or other content on this website)

(UK) David Wood
Chair of the London Futurists and Advisor of TCA; Fellow at the Institute for Ethics and Emerging Technologies. CBPAI Advisor.

(Northern Ireland) Nell Watson
AI Ethics & Safety Engineer. President of EURAIO and EthicsNet.

(UK) Mikhail Samin
Executive Director of the AI Governance and Safety Institute.

Endorsement by States

None to Date. Be the first.

Legend:

* Tentative Endorsement: endorsed by a high official who is interested in reviewing the final version before giving a Preliminary To-Be-Confirmed Endorsement.
** Preliminary To-Be-Confirmed Endorsement: endorsed by high diplomatic officials but not yet confirmed by the Ministry of Foreign Affairs.
*** Formal Endorsement: endorsed by the Ministry of Foreign Affairs or the Office of the Prime Minister.