Case for a Coalition for a Baruch Plan for AI (v.1)

Published on December 18th, 2024, the Case is an extensive 90-page “pre-print” paper by two co-authors and 22 distinguished contributors, detailing the aims and plans of the Coalition for a Baruch Plan for AI.

As a key part of the Coalition's launch on the same date, the Case includes a copy of our Open Call and of the Executive Summary webpage as of the date of printing. Appendix 1.1 to the Case is being drafted for January 2025, while a version 2 is due in April 2025.

Download the Paper from ResearchGate

Abstract (as published)

This text makes a case for a Coalition for a Baruch Plan for AI. It presents a comprehensive proposal for the timely establishment of a global representative intergovernmental organization to exclusively manage all potentially dangerous AI research, facilities, and source materials, for both civilian and military uses of AI, similar in scope to the plan proposed by the United States to the United Nations on June 14th, 1946, with regard to nuclear technologies, via the Baruch Plan.

It argues that nothing less can be expected to successfully manage the immense, urgent, and rapidly mounting risks that AI poses to humanity, resulting from misuse, accidents, concentration of power and wealth, loss of human control, and human extinction, while realizing AI's astounding potential benefits for all of humanity.

It argues that current global governance initiatives by superpowers or intergovernmental organizations are radically insufficient in their urgency, scope, scale, and inclusivity, and that they rely on an unstructured treaty-making model that has utterly failed for nuclear weapons, for climate change, and for the Baruch Plan itself.

It argues that we should instead rely on the radically more efficient and faster intergovernmental constituent assembly treaty-making model, pioneered in the constituent process that led to the US federal Constitution of 1787, and later used with success in Europe and Asia.

The mandate of such an assembly should include creating highly empowered agencies with functions, tools, and governance proportionate to the challenge. We group these into an AI Safety Agency, an IT Security Agency, and a Global Public Benefit AI Lab.
