
Coalition for a Baruch Plan for AI
We stand on the brink of an irreversible slide towards AIs that are capable enough to subjugate or annihilate humanity, or enable a state or firm to do so.
While it is late, we can still prevail and realize AI's astounding opportunities, but only if we come together skillfully like never before.
While an immense AI-driven concentration of power and wealth is unfolding before our eyes, leading US and Chinese AI scientists recently warned that catastrophic risks of AI for human safety could materialize in a few years, or even "at any time". Nearly all states are powerless, on their own, to tackle these risks, and even the US and China cannot avert catastrophic AI proliferation without wide global participation.
The Coalition aims to bring together pioneering NGOs and states to build a bold, federal, global intergovernmental organization for Artificial Intelligence - akin to the bold 1946 Baruch Plan for nuclear technologies - while firmly upholding the subsidiarity principle. To succeed, it is paramount to adopt an open treaty-making process that is extraordinarily bold, timely and effective, based on proven models.
Announced on September 10th by:
Read our latest blog post:
From a Mad Race to Superintelligence to a Global Treaty for Safe AI for All.
What is the Baruch Plan?
On June 14, 1946, as a global race to build ever more powerful nuclear bombs was anticipated, the United States proposed to the United Nations the Baruch Plan: a bold proposal to create a new global democratic agency that would bring under exclusive international control all dangerous research, arsenals, facilities and supply chains for nuclear weapons and energy - to be extended to all other globally dangerous technologies and weapons. While it ultimately failed, the Plan remained for years the official U.S. nuclear policy.
The Idea of a Baruch Plan for AI
Current AI governance initiatives by superpowers, IGOs and the UN are severely insufficient in scope, timeliness, inclusivity and participation. Nothing less than a Baruch Plan for AI can reliably tackle AI's immense risks to human safety and of unaccountable concentration of power and wealth, while realizing its astounding potential. Awareness of this need is mounting. Some of the most influential AI experts and leaders have referred to, or outright called for, the Baruch Plan as a model for AI governance, including Yoshua Bengio, the most cited AI scientist, Ian Hogarth (UK AI Safety Institute), Allan Dafoe (Google DeepMind), Jack Clark (Anthropic), Jaan Tallinn (Future of Life Institute), and Nick Bostrom.
8-Minute Video Explainer
A Better Treaty-Making Method
Awareness of the risks and of the need for coordination is rapidly growing. But turning such consensus into a suitable and timely treaty among even a moderate number of powerful states is impossible with the dominant treaty-making methods, as shown by the utter failure of nuclear and climate treaties. We need a much more effective, high-bandwidth, time-bound, and inclusive treaty-making process: one with a set start and end date, a clear mandate, and supermajority rules to prevent any state's veto. We need something as extreme as the circumstances, but also historically proven, as we only have one shot. Fortunately, we can rely on the proven successes of the intergovernmental constituent assembly treaty-making model, which in its most successful instances quickly produced the US and Swiss federations.