AI Experts’ Calls for a Baruch Plan for AI

While calls by the leaders of the top US AI labs for strong, democratic global governance of AI and a global AI lab abound, some of the most influential AI experts have gone further, referring to the Baruch Plan as a model for AI governance, as we advocate via our Coalition for a Baruch Plan for AI.

In the few frantic months after the Hiroshima bomb of August 1945, a broad consensus emerged among top US nuclear scientists about the immense challenge of preventing an arms race and wide proliferation, and about the feasibility of international controls.

Even more than the bomb, it was their consensus—and their tireless activism, scientific work and ultimate success in convincing US President Truman, his advisors and top officials—that led to the publication of the Acheson-Lilienthal Report, largely written by Oppenheimer, and then the presentation of the Baruch Plan to the UN in June 1946.

The same is happening for AI. Top Western AI experts and scientists are writing papers and public statements about AI's immense and urgent risks and calling for strong global and democratic governance of AI inspired by the Baruch Plan.

Most recently, on July 7th, 2024, Yoshua Bengio, the most cited AI scientist, mentioned the Baruch Plan in a podcast interview as an "important avenue to explore to avoid a globally catastrophic outcome."

In December 2023, Jaan Tallinn suggested the Baruch Plan in a podcast interview as a potential solution to the global governance of AI.

In May 2023, The Economist reported that Jack Clark, Cofounder and Global Head of Policy of Anthropic, suggested the Baruch Plan as a model for the global governance of AI.

In 2021, the application of the Baruch Plan to AI was explored in depth and endorsed as an inspiration in a paper titled "International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons" by Waqar Zaidi and Allan Dafoe, President of the Centre for the Governance of AI in Oxford and current Head of Long-Term AI Strategy and Governance at Google DeepMind.

In 2018, Ian Hogarth, Chair of the UK AI Safety Institute, argued in a detailed blog post that the Baruch Plan is a necessary model for the governance of AI.

In 2014, Nick Bostrom referred to the Baruch Plan in his foundational book Superintelligence as a (positive) future scenario for the governance of AI.

In June 2023, Rufo Guerreschi, Executive Director of the Trustless Computing Association, extensively referred to the Baruch Plan as a critical inspiration for the global governance of AI in the first of three versions of the Association's Harnessing AI Risk Proposal, which announced its Harnessing AI Risk Initiative. This eventually led to the convening of the Coalition for a Baruch Plan for AI.

Also, in 2019, the US Center for Security and Emerging Technology (CSET) suggested that the Baruch Plan should serve as a model for AI governance, and more recently, the NGO ControlAI published a short post making the case for it.

Sure, renowned experts have called for it. But why a Baruch Plan for AI? Can it serve as a model?

Ultimately, the Baruch Plan failed. How do we avoid the failure of the Baruch Plan?