AI Experts’ Calls for a Baruch Plan for AI
Over a few frantic months following the Hiroshima Bomb in August 1945, a broad consensus emerged among top US nuclear scientists about the immense challenge of preventing a nuclear arms race and proliferation, as well as the technical feasibility of international controls.
Even more than the Hiroshima Bomb itself, it was their broad consensus, tireless activism and scientific work - and in particular their ultimate success in convincing US President Truman, his advisors and top officials - that led to the publication of the Acheson-Lilienthal Report, largely written by Robert Oppenheimer.
This report provided the scientific and geostrategic basis, and the full details, of the Baruch Plan, a proposal presented by the US to the UN on June 14th, 1946, to create an extremely bold, democratic, federal global organization entrusted with exclusive international control of all dangerous nuclear technologies, research, facilities and source materials.
The same is happening for AI today, albeit not nearly at the pace of the shocking AI advancements and proliferation of recent years, months and weeks.
Leading independent AI experts and scientists have been writing papers and public statements about AI's immense and urgent risks and calling for strong global and democratic governance of AI.
Some of them have suggested or outright called for a Baruch Plan for AI:
In 2014, Nick Bostrom referred to the Baruch Plan in his foundational book Superintelligence as a (positive) future scenario for the governance of AI. In 2018, Ian Hogarth, Chair of the UK AI Safety Institute, argued in a detailed blog post that the Baruch Plan is a necessary model for the governance of AI. In 2019, the US Center for Security and Emerging Technology (CSET) suggested that the Baruch Plan should serve as a model for AI governance.
In 2021, Allan Dafoe, President of the Centre for the Governance of AI and current Head of Long-Term AI Strategy and Governance at Google DeepMind - together with Waqar Zaidi - wrote a deeply researched 70-page paper, "International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons", with a strong focus on its application to AI.
In May 2023, The Economist reported that Jack Clark, Co-founder and Head of Policy at Anthropic, had suggested the Baruch Plan as a model for the global governance of AI.
In June 2023, Rufo Guerreschi, Executive Director of the Trustless Computing Association, extensively called for a Baruch Plan for AI in the first of three versions of its Harnessing AI Risk Proposal, announcing its Harnessing AI Risk Initiative.
In December 2023, Jaan Tallinn suggested the Baruch Plan in a podcast interview as a potential approach to the global governance of AI.
In July 2024, Yoshua Bengio, the most cited AI scientist, mentioned the Baruch Plan in a podcast interview as an "important avenue to explore to avoid a globally catastrophic outcome." In August 2024, the leading AI safety NGO ControlAI published a blog post making a case for it.
In September 2024, the Trustless Computing Association convened five other NGOs to announce the Coalition for a Baruch Plan for AI. In December 2024, the Coalition was launched with an Open Call and a 90-page Case for a Coalition for a Baruch Plan for AI, with contributions from 22 experts.
Granted, renowned experts have called for it. But why a Baruch Plan for AI? Can it serve as a model?
Ultimately, the Baruch Plan failed. How do we avoid the failure of the Baruch Plan?