The Baruch Plan: A Model for AI?

Back in 1946, as the world grappled with the looming threat of nuclear weapons, the United States proposed a bold solution to the United Nations to stave off the immense risks of nuclear war and to share the economic benefits of nuclear energy equitably.

In six dense pages, the Baruch Plan (original document, its Wikipedia page) proposed that all potentially dangerous capabilities, arsenals, research, source materials, and nuclear weapons and energy facilities anywhere on the globe fall under the strict control of a single entity: the International Atomic Development Authority. New nuclear energy plants would be located equitably worldwide and built by national programs, public or private, but strictly licensed, controlled and overseen by the Authority.

The Authority would enforce a global ban on the further development of more destructive and dangerous nuclear weapons, and would itself refrain from advancing them.

It took less than a year after the Hiroshima bomb for the unthinkable to become formal US diplomatic policy, and it remained so for many months, until the Cold War set in. Even more than the bomb itself, the deep work of the Acheson-Lilienthal Report, largely written by Oppenheimer and other leading scientists under a committee chaired by Dean Acheson, then US Under Secretary of State, was decisive.

The report was vital in convincing US President Truman that the risks of nuclear proliferation were too great to be countered by unilateral action, and that global bans and rules could be reliably enforced.

Just five days later, the Soviets counter-proposed the Gromyko Plan, which in some ways went even further by banning and destroying all nuclear weapons. Although most nations, along with figures like Albert Einstein and Bertrand Russell, supported international control, the two superpowers failed to agree on a middle ground. A loose coordination among their national security agencies was left to fill the void created by this political failure.

While an arms race ensued, with both superpowers going on to test bombs hundreds and even thousands of times more powerful than the Hiroshima bomb, nuclear catastrophe has so far been averted. Yet several near-miss accidents have brought us within an inch of the brink, and today the threat of nuclear armageddon is greater than ever. We should be immensely thankful to those security agencies, and to the luck that made mitigating catastrophic proliferation relatively manageable, though at the cost of a global surveillance apparatus.

The proposed Authority's governance was modeled on that of the UN Security Council: 11 member states, comprising the five permanent UN veto-holders and six non-permanent members elected every two years by the UN General Assembly. Crucially, no state would have a veto.

The Plan prescribed that the Authority would progressively extend its control over all existing and future advanced weapons, including biological and chemical agents.

The Plan would effectively have eliminated the veto of the UN Security Council's permanent members, turning the UN into nothing less than a federal and democratic world government and banning large-scale war.

Can the Baruch Plan serve as a model for AI?

We have a second chance with AI.

As in 1945, the release of ChatGPT set off a reckless, winner-takes-all race among superpowers and their leading firms for ever more capable forms of AI, up to AGI and ASI, in pursuit of economic and military supremacy, forcing all of them to underinvest in safety safeguards and push ahead at full steam to avoid falling behind.

As in 1945 and 1946, a consensus is fast expanding among leading scientists, citizens, national security agencies and heads of state on the need for a strong, global, federal organization to manage AI risks, given the enormous difficulty of preventing catastrophic AI accidents or proliferation.

As in 1946, the US has a lead of a few years over other nations.

As in 1946, the US today lacks sufficient control over AI supply chains and source materials to prevent others from advancing their capabilities exponentially.

As in 1946, in the face of AI's immense risks and opportunities, even superpowers could be compelled to reconsider highly ambitious plans for global governance to manage it. 

Support for the idea is mounting.

In recent years and months, there have been many experts' calls for a Baruch Plan for AI, including from Yoshua Bengio, the most-cited AI scientist, Ian Hogarth (UK AI Safety Institute), Allan Dafoe (Google DeepMind), Jack Clark (Anthropic), Jaan Tallinn (Future of Life Institute) and Nick Bostrom.

According to a recent University of Maryland survey, 77% of US voters support a comprehensive, detailed and strong international treaty for AI, similar to a Baruch Plan for AI. Hundreds of top researchers and experts expressed the same support by signing the AITreaty.org open letter last spring.

Over the last 18 months, calls from AI labs for democratic global governance and a global AI lab have been increasing, including from the CEOs of Anthropic, OpenAI and Google DeepMind.

But then, how do we avoid the failure of the Baruch Plan?