1st Newsletter - Coalition for a Baruch Plan for AI 

One month has passed since we announced the Coalition for a Baruch Plan for AI. We've set out to foster the creation of a democratic and federal global organization for AI, bold and timely enough to reliably manage its immense risks and realize its astounding promise, just as the Baruch Plan proposed for nuclear technologies in 1946.

Recent news confirms that nothing less is required. Superpowers and leading AI labs are accelerating a reckless, winner-take-all race for AI dominance. In a very short time, this race is leading straight toward either an immense, unaccountable concentration of power and dystopia, or a loss of control and human extinction.

While still volunteer-based, we are proud to have already achieved significant progress towards our mission. This was made possible by the great dedication, frugality, and efficiency of our partners, advisors, and members of our working groups.

(Repost, Like, or Comment on this Newsletter on X/Twitter and on LinkedIn.)

Table of Contents:

  • News about Global Governance of AI

  • Updates about our Coalition

News about Global Governance of AI

  • A mad AI race to the brink. While AI investments and capabilities keep accelerating at a mind-boggling rate, it is ever clearer that states do, and will, call the shots. An all-in, reckless, winner-take-all race among superpowers and their top firms for AI-driven dominance is becoming entrenched, akin to the 1944-46 period for nuclear technologies. A host of "safety summits" has produced nothing more than generic, lightweight statements of intent.

  • Urgency of AI Safety Risk. In a Venice Statement on AI Safety Risk, leading US and Chinese AI scientists warned of potential catastrophic risks materializing within a few years, or even "at any time". They confirmed their view that AI safety should be recognized as a Global Public Good and suggested a number of constructive steps to be taken in the short term.

  • United Nations. The UN Global Digital Compact and the Final Report of the UN Advisory Body on AI were published. Both are welcome, as they emphasize the key risks of AI safety and concentration of power, the need for stronger global governance, the need to regulate AI together with digital communications, and the importance of civil society involvement. Yet they defer the advancement of stronger global institutions to a future when risk levels are higher, and lack any specifics on effective and inclusive treaty-making processes that could deliver the global governance we need.

  • US and China Treaty-making, Dialogue, Race, Prospects. From what is known publicly, any dialogue or treaty-making process about AI among the AI superpowers is moving very slowly, with only sporadic encounters producing no agreements that are remotely up to the challenge. A stark awareness of the immensity and inherently global nature of AI safety risks, akin to the awareness reached in 1946 for nuclear technologies, still appears very far away, much as it did in 1945. US rhetoric is still one of unilaterally rallying aligned and client states, and its top AI labs, in a pitched battle against the "evil China AI risk", as confirmed by the latest safety summit, organized by the US next month in San Francisco. Meanwhile, the risk of autocratic regimes becoming entrenched in either or both AI superpowers is ever more present, raising the prospect of a one-state or two-state AI-enabled global autocratic dystopia, if not extinction.

  • US and China National Regulations. While national legislation can neither prevent nor mitigate safety risks that are inherently global, efforts to promote it are useful for raising awareness of those risks. In the context of an all-out AI arms race, it is not surprising that neither the US nor China is implementing suitable national AI safety safeguards. The US legislative branch has produced no meaningful legislation so far. A US Presidential Executive Order, approved last November, sets some limits and reporting requirements, but falls far short of ensuring the needed enforcement and does nothing about open-source AI. A California bill setting some very moderate safeguards was vetoed by the Governor, who claimed he wanted a stronger one within a few months. China approved some laws to prevent private power concentration, regime subversion, and political manipulation, but, as far as we know, no serious controls on safety risks.

  • Top US AI Labs Charging Full Steam Ahead. The US government's rhetoric is increasingly echoed by leading AI CEOs and experts, as in Dario Amodei's Machines of Loving Grace and Leopold Aschenbrenner's Situational Awareness. Calls by China for a strong and inclusive global governance of AI are not yet supported by any decisive diplomatic or treaty-making action. While last year, alongside top AI scientists, top AI lab CEOs were the first to loudly warn about the immense risks of AI and the need for strong international coordination, they now appear to have grown highly skeptical of the prospects of a suitable global governance. Highly pressured by states' national security and geopolitical agendas, they are charging ahead to build Superintelligence before others do, to imprint their values on it and/or try to make it safe.

Updates about our Coalition

  • Held Events: We hosted a 3-hour presentation to 40 members of the Rotary Club International of Geneva and an Info Session at the UN Summit of the Future.

  • Upcoming Events: On November 7th we'll hold a hybrid 2nd Pre-Summit in Geneva. It follows our 1st Pre-Summit, held last June 12th alongside the G7 Summit in Italy. A much larger, two-day 1st Harnessing AI Risk Summit will be held November 28-29th, also in hybrid format in Geneva.

  • Onboarding NGOs and experts: We started engaging leading NGOs and experts who support a strong, timely, and democratic global governance of AI - including seven top AI experts who have referred to the Baruch Plan as a governance model for AI. We've onboarded 8 new experts and 5 NGOs, and created new Working Groups.

  • Onboarding States: In the next few weeks, we plan to start reaching out to states with a cohesive and comprehensive offer to join our Coalition and Summits. We'll build on the extensive interest we received last Spring and Summer, including bilateral meetings with several interested states, especially from Africa and Europe. We've received formal written interest from the head of mission to the UN of one of the two largest regional intergovernmental organizations.

  • New Content

  • New Partners' Content

    • An influential paper, A Narrow Path, was co-authored by the head of our founding member AITreaty, Tolga Bilge. A great contribution, it addresses head-on the risk of loss of control of Artificial Superintelligence (ASI), putting forth a comprehensive proposal for a phased creation of global institutions for AI.

    • The Trustless Computing Association published an extensive v.4 of its Harnessing AI Risk Proposal, which expands upon the Coalition's key ideas.

    • The Trustless Computing Association submitted a proposal to flexHEG, a call to develop critical hardware-based systems for verifying and enforcing the global compliance of advanced AI systems with safety and ethical guidelines. The flexHEG call was issued by Jaan Tallinn's Survival and Flourishing Fund, inspired by the Guaranteed Safe AI concepts of Davidad, Bengio, Russell, and Tegmark. The proposal was based on the recently published revised version of TCA's Master Plan for a Trustless Computing Certification Body.

Join Us, Participate or Show Your Support

We invite you to join us and participate in shaping this critical initiative.

Warm Regards,

The Coalition for a Baruch Plan for AI

(Repost, Like, or Comment on this Newsletter on X/Twitter and on LinkedIn.)

Rufo Guerreschi

I am a lifetime activist, entrepreneur, and researcher in the areas of digital civil rights and leading-edge IT security and privacy, living between Zurich and Rome.

https://www.rufoguerreschi.com/