AI Labs' Calls for a Democratic Global Governance and Lab

While mostly overlooked by mainstream media, for over 18 months the leaders of Anthropic, OpenAI and Google DeepMind have repeatedly pointed to the dire need for strong, democratic and global governance and coordination.

Their calls are mostly justified by the need to stave off the reckless winner-take-all race they are in, and to ensure that the enormous power and wealth generated by the most potent future AIs will be shared democratically among humans.

Google DeepMind

In July 2023, Google DeepMind published "Exploring Institutions for Global AI Governance", a detailed "exploration" of the feasibility of creating four new IGOs for AI, including a Frontier AI Collaborative, an "international public-private partnership" to "develop and distribute cutting-edge AI systems, or to ensure such technologies are accessible to a broad international coalition." 
  Its CEO, Demis Hassabis, was interviewed in February 2024. When confronted with the governance of Google as a typical corporation, legally bound to maximize profit for its shareholders, and the prospect of its being in control of transformative AGI, he gave a reply that concluded: "In five or ten years, as we get closer to AGI, we'll see how the technology develops and what stage the world is in, and the institutions in the world like the UN and so on, which we engage with a lot, I think we have to see how that goes and the engagement goes in the next few years."
  In August 2024, Hassabis was posed this question at minute 39:50 of this DeepMind Podcast: "I know that your big mission is to build Artificial Intelligence that [will] benefit everybody, but how do you make sure that it does benefit everybody? How do you include all people's preferences rather than just the designers'?"
   To which Hassabis replied, "I think what's going to have to happen. I mean, it is impossible to include all preferences in one system, because by definition people don't agree. We can see that, unfortunately, in the current state of the world. Countries don't agree. Governments don't agree. We can't even get agreement on obvious things like dealing with the climate situation. So, I think that's very hard.
   What I imagine will happen is that, you know, we'll have a set of safe architectures, hopefully, that personalized AIs can build on top of. And then everyone will have, or different countries will have, their own preferences about what they use it for, what they deploy it for, and what can and can't be done with them. But overall, that's fine, that's for everyone to individually decide, or countries to decide themselves, just like they do today. But as a society, we [will] know that there are some provably safe things about those architectures. And then you can let them proliferate, and so on. So, I think that we kind of have to go through the eye of a needle in a way where, as we get closer to AGI, we've probably got to cooperate more, ideally internationally, and then make sure we build AGIs in a safe architecture way. Because I'm sure there are unsafe ways, and I'm sure there are safe ways to build AGI. And then once we get through that we can then open the funnel again, and everyone can have their own personalized pocket AIs, … if they want." 

OpenAI

OpenAI's CEO Sam Altman stated in October 2023 that control over OpenAI and advanced AI should eventually be distributed among all citizens of the world. He stated that “we shouldn’t trust” OpenAI if, "years down the road," its board "will not have sort of figured out how to start” transferring its power to "all of humanity." After OpenAI’s governance crisis, he repeated that people shouldn’t trust OpenAI unless it democratizes its governance, and he repeated that all of humanity should be shaping the future of AI.
  On February 24th, OpenAI stated in its revised mission, “We want the benefits of, access to, and governance of AGI to be widely and fairly shared.”
  In the wake of OpenAI’s proposal of a public-private “$7 trillion AI supply chain plan,” Altman called again for international governance at the UAE World Government Summit, but clarified that “it is not up to them” to define such constituent processes; he therefore called on states, such as the UAE, to convene a summit aimed at the creation of an “IAEA for AI,” to which the UAE’s Ministry of AI replied affirmatively. He even stated that if humanity jointly decided that pursuing “AGI” was too dangerous, OpenAI would stop all “AGI” development. "We'd respect that,” he replied.
   In March 2023, Altman even stated that his "platonic ideal" for the global governance of AI would be a global constituent assembly for AI, akin to the U.S. Constitutional Convention of 1787, that would establish a federal intergovernmental organization to manage AI in a decentralized and participatory way, according to the subsidiarity principle. 

Anthropic 

Anthropic's CEO, Dario Amodei, has given fewer interviews but has been as vocal as Altman in advocating for a global democratic governance of AI as the only way to avoid immense safety risks and enormous concentrations of power.
  In an August 2023 interview, in a seven-minute reply starting from this YouTube video frame, he clearly specified that solving the technical half of the AGI alignment problem would, by definition, create an immense undemocratic concentration of power unless the global governance half of AGI alignment was also solved, and that eventually some global body should be in charge of all advanced AI companies.
  After he explained the non-profit structure controlling the company, he was asked who would be in control if Anthropic found itself at the forefront of achieving world-changing breakthroughs in AGI. He replied, "That doesn't imply that Anthropic or any other entity should be the entity that makes decisions about AGI on behalf of humanity. I would think of those as different things. If Anthropic does play a broad role, then you'd want to widen that body to a whole bunch of different people from around the world. Or maybe you construe this as very narrow, and then there's some broad committee somewhere that manages all the AGIs of all the companies on behalf of anyone."
  He ended by saying, "I don't know. I think my view is that you shouldn't be overly constructive and utopian. We're dealing with a new problem here. We need to start thinking now about the governmental bodies and structures that could deal with it."

Other

OpenAI’s Chief Scientist Ilya Sutskever stated, “it will be important that AGI is somehow built as a cooperation between multiple countries.”

Yoshua Bengio has called for a multilateral network of AI labs, analyzing in fine detail the right balance of global and national authority over them. In a recent interview (1.5-minute video clip), Bengio suggested that top US AI labs may be further enticed towards "internationalization" rather than "nationalization" by the risk of near-term authoritarian political shifts in AI superpowers, which could leave those labs largely or wholly under the control of an unreliable, undemocratic or authoritarian power in the near future.

Hence, there are reasons to believe that, while US AI labs are very much subordinate to US government policy (and increasingly so as AI becomes a key military competitive issue), a bold democratic global governance initiative, inclusive of a Global Public Benefit AI Lab, could not only attract many top AI talents on the strength of its superior mission, but could also attract close collaboration or full participation by some leading US AI labs and other states.