AI Labs' Calls for a Democratic Global Governance and Lab
Most leading US AI labs, which last year loudly warned of safety risks and called for strong international governance of AI, have since grown highly skeptical of AI regulation and face ever greater pressure from investors and the US government to fall in line with the US AI arms race.
Hence, most labs, including Elon Musk’s xAI, are more or less overtly advancing at full speed to be the ones who define the values of a god-like AI (superintelligence) that they see as inevitable and unstoppable, sacrificing most or all precautions along the way for fear that others will beat them in the race.
Yet, while mostly overlooked by mainstream media, from early 2023 to early 2024, the leaders of Anthropic, OpenAI, and Google Deepmind have repeatedly pointed to the dire need for strong, democratic, and global governance and coordination.
Their calls were mostly motivated by the need to stave off the reckless winner-take-all race they are in and ensure that the enormous power and wealth generated by the most potent future AI is shared democratically among humans.
Google DeepMind
In July 2023, Google DeepMind published its white paper Exploring Institutions for Global AI Governance, a detailed “exploration” of the feasibility of creating four new IGOs for AI, including a Frontier AI Collaborative, an "international public-private partnership" to “develop and distribute cutting-edge AI systems, or to ensure such technologies are accessible to a broad international coalition.”
In a February 2024 interview, its CEO, Demis Hassabis, was confronted with the governance of Google as a typical corporation, legally bound to maximize profit for its shareholders, and with the prospect of its being in control of transformative AGI. He gave a reply that concluded: “In five or ten years, as we get closer to AGI, we'll see how the technology develops and what stage the world is in, and the institutions in the world like the UN and so on, which we engage with a lot, I think we have to see how that goes and the engagement goes in the next few years.”
In August 2024, Hassabis was asked at minute 39:50 of this DeepMind Podcast: “I know that your big mission is to build Artificial Intelligence to benefit everybody but how do you make sure that it does benefit everybody? How do you include all people’s preferences rather than just the designers’?”
To which Hassabis replied: “I mean, it is impossible to include all preferences in one system because by definition people don't agree. We can see that, unfortunately in the current state of the world. Countries don't agree. Governments don't agree. We can't even get agreement on obvious things like dealing with the climate situation. So, I think that's very hard”.
He continues: “What I imagine will happen is that [...] we'll have a set of safe architectures, hopefully, that personalized AI can build on top of. And then everyone will have, or different countries will have, their own preferences about what they use it for, what they deploy it for, and what can and can't be done with them. But overall, [...] as a society, we [will] know that there are some provably safe things about those architectures. And then you can let them proliferate, and so on. So, I think that we kind of have to go through the eye of a needle in a way where, as we get closer to AGI, we've probably got to cooperate more, ideally internationally, and then make sure we build AGIs in a safe architecture way. Because I'm sure there are unsafe ways, and I'm sure there are safe ways of building AGI. And then, once we get through that, then we can [...] open the funnel again, and everyone can have their own personalized pocket AGIs, … if they want.”
OpenAI
OpenAI's CEO Sam Altman stated in October 2023 that control over OpenAI and advanced AI should eventually be distributed among all citizens of the world. He stated that “we shouldn’t trust” OpenAI unless its board "years down the road will not have sort of figured out how to start” transferring its power to "all of humanity." After OpenAI’s governance crisis, he repeated that people shouldn’t trust OpenAI unless it democratizes its governance, and later reiterated that all of humanity should shape the future of AI.
On February 24th, OpenAI stated in its revised mission, “We want the benefits of, access to, and governance of AGI to be widely and fairly shared.”
In the wake of OpenAI’s proposal of a public-private “$7 trillion AI supply chain plan,” Altman called again for international governance at the UAE World Government Summit but clarified that “it is not up to them” to define such constituent processes. He therefore called on states, such as the UAE, to convene a summit aimed at the creation of an “IAEA for AI,” to which the UAE’s Minister of AI replied affirmatively. He even stated that if humanity jointly decided that pursuing “AGI” was too dangerous, OpenAI would stop all “AGI” development: "We'd respect that,” he claimed.
In March 2023, Altman even stated that his "platonic ideal" for building a global governance of AI would be a global constituent assembly for AI, akin to the U.S. Constitutional Convention of 1787, that would establish a federal intergovernmental organization to manage AI in a decentralized and participatory way, according to the subsidiarity principle.
Anthropic
Anthropic's CEO, Dario Amodei, has given fewer interviews but has been as vocal as Altman in advocating for a global democratic governance of AI as the only way to avoid immense safety risks and enormous concentrations of power.
In an interview in August 2023, in a seven-minute reply starting from this YouTube video frame, he clearly specified that solving the technical half of the AGI alignment problem would, by definition, create an immense undemocratic concentration of power unless the global governance half of AGI alignment was also solved, and that eventually, some global body should be in charge of all advanced AI companies.
Amodei was also asked who would be in control if Anthropic found itself at the forefront of achieving world-changing breakthroughs in AGI. He replied, “That doesn't imply that Anthropic or any other entity should be the entity that makes decisions about AGI on behalf of humanity. I would think of those as different things. If Anthropic does play a broad role, then you'd want to widen that body to a whole bunch of different people from around the world. Or maybe you construe this as very narrow, and then there's some broad committee somewhere that manages all the AGIs of all the companies on behalf of anyone.” He ended by saying, “I don't know. I think my view is that you shouldn't be overly constructive and utopian. We're dealing with a new problem here. We need to start thinking now about the governmental bodies and structures that could deal with it.”
Other
OpenAI’s then-Chief Scientist Ilya Sutskever stated, “It will be important that AGI is somehow built as a cooperation between multiple countries.” Yoshua Bengio called for a multilateral network of AI labs, analyzing in fine detail the right balance of global and national authority over them. In a recent interview (1.5 min video clip), Bengio suggested that top US AI labs may be further enticed towards "internationalization" rather than "nationalization" by the risk of near-term authoritarian political shifts in AI superpowers, which could leave those labs largely or wholly under the control of an unreliable, undemocratic, or authoritarian power.
Conclusions
Despite these perspectives, leading AI labs continue to race against one another toward AGI, due both to inter-lab economic competition and a growing sentiment within the US in favor of an AI race with China. Nevertheless, this race to the bottom is not inevitable: it stems from competitive dynamics too strong for any single lab to overcome on its own. A Global Public Benefit AI Lab offers a way out.
Past statements by major lab CEOs therefore provide some cause for optimism. While US AI labs are very much subordinate to US government policy, and increasingly so as AI becomes a key military competitive issue, there are reasons to believe that a bold democratic global governance, inclusive of such a Global Public Benefit AI Lab, could not only attract many top AI talents through its superior mission, but could also attract close collaboration or full participation by some leading US AI labs and other states.