The Global Public Benefit AI Lab

The Mandate of the envisioned Open Intergovernmental Constituent Assembly for AI, fostered by the Coalition, will need to include the creation of an open, public-private, democratically-governed consortium - among a critical mass of diverse states and AI labs - aimed at achieving and sustaining solid global leadership or co-leadership in human-controllable AI capabilities, technical alignment research and AI safety measures. We will refer to it as the Global Public Benefit AI Lab.

Why is such a Lab needed to globally master the risks AI poses to human safety, and of unaccountable concentrations of power and wealth, and to realize its positive potential?

  • Globally regulating and enforcing bans on dangerous AIs requires expertise that can only be accrued by developing the most advanced AI while doing so safely.

  • Only by realizing a Global AI Lab can we ensure that the benefits of safe AI will be widely and equitably shared.

  • The development of such a Lab by early-joining states would be key to enticing other states to join, ensuring massive economic and digital sovereignty advantages that incentivize their participation.

At A Glance

The Lab will accrue the capabilities and resources of member states and participating firms, redistributing dividends and control both to member states and directly to citizens, while stimulating and safeguarding the initiative of state and private firms for innovation and oversight.

The Lab is one of three agencies of a new intergovernmental organization foreseen by the Mandate of the Constituent Assembly. This venture aims to catalyze a critical mass of globally-diverse states in a global constituent process to build a new democratic IGO and a consortium that will jointly build the most capable safe AI technologies, and reliably ban unsafe ones. This opportunity is open to all states and firms, allowing them to participate on an equal footing.

The Lab will aim to achieve and sustain a resilient “mutual dependency” within its broader supply chain relative to superpowers and future public-private consortia. This will be accomplished through joint investments, diplomacy, trade relations and the strategic industrial assets of participant states. Additionally, the Lab will remain open to merging with these entities into a unified global organization and a unified global AI lab, to ensure AI remains under the control of humans and of humanity as a whole.

The Lab will require an investment of at least $15 billion, primarily funded via project financing, buttressed by pre-licensing and pre-commercial procurement agreements from participating states and client firms.

Precedents and Models

The initiative draws inspiration from the $20 billion International Thermonuclear Experimental Reactor (ITER), a multinational consortium focused on nuclear fusion energy. Unlike ITER, the Lab will work on current state-of-the-art generative AI technologies, which are already known, and expected, to yield massive exploitable capabilities and substantial economic benefits within a predictable timeframe.

Our initiative is also inspired by CERN, a joint venture established in 1954 by European states to advance nuclear research, which later welcomed participation from non-European countries. With an annual budget of $1.2 billion, CERN serves as a model, along with other multinational infrastructure projects ranging from dam construction to the International Space Station (ISS).

Yet, the most appropriate inspiration for the Lab, and for the Initiative more broadly, is the 1946 Baruch Plan. This proposal by the United States to the United Nations suggested the formation of a global multinational consortium and organization to centralize the development and research of nuclear weapons and nuclear energy generation, and it came close to achieving its ambitious goals.

Financial Viability and the Project Finance Model

The Lab will generate revenue from governments, firms and citizens via licensing of enabling back-end services and IP, leasing of infrastructure, direct services, and issuance of compliance certifications. 

Considering the proven scalability, added value and profit potential of current open-source LLM technologies, coupled with the potential for pre-commercial procurement contracts with states to support its financial viability, the initial funding for this project could predominantly adopt the project finance model. This funding could be sourced through sovereign and pension funds, intergovernmental sovereign funds, sovereign private equity and private international finance.

Undue influence of private funding sources on governance will be limited via various mechanisms, including non-voting shares.

A Public-Private Partnership Model

Participant AI labs will contribute their expertise, workforce and a portion of their intellectual property in a manner that promotes their dual objectives: benefiting humanity, and enhancing their stock valuations. Additionally, they will maintain their ability to innovate at both the foundational and application levels, all within established safety parameters.

Participant AI labs would join as innovation and go-to-market partners, within a consortium democratically managed by the participant states. This arrangement is open to all labs and states. Early participants will be granted considerable, but temporary, economic and decision-making advantages:

  • As innovation partners and IP providers, they will be compensated via revenue share, secured via long-term pre-licensing and pre-commercial procurement agreements from participating states and firms.

  • As go-to-market partners, they will gain permanent access to the core AI/AGI capabilities, infrastructure, services and IP developed by the Lab. 

    • These capabilities and IP will, aspirationally, far outcompete all others in capability and safety, and be unique in the actual and perceived trustworthiness of their safety, security and democratic accountability.

    • Participant AI Labs will maintain the freedom to innovate at both the base and application levels, and retain their ability to offer services to states, firms and consumers, within some limits.

The partnership terms for AI labs will be strategically designed to maximize the potential for consistent growth in their market valuation. This approach aims to attract the involvement of AI labs, including major technology firms, which are typically structured as US for-profit entities. These firms are legally required to prioritize maximizing shareholder value, as mandated for their CEOs. The proposed structure ensures alignment with their governance models and incentivizes their participation.

This framework will enable such labs to continue to innovate in both capabilities and safety across the foundational and application layers. It is designed to steer clear of race-to-the-bottom scenarios among states and labs, thereby advancing both their mission and their market valuation in a structured and responsible manner.

Size of the Initial Funding

Given that the cost of state-of-the-art LLM "training runs" is expected to grow 500-1000% annually, and that several leading US AI labs have announced billion-dollar LLM training runs for this year, and likely ten-billion-dollar runs for the next, the Lab would require an initial investment of at least $15 billion. This substantial funding is essential for the Lab to effectively meet its capacity and safety goals, and to achieve financial independence within 3-4 years. To maintain its position at the forefront of technology, this amount will need to increase by 5-10 times annually, unless significantly more efficient and advanced AI architectures become available.
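
To make these growth assumptions concrete, the following minimal sketch (in Python, using only the figures cited above as hypothetical inputs) projects how the required funding would compound from the assumed $15 billion baseline if costs grow 5-10 times per year:

```python
# Illustrative projection only: compounds the assumed $15B initial investment
# by the 5x-10x annual growth range cited above for frontier training-run costs.

def funding_projection(initial_usd_billion: float, annual_multiplier: float, years: int) -> list[float]:
    """Return the projected yearly funding requirement, in USD billions."""
    projection, amount = [], initial_usd_billion
    for _ in range(years):
        projection.append(round(amount, 1))
        amount *= annual_multiplier
    return projection

if __name__ == "__main__":
    for multiplier in (5, 10):  # lower and upper bounds of the assumed growth range
        print(f"{multiplier}x per year:", funding_projection(15, multiplier, 4), "USD billions")
```

Even at the lower bound of this assumed range, the yearly requirement would approach $1.9 trillion by the fourth year, which is why the caveat about significantly more efficient architectures matters.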

Supply-Chain Viability and Control

Acquiring and sustaining access to the specialized AI chips essential for running leading-edge large language model (LLM) training runs is expected to be challenging, due to the anticipated surge in global demand and the implementation of export controls.

This risk can likely be mitigated through joint diplomatic dialogue that emphasizes the open and democratic principles of the initiative. Additionally, the initiative can further secure its position by engaging states that host firms developing advanced AI chip designs or rare or unique AI chip fabrication equipment, or by pursuing its own AI chip design and manufacturing capabilities. Investing in newer, safer, and more powerful AI software and hardware architectures, beyond large language models, will also strengthen the initiative’s technological foundation and resilience.

Ensuring that member states have access to adequate energy sources, appropriate data centers and resilient network architecture will require timely, speedy and coordinated action in the short term, and careful planning for the long term.

Consequently, the Lab will seek to achieve and sustain a resilient “mutual dependency” in its wider supply chain vis-a-vis superpowers and future public-private consortia. This will be pursued through joint investments, diplomatic engagements, trade relations and the strategic industrial assets of participant states. Additionally, the Lab remains open to merging with these entities into a unified global organization and a single global AI lab, dedicated to ensuring that AI remains under the control of humans and humanity - as detailed in this recent article on The Yuan.

Talent Attraction Feasibility

Achieving and maintaining a decisive advantage in advanced AI capability and safety relies on attracting and retaining top AI talent and experts. This is particularly crucial if, or until, AI superpowers and their firms become involved. Success in this area entails not only engaging individuals, but also securing the participation of leading AI labs.

Talent attraction in leading-edge AI is driven by compensation, social recognition and mission alignment, and requires very high security and confidentiality.

Staff will be paid at their current global market rate, and their social importance will be highlighted through proper communications. Member states will be mandated to support top-level recruitment and to enact laws that ensure that knowledge gained is not leaked. Additionally, staff selection and oversight procedures will surpass the rigor found in the most critical nuclear and bio-lab facilities.

The unique mission and democratic nature of the Lab would likely be perceived by most top global AI researchers, even in non-member states, as ethically superior to the alternatives. This perception mirrors how OpenAI originally, and Meta more recently, successfully attracted top talent through their commitment to an "open-source" ethos. This advantage is particularly significant given the existing concerns regarding the trustworthiness of the leadership and governance structures of leading US AI labs.

Just as OpenAI attracted top talent from DeepMind thanks to a mission and approach perceived as superior, and top talent from OpenAI went on to create Anthropic for the same reasons, the Lab should be able to attract top talent as the next "most ethical" AI project. Substantial risks of authoritarian political shifts in some AI superpowers, as warned about by Yoshua Bengio (1.5 min video clip), could also drive top talent to join the Global AI Lab to avoid their work becoming instrumental to future national or global authoritarian regimes.

Open Source, "Translucency" and Public Safety

The new organization will need to define its approach to the public availability of source designs of critical AI technologies. The latter can bring huge advantages and immense risks, depending on the circumstances, and needs to be carefully regulated, but is currently framed in the public debate, quite idiotically, as a binary “all open” or “all closed” choice. 

A sensible approach to open source, we believe, will be to require it in nearly all critical software and hardware stacks of an AI system or service, as a complement to extremely trustworthy and transparent socio-technical systems and procedures around them.

Yet, open source alone is insufficient to ensure that the code is both sufficiently trustworthy and widely trusted by users, states and citizens. Therefore, all source code of critical components will also be required to undergo an extremely high level of security review relative to its complexity, performed by a diverse set of incentive-aligned experts.

Also, none of the current open source licenses (not even the GNU Affero GPLv3) requires those running open source code on server infrastructure - such as an AI lab providing its service via apps, a web interface or an API - to publicly provide a (sufficiently trustworthy) proof that the copy of the code downloaded by an end-user matches the one actually being run. This needs to be ensured.
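
As an illustration of the kind of verification this would entail, here is a minimal sketch (in Python, with hypothetical file names and a placeholder digest) of the end-user side of such a check: comparing the hash of a downloaded source archive against a digest the service operator publishes. A complete scheme would additionally require reproducible builds and remote attestation to prove that the published digest corresponds to the code actually running on the server.

```python
# Minimal, hypothetical sketch of the end-user side of source verification:
# check that a downloaded source archive matches the digest the operator
# publishes for the code it claims to be running. This alone does NOT prove
# what the server executes; reproducible builds and remote attestation
# would be needed for that stronger guarantee.

import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_digest(downloaded_archive: Path, published_digest: str) -> bool:
    """Return True if the downloaded archive matches the operator-published digest."""
    return sha256_of_file(downloaded_archive) == published_digest.strip().lower()

if __name__ == "__main__":
    archive = Path("ai-service-src.tar.gz")  # hypothetical downloaded source release
    published = "0" * 64                     # placeholder for the operator-published SHA-256
    print("match" if matches_published_digest(archive, published) else "mismatch - do not trust")
```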

That said, there will be exceptions for components whose proliferation could cause very substantial safety risks, such as dangerously powerful LLM weights, which could proliferate not only through publication but also through hacking or leaks.

The trustworthiness of such components, as well as the safety of their public availability, should be managed via a very carefully designed "translucent oversight process", similar to national legislative committees tasked with reviewing highly classified information, but intergovernmental in nature and with much more resilient safeguards for procedural transparency, democratic accountability and abuse prevention.

This translucent oversight process will aim to enable effective and independent review of the source code by a selected and controlled set of expert state delegates and independent ethical researchers, so as to maximize actual and perceived trustworthiness in the eyes of states and citizens.

Tackling the Superintelligence Option

The governance design of the Lab, and of the IGO it will be part of, should consider that its choices will be incredibly impactful for the future of humanity.

Since 2023, three leading US AI labs - OpenAI, Google DeepMind and Anthropic - have all repeatedly declared their aim to build AI that surpasses human-level intelligence in all human tasks, without limits, realizing so-called AGI or Superintelligence.

A global, reckless, winner-take-all race for AGI among a handful of states and their firms, which started with the launch of ChatGPT in 2022, shifted gears again in June 2024, this time in pursuit of Superintelligence and without any meaningful global safeguards against the immense risks to human safety and the immense undemocratic concentration of power. The rapid acceleration of AI capabilities and investments has shortened the timelines for these risks to materialize to just a few years.

Recent statements by the CEO of NVIDIA that he aims to "turn NVIDIA into one giant AI" were followed last week by the launch of a new venture by the former chief scientist of OpenAI to build safe Superintelligence, and by an announcement from the head of a $300 billion public-private fund that it will be fully dedicated to the same.

While recognizing the immense risks to safety, these firms are pressing forward because they perceive the competitive dynamics of the race as unstoppable. Each firm claims that its unique technical strategy may succeed better than others' in ensuring the technical alignment needed to prevent "loss of control", or in producing an outcome more beneficial for humanity than competing initiatives.

While most of them publicly agree that the majority of positive scenarios are those that retain wide human control over AI, many AI scientists privately consider it possible or likely that a loss of human control over AI, a so-called AI takeover, could, under some conditions, be highly beneficial for humanity overall.

While this state of affairs is extremely unsettling, it must be assumed that it also reflects the intention of the US government, given that it has neither stopped nor questioned such publicly declared plans. This is likely due to a perceived AI arms race with China, confidence in behind-the-scenes safety guardianship by its national security agencies, or other motives.

Hence, under possible future scenarios, advances in frontier AI safety and alignment, or an increased risk of other entities releasing more dangerous superintelligences, may lead such IGOs to decide, after wide participation and deliberation, that it is overall most beneficial for humanity to substantially relax their "loss of control" safety requirements, or even to intentionally unleash a Superintelligence.