The Global Public Benefit AI Lab
(This text details the functions, aims, and challenges of one of the three agencies of the proposed global IGO for AI, with the aim of informing the Mandate of the Open Intergovernmental Constituent Assembly for AI.)
The Global Public Benefit AI Lab and its ecosystem will constitute a public-private consortium, eventually costing at least several hundred billion dollars, that will aim to achieve in a very short time a solid global decentralized leadership or co-leadership in human-controllable AI capabilities, technical alignment research, and AI safety measures. It will incrementally accrue the capabilities and resources of member states and distribute dividends and control to member states and directly to their citizens, all the while stimulating and safeguarding private initiatives for innovation and oversight.
___
The Mandate of the Open Intergovernmental Constituent Assembly for AI envisioned by the Coalition to realize a "Baruch Plan for AI" will need to include the creation of a state-of-the-art global AI lab to jointly develop and regulate the most capable safe AI technologies - akin to what the Baruch Plan foresaw for atomic technologies in 1946. We will refer to such a lab as the Global Public Benefit AI Lab (or "Lab").
The Lab will be an open, public-private, democratically-governed consortium, aimed at achieving and sustaining a solid global leadership or co-leadership in human-controllable AI capabilities, technical alignment research, and AI safety measures. It can be conceived as a sort of "global Airbus for AI", or a much stronger and more extensive global "CERN for AI". It will cost at least several hundred billion dollars over a few years to achieve financial viability.
Why is such a Lab needed to globally master the risks of AI, both to human safety and of an unaccountable concentration of power and wealth, and to realize its positive potential?
As with the Baruch Plan in 1946, the joint pursuit and exploitation of such capabilities can become a major incentive for states to adhere to extensive non-proliferation rules;
Globally regulating and enforcing bans on dangerous AIs may require expertise that can only be achieved by developing the most advanced AI while doing so safely;
Only by realizing a Global AI Lab can we ensure that the benefits of safe AI will be widely and equitably shared.
The inclusion of such a Lab in the Mandate of the Assembly, and its early development even before the Assembly is held, is pivotal to entice small and medium states to join, by providing them with massive economic and digital sovereignty advantages even before AI superpowers join in. Regional IGOs could play a key role in managing the Lab's shared facilities, infrastructure, and data located in their regions.
Such a Lab would be synergistic with, and dependent on, the IT Security and AI Safety Agencies also foreseen in the Mandate of the Constituent Assembly, in order to (1) ensure its utmost safety and accountability, and (2) prevent it (in addition to its openness) from becoming just another frontier AI initiative in a reckless AI race to the brink.
At A Glance
The Lab will accrue the capabilities and resources of member states and participating firms, redistributing dividends and control both to member states and directly to citizens, while stimulating and safeguarding the initiative of state and private firms for innovation and oversight.
The Lab is one of three agencies of a new intergovernmental organization foreseen by the Mandate of the Open Intergovernmental Constituent Assembly for AI. This venture aims to catalyze a critical mass of globally diverse states into a global constituent process to build a new democratic IGO and a consortium to jointly build the most capable safe AI technologies, mitigate the harm from misuse of existing tools, and reliably ban unsafe ones. Participation is open to all states and firms, on an equal footing.
The Lab will have the goal of building safe, reliable, and controllable AI systems, which can be used to advance scientific understanding and improve human well-being while remaining entirely under human control, without developing superintelligence, which by its nature would behave unpredictably and pose immense catastrophic risks to the human race. While the Lab will be the leading source of safe AI innovation, it will operate under strict protocols so as not to contribute, even indirectly, to unsafe AI development.
The Lab will aim to achieve and sustain a resilient "mutual dependency" within its broader supply chain relative to superpowers and future public-private consortia. This will be accomplished through joint investments, diplomacy, trade relations, and strategic industrial assets of participant states. Additionally, the Lab will remain open to merging with these entities into a unified global organization and a unified global AI lab to ensure AI remains under human control.
The Lab will require an investment of at least several hundred billion dollars, primarily funded via project financing, buttressed by pre-licensing and pre-commercial procurement agreements from participating states and client firms.
Precedents and Models
The initiative draws inspiration from the $20 billion International Thermonuclear Experimental Reactor (ITER), a multinational consortium focused on nuclear fusion energy. Unlike ITER, the Lab will work on current state-of-the-art generative AI technologies, which are already known to yield massive exploitable capabilities and are expected to deliver substantial economic benefits within a predictable timeframe.
Our initiative is also inspired by CERN, a joint venture established in 1954 by European states to advance nuclear research, which later welcomed participation from non-European countries. With an annual budget of about $1.2 billion, CERN serves as a model, along with other multinational infrastructure projects ranging from dam construction to the International Space Station (ISS).
Yet the most appropriate inspiration for the Lab, and for the Initiative more broadly, is the 1946 Baruch Plan. This proposal by the United States to the United Nations suggested the formation of a global multinational consortium and organization to centralize the development of, and research on, nuclear weapons and nuclear energy generation, and it came close to achieving its ambitious goals. In industrial terms, perhaps the most fitting example is Airbus: we are, in effect, building a global Airbus for AI.
Financial Viability and the Project Finance Model
The Lab will generate revenue from governments, firms, and citizens via the licensing of enabling back-end services and IP, access to a curated, compliant, and globally representative database, the leasing of infrastructure, direct services, and the issuance of compliance certifications. With pre-commercial procurement agreements as the largest revenue driver at launch, significant additional revenue streams from the private sector and citizens are expected thereafter, exceeding several tens of billions of dollars after 3 to 4 years, with strong growth rates.
Fig.: Revenue breakdown for the Global Public Benefit AI Lab in a base scenario, highlighting the minimum member contributions required to cover yearly costs until additional revenue streams lead to break-even (after six years)
Considering the proven scalability, added value, and profit potential of current open-source LLM technologies, coupled with the potential for pre-commercial procurement contracts with states to support its financial viability, the initial funding for this project could predominantly follow the project finance model. This funding could be sourced from sovereign and pension funds, intergovernmental sovereign funds, sovereign private equity, and private international finance.
Approximately two-thirds of the first year's investment will go to hardware buildup, model training, and hosting. The remainder of the budget will be divided roughly equally among data acquisition, hardware research, and salaries. The minimum annual investment from member states will be around 0.8% of their GDP. The undue influence of private funding sources on governance will be limited via various mechanisms, including non-voting shares.
Fig.: Yearly cost and revenue streams of the Global Public Benefit AI Lab in a base scenario, with model training and infrastructure build-up as the biggest cost driver
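To make the budget split and the member-state contribution rule above concrete, here is a minimal illustrative sketch. The total budget and GDP figures used are placeholders for illustration only, not estimates from this document.

```python
# Illustrative sketch of the first-year budget split and member contributions
# described above. All absolute dollar figures passed in are placeholders.

def first_year_budget(total_usd: float) -> dict:
    """Split the first-year budget: roughly two-thirds to hardware buildup,
    model training, and hosting; the remainder divided roughly equally among
    data acquisition, hardware research, and salaries."""
    compute = total_usd * 2 / 3
    remainder = total_usd - compute
    return {
        "hardware_buildup_training_hosting": compute,
        "data_acquisition": remainder / 3,
        "hardware_research": remainder / 3,
        "salaries": remainder / 3,
    }

def member_contribution(gdp_usd: float, rate: float = 0.008) -> float:
    """Minimum annual member-state contribution at around 0.8% of GDP."""
    return gdp_usd * rate

# Example: with a hypothetical $300B first-year budget and a member state
# with a $500B GDP (which would contribute at least ~$4B per year).
print(first_year_budget(300e9))
print(member_contribution(500e9))
```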
A Public-Private Partnership Model
Participant AI labs will contribute their expertise, workforce, and a portion of their intellectual property, including curated data, in a manner that promotes their dual objectives: benefiting humanity and enhancing their stock valuations. Additionally, they will maintain their ability to innovate at both the foundational and application levels, all within established safety parameters.
Participant AI labs would join as innovation and go-to-market partners, within a consortium democratically managed by the participant states. This arrangement is open to all labs and states. Early participants will be granted considerable, but temporary, economic and decision-making advantages:
As innovation partners and IP providers, they will be compensated through revenue sharing, secured via long-term pre-licensing, access to curated data and compute resources, and pre-commercial procurement agreements from participating states and firms.
As go-to-market partners, they will gain permanent access to the core AI capabilities, infrastructure, services, and IP developed by the Lab.
These capabilities and IP will, aspirationally, far outcompete all others in capability and safety, and be unique in the actual and perceived trustworthiness of their safety, security, and democratic accountability.
Participant AI Labs will maintain the freedom to innovate at both the base and application levels and retain their ability to offer services to states, firms, and consumers, within some limits.
The partnership terms for AI labs will be strategically designed to maximize the potential for consistent growth in their market valuation. This approach aims to attract the involvement of AI labs, including major technology firms, typically structured as US for-profit entities whose boards and executives are bound to prioritize shareholder value. The proposed structure ensures alignment with their governance models and incentivizes their participation.
This framework will enable such labs to continue to innovate in both capabilities and safety across foundational and application layers while avoiding the unacceptable risk of developing uncontrollable superintelligence. It is designed to steer clear of race-to-the-bottom scenarios among states and labs, thereby advancing both their mission and market valuation in a structured and responsible manner.
Size of the Initial Funding
Given that the costs of state-of-the-art LLM "training runs" are expected to grow by at least 300-700% annually, and that several leading US AI labs have announced billion-dollar LLM training runs for next year, and likely ten-billion-dollar runs for the year after, the Lab would require an initial investment of at least several hundred billion dollars.
This substantial funding is essential for the Lab to effectively meet its capacity and safety goals and achieve financial independence within 3-4 years. To maintain its position at the forefront of technology, this amount will need to increase substantially for every year the Lab is delayed, unless significantly more efficient and advanced AI training and inference architectures become available or are discovered by the Lab.
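As a rough illustration of why delay compounds the required investment, the sketch below projects a training-run cost under the growth range cited above. The base cost and growth rate are assumptions for illustration, not figures from this document.

```python
# Rough projection sketch (illustrative assumptions only): how the cost of a
# state-of-the-art training run compounds under a fixed annual growth rate,
# and hence how the required investment grows for each year of delay.

def projected_training_cost(base_cost_usd: float, annual_growth: float, years: int) -> float:
    """Compound a training-run cost at a fixed annual growth rate.
    annual_growth = 3.0 means +300% per year (i.e., a 4x increase each year)."""
    return base_cost_usd * (1 + annual_growth) ** years

# Example: a $1B training run growing at +300% per year reaches ~$4B after one
# year and ~$16B after two years of delay.
for years_of_delay in range(4):
    print(years_of_delay, projected_training_cost(1e9, 3.0, years_of_delay))
```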
Supply-Chain Viability and Control
Acquiring and sustaining access to the specialized AI chips essential for running leading-edge large language model (LLM) training runs is expected to be challenging due to the anticipated surge in global demand and the implementation of export controls.
This risk can likely be mitigated through joint diplomatic dialogue that emphasizes the open and democratic principles of the initiative. Additionally, the initiative can further secure its position by engaging states that host firms developing advanced AI chip designs or rare or unique AI chip fabrication equipment (such as the Netherlands, Germany, and South Korea), or by pursuing its own AI chip designs and manufacturing capabilities. Investing in newer, safer, and more powerful AI software and hardware architectures, beyond large language models, will also strengthen the initiative's technological foundation and resilience.
Ensuring that member states have access to adequate and climate-neutral energy sources, appropriate data centers, and resilient network architecture will require timely and coordinated action in the short term and careful planning for the long term.
Consequently, the Lab will seek to achieve and sustain a resilient "mutual dependency" in its wider supply chain vis-a-vis superpowers and future public-private consortia. This will be pursued through joint investments, diplomatic engagements, trade relations, and strategic industrial assets of participant states. Additionally, the Lab remains open to merging with these entities into a unified global organization and a single global AI lab, dedicated to achieving AI that is narrow and safe, yet transformative, as detailed in this recent article on The Yuan by the author.
Talent Attraction Feasibility
Achieving and maintaining a decisive advantage in advanced AI capability and safety relies on attracting and retaining top AI talent and experts. This is particularly crucial if, or until, AI superpowers and their firms become involved. Success in this area entails not only engaging individuals but also securing the participation of leading AI labs.
Talent attraction in leading-edge AI is driven by compensation, social recognition, and mission alignment and requires very high security and confidentiality.
Staff will be paid at their current global market rates, and their social importance will be highlighted through proper communication. Member states will be mandated to support top-level recruitment and to enact laws that ensure that knowledge gained is not leaked. Additionally, staff selection and oversight procedures will surpass the rigor found in the most critical nuclear and biosafety facilities.
The unique mission and democratic nature of the Lab would likely be perceived by most top global AI researchers, even in non-member states, as ethically superior to other initiatives. This perception mirrors how OpenAI originally, and Meta more recently, successfully attracted top talent through their commitment to an "open-source" ethos. This advantage is particularly significant given existing concerns regarding the trustworthiness of the leadership and governance structures of leading US AI labs.
Just as OpenAI attracted top talent from DeepMind thanks to a mission and approach perceived as superior, top talent from OpenAI went on to create Anthropic for the same reasons. The Lab should be able to attract top talent as the next "most ethical" AI project. An aspired positive side effect would be the proliferation of the Lab's spirit of collaborative, mission-driven, and ethical AI development into the tech community, through participation in conferences, research stays, and members' engagement in open-source projects.
Substantial risks of authoritarian political shifts in some AI superpowers, as warned of by Yoshua Bengio (1.5 min video clip), could attract top talent to the Global AI Lab, so that their work does not become instrumental to a future national or global authoritarian regime.
Open Source, "Translucency" and Public Safety
The new organization will need to define its approach to the public availability of the source designs of critical AI technologies. Such availability can bring huge advantages and immense risks, depending on the circumstances, and needs to be carefully regulated; yet the public debate currently frames it, quite simplistically, as a binary "all open" or "all closed" choice.
A sensible approach to open source, we believe, will be to require it in nearly all critical software and hardware stacks of an AI system or service, as a complement to extremely trustworthy and transparent socio-technical systems and procedures around them.
Yet open source alone is insufficient to ensure that the code is both sufficiently trustworthy and widely trusted by users, states, and citizens. Therefore, all source code of critical components will also be required to undergo an extreme level of security review, proportionate to its complexity and intended use, performed by a diverse set of incentive-aligned experts. Challenges identified during reviews could be addressed directly by Lab members by contributing to open-source projects and engaging with the community.
Also, none of the current open-source licenses (not even the GNU Affero GPLv3) requires those running open-source code on server infrastructure - such as an AI lab providing its service via apps, a web interface, or an API - to publicly provide (sufficiently trustworthy) proof that the copy of the code downloaded by an end-user matches the one actually being run. This will need to be ensured by other means.
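A minimal sketch of what such a check could look like from the end-user's side is given below: the client compares the digest of the publicly released source archive against a digest the service attests to running. The endpoint URL, JSON field, and file name are hypothetical, and a real deployment would need signed attestations (for example, remote attestation or a transparency log) rather than a bare self-reported digest for the proof to be trustworthy.

```python
# Illustrative sketch only: verify that the digest a service attests for its
# running code matches the digest of the publicly released source archive.
# The endpoint and file paths below are hypothetical placeholders.
import hashlib
import json
import urllib.request

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a local file (e.g., the published source archive)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def fetch_attested_digest(url: str) -> str:
    """Fetch the digest the service claims to be running (hypothetical JSON endpoint)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["source_sha256"]

if __name__ == "__main__":
    published = sha256_of_file("lab-release-v1.0.tar.gz")  # archive the end-user downloaded
    attested = fetch_attested_digest("https://api.example.org/attestation")  # hypothetical endpoint
    print("match" if published == attested else "MISMATCH: served code differs from published source")
```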
That said, there will be exceptions for components whose proliferation could cause very substantial safety risks, such as dangerously powerful LLM weights, which pose risks not only if published but also if hacked or leaked.
The trustworthiness of such components, as well as the safety of their public availability, should be managed via a very carefully designed "translucent oversight process", similar to national legislative committees tasked with reviewing highly classified information, but intergovernmental in nature and with much more resilient safeguards for procedural transparency, democratic accountability, and abuse prevention.
This translucent oversight process will aim to maximize effective and independent review of source code, model weights, and datasets by a selected and controlled set of expert delegates of states and independent ethical researchers, so as to maximize actual and perceived trustworthiness among states and citizens.
Read More
If you agree with and support the above, we have a 70-plus-page Case for a Coalition for a Baruch Plan for AI (live draft v1.0) PDF in the works with over a dozen contributors, as well as a 500-word Open Call for a Coalition for a Baruch Plan for AI being finalized by December 10th. To review them or contribute, email us at cbpai@trustlesscomputing.org or join us by filling out one of the forms below.
Join Us
We invite you to join us: