Mandate of the Constituent Assembly

While the governance structure and statute of the envisioned global intergovernmental organization ("IGO") for AI cannot be pre-designed, as they will be the outcome of the proposed Open Intergovernmental Constituent Assembly for AI, it is of paramount importance that the Mandate of such an Assembly (in addition to its Rules) is fitting for the kind of organization that is needed.

The Mandate of such an Assembly should ensure that the resulting IGO is (1) properly empowered and resourced to realize and equitably share the most advanced safe AI, and to reliably ban unsafe AI; and (2) highly federal, neutral, resilient, participatory, democratic, and decentralized, with highly effective checks and balances and safeguards against degeneration or undue concentration of power.

Given the inherently global nature of AI's threats and opportunities, the scope of such an organization will necessarily need to include: (1) the setting of globally trusted AI safety standards; (2) the development of world-leading safe AI and AI safety capabilities; (3) the enforcement of global bans on unsafe AI development and use; and (4) the development of globally trusted governance-support systems.

To ensure it is bold and participatory enough, its name could be derived from that of the International Atomic Development Authority, the IGO envisioned by the Baruch Plan for nuclear technology, and therefore be the International AI Development Authority (or IAIDA).

We provisionally group the required functions in three agencies of a single IGO:

  1. A Global Public Benefit AI Lab and ecosystem will constitute a public-private consortium, costing at least several hundred billion dollars, that will aim to achieve in a very short time a solid, globally decentralized leadership or co-leadership in human-controllable AI capability, technical alignment research, and AI safety measures. It will accrue the capabilities and resources of member states and distribute dividends and control to member states and directly to their citizens, all the while stimulating and safeguarding private initiatives for innovation and oversight;

  2. An AI Safety Agency will set global safety standards and enforce a worldwide ban on all development, training, deployment, and research of dangerous AI, to sufficiently mitigate the risk of loss of control or of severe abuse by irresponsible or malicious state or non-state entities;

  3. An IT Security Agency will develop radically more secure and globally trusted "governance-support systems" and their certification mechanisms, in order to (1) enable the global control and compliance mechanisms needed to guard against safety and proliferation risks, and (2) enable effective treaty-making via widely trusted secure, confidential, pseudonymous, and diplomatic communications and deliberations.

(Full details on the Global Public Benefit AI Lab, the AI Safety Agency, and the IT Security Agency are found in the Case for a Coalition for a Baruch Plan for AI.)

Far from being a fixed blueprint, this proposal aims to fill a glaring gap: the near absence of detailed, comprehensive, and far-reaching proposals and analyses of the scope and functions needed of such an organization. Partial exceptions exist, such as the July 2023 paper by Google DeepMind and other researchers, a comprehensive, far-reaching, and well-researched proposal, as well as proposals from NGOs, such as the PauseAI Proposal, A Narrow Path, and AItreaty.org. Given this gap, we have taken it upon ourselves to create one below.

Our proposal aims to stimulate the production of other similarly comprehensive proposals, so as to foster concrete, cogent, transparent, efficient, and timely negotiations among nations, leading up to such an Open Intergovernmental Constituent Assembly for AI and eventually arriving at single-text procedure negotiations.

Why should we have a single IGO for AI? Given the unique nature and scope of the challenge ahead, it is unfitting to view different models for global AI governance - such as an "IAEA for AI", an "IPCC for AI", a "CERN for AI", an "International Civil Aviation Organization for AI", or a "global AI lab" - as alternatives to one another, as several such approaches are needed. Each such new IGO for AI would have significant interdependencies with the others, so the analysis of a state's advantage in participating in one IGO must be assessed in relation to its participation in the others. For example, the participation of states in the IAEA, with its commitments to non-proliferation and submission to stringent oversight, was largely achieved by offering access and technical support for harnessing advanced nuclear energy technology.