Testimonials
AI has massive potential for societies and economies. Yet amid all the hype, we must move forward carefully and deliberately to ensure we reap the benefits while managing the risks. The coalition will play an important role in this process.
- Jennifer Blanke, former Chief Economist of the World Economic Forum
I strongly believe in the mission and goal of this coalition, and in the urgency of establishing broadly adopted norms to restrain research and development that could produce AGI/AI operating against human values and interests, potentially posing significant existential threats to humanity. Given this imperative, I very much wish to contribute my time, experience and expertise to advancing these efforts.
- Mark Barwinski, former Global Head of Cyber Operations at UBS and former official in the TAO unit of the US National Security Agency
This Coalition responds on behalf of civil society to the greatest challenge humanity has faced since the first nuclear explosion in the New Mexico desert. AI presents opportunities for enhancing the human condition alongside grave risks.
- Richard Falk, Emeritus Professor at Princeton University and a leading expert in the democratization of global governance
Given that the Baruch Plan was a diplomatic failure, I don’t think it’s a good positive example to copy. The Montreal Protocol is a better precedent. But, in broad strokes, I do see an international agreement of some kind that sets limits on certain directions of AI R&D to be very likely necessary for humanity’s survival, so we do have that as common ground. I would like to offer 2 hours per month of my time, if it is useful for you, to give advice from a technical perspective, as I direct one of the largest state-level research efforts to construct safe AI.
- David “Davidad” Dalrymple, Program Director at UK’s ARIA and lead coordinating scientist behind the Guaranteed Safe AI movement.
Globally coordinated action to provide guardrails against the dangerous deployment of advanced AI is one of civilisation's highest priorities.
- David Wood, Chair of the London Futurists
Having worked in biological and chemical weapons control, I know the best time for conversations about AI governance was a decade ago. The next best time is now. We urgently need meaningful international engagement, neither funded nor driven by tech companies, to ensure that we create a fair, sustainable and equitable future.
- Kobi Leins, Fellow of the United Nations Institute for Disarmament Research and advisory board member of the Carnegie Artificial Intelligence and Equality Initiative (AIEI)
The fair, open and responsible use of AI technology is increasingly challenged by rapid technological progress without sufficient focus on AI safety and accountability. The Coalition's ambition not only to manage potential risks, but also to unlock innovation and realise opportunities, shows a strong commitment to the responsible use of AI. In particular, the establishment of the Global Public Benefit AI Lab is a key pillar for ensuring equitable access to key technologies and skills, and can enable technology alignment globally.
- Luka Poehler, Lead AI Solutions Engineer at a leading EU AI lab, Member of the Board of Advisors of the Carnegie Council for Ethics in International Affairs, previously at the UN and McKinsey