Reflections
AI Governance Talks co-hosted an event with the Foresight Institute on Tuesday 4th February.
The topic was defensive acceleration; I was motivated to run the event off the back of a blog post I wrote on the subject.
What is defensive acceleration?
I can’t see a world in which we indefinitely constrain the capabilities of advanced AI systems: there are too many jurisdictions, and many dangerous capabilities can also be useful. Nor do I want a world where diffusion is indefinitely controlled, because of the benefits of diffusion and the risk of power concentration.
What does widespread diffusion of capable AI systems mean for AI safety? Because I think dangerous capabilities will probably proliferate one day, whether through open-weight releases or deployment by the US or China, we need to prepare by building a society that is robust to the existence of those capabilities.
Defensive acceleration means bringing defensive interventions forward in time, relative to the risk-increasing technologies they defend against. (Read more)
The event attracted a broad audience, from civil servants to entrepreneurs to tech workers. What felt clear to me is that people were looking for somewhere to turn. AI is accelerating at an unthinkable pace, and governments are slow to act. Time’s almost up for AI policy, so what do we do instead? People find hope in defensive acceleration.
Admittedly, I feel slightly conflicted about defensive acceleration. My worst fear is that it’s co-opted by AI developers as an excuse to shirk responsibility: “defensive technology means we don’t need to regulate, right?”. To be absolutely clear: defensive acceleration complements, rather than replaces, AI policy and legislation.
Takeaways
I didn’t get to speak to everyone, but I had a couple of takeaways from those I did speak to:
- People have energy to put into defensive acceleration. People want somewhere to turn, something to do to help society manage the AI transformation that may be coming. We could run more events like this and make much more progress.
- People want examples of defensive acceleration. While our talks were well received, people wanted more concrete examples, for two reasons: first, to scrutinise whether the idea has merit; second, to know what to build.
Event Details
Description:
How can we bolster society against growing risks posed by the widespread diffusion of AI, while staying optimistic about its benefits?
Join us as we explore defensive acceleration – a strategy introduced by Vitalik Buterin in “My Techno-Optimism”. Defensive acceleration is about developing technologies that protect us from the biggest threats we face, including pandemics, cybercrime, advanced AI, and nuclear war. It reconciles technological optimism with a serious approach to handling potentially dangerous capabilities, focusing on technology that mitigates risks and reliably makes the world better.
This is your chance to contribute your ideas, and to meet others interested in the growing community surrounding defensive acceleration. What technologies are most vital to develop? What policies do we need in place to make it happen?
Speakers:
- James Richards, def/acc Investor at Entrepreneur First
- Jamie Bernardi, AI governance researcher and author of A Policy Agenda for Defensive Acceleration
- Herbie Bradley, PhD student, Cambridge; formerly UK AI Safety Institute. On what we can expect from the next few years of US AI policy, and the implications for defensive acceleration.
- Catalin Mitelut, Netholabs. On advancing whole-brain neuroscience for secure AI.
Location: Newspeak House, which kindly provided its space for this event.
Hosted by:
- Foresight Institute: Supporting the development of transformative technologies to make great futures more likely.
- AI Governance Talks: Bringing together London’s community of policymakers, academics and professionals working on the governance of frontier AI systems.
