Why governing AI is crucial to human survival | Allan Dafoe | Big Think

Published on Jan 8, 2021

Watch the newest video from Big Think: https://bigth.ink/NewVideo
Learn skills from the world's top minds at Big Think Edge: https://bigth.ink/Edge
----------------------------------------------------------------------------------
The question of conscious artificial intelligence dominating future humanity is not the most pressing issue we face today, says Allan Dafoe of the Centre for the Governance of AI at Oxford's Future of Humanity Institute. Dafoe argues that AI's power to generate wealth should make good governance our primary concern.

With thoughtful systems and policies in place, humanity can unlock the full potential of AI with minimal negative consequences. Drafting an AI constitution would also provide an opportunity to learn from the mistakes of past governance structures and avoid future conflicts.

Building a framework for governance will require us to get past sectarian differences and interests so that society as a whole can benefit from AI in ways that do the most good and the least harm.
----------------------------------------------------------------------------------
ALLAN DAFOE:

Allan Dafoe is an associate professor in the International Politics of AI and director of the Centre for the Governance of AI at the Future of Humanity Institute at the University of Oxford. He specializes in AI governance, AI race dynamics, and the international politics of AI. Dafoe's prior work centered on the causes of the liberal peace and the role of reputation and honor as motives for war.
----------------------------------------------------------------------------------
TRANSCRIPT:

ALLAN DAFOE: AI is likely to be a profoundly transformative general purpose technology that changes virtually every aspect of society, the economy, politics, and the military. And this is just the beginning. The issue doesn't come down to consciousness or "Will AI want to dominate the world or will it not?" That's not the issue. The issue is: "Will AI be powerful and will it be able to generate wealth?" It's very likely that it will be able to do both. And so just given that, the governance of AI is the most important issue facing the world today and especially in the coming decades.

My name is Allan Dafoe. I am the director of the Centre for the Governance of AI at the Future of Humanity Institute at the University of Oxford. The core part of my research is to think about the governance problem with respect to AI. This is the problem of how the world can develop AI in a way that maximizes the benefits and minimizes the risks.

NARRATOR: So why is it so important for us to govern artificial intelligence? Well, first, let's just consider how natural human intelligence has impacted the world on its own.

DAFOE: In many ways it's incredible how far we've gone with human intelligence. This human brain, which had all sorts of energy constraints and physical constraints, has been able to build up this technological civilization, which has produced cellphones and buildings, education, penicillin, and flight. Virtually everything that we have to be thankful for is a product of human intelligence and human cooperation. With artificial intelligence, we can amplify that and eventually extend it beyond our imagination. And it's hard for us to know now what that will mean for the economy, for society, for the social impacts and the possibilities that it will bring.

NARRATOR: AI isn't the first technology our society has had to grapple with governing. In fact, many technologies, like cars, guns, radio, and the internet, are already subject to governance. What sets AI apart is the kind of impact it can have on society and on every other technology it touches.

DAFOE: So if we govern AI well, there are likely to be substantial advances in medicine and transportation, and [AI will] help us reduce global poverty and address climate change. The problem is, if we don't govern it well, it will also produce negative externalities in society. Social media may make us more lonely, self-driving cars may cause congestion, and autonomous weapons could create risks of flash escalation and war or other kinds of military instability. So the first layer is to address these unintended consequences of the advances in AI that are emerging. Then there's a bigger challenge facing the governance of AI, which is really the question: where do we want to go?

NARRATOR: The way we structure our governance of AI is crucial, perhaps even to the survival of our species. Given how impactful this technology can be, any system that governs its use must be carefully constructed.

DAFOE: There are many examples where a society has stumbled into very harmful situations—World War I perhaps being one of the more illustrative ones—where no one leader really wanted to have this war but, nevertheless, they were...

To read the full transcript, please visit https://bigthink.com/videos/ai-govern...
