Live: Eliezer Yudkowsky - Is Artificial General Intelligence too Dangerous to Build?
Center for the Future Mind
1.48K subscribers
56,700 views

Streamed live on Apr 19, 2023

Live from the Center for the Future Mind and the Gruber Sandbox at Florida Atlantic University, join us for an interactive Q&A with Yudkowsky about AI safety!

Eliezer Yudkowsky discusses his rationale for ceasing the development of AIs more sophisticated than GPT-4. Dr. Mark Bailey of the National Intelligence University will moderate the discussion.

An open letter published on March 22, 2023 calls for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." In response, Yudkowsky argues that this proposal does not do enough to protect us from the risks of losing control of superintelligent AI.

Eliezer Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field of alignment.
Dr. Mark Bailey is the Chair of the Cyber Intelligence and Data Science Department, as well as the Co-Director of the Data Science Intelligence Center, at the National Intelligence University.

