Questions explored in this episode:
- Why is YouTube such a great way to communicate research findings?
- Why is AI safety (or alignment) a problem, and why is it an important one?
- Why is the creation of AGI (artificial general intelligence) existentially risky for us?
- Why is it so hard for us to specify what we want in utility functions?
- What are some of the proposed strategies (and their limitations) for controlling AGI?
- What is instrumental convergence?
- What is the unilateralist's curse?
Rob Miles is a science communicator focused on AI safety and alignment. He runs the YouTube channel Rob Miles AI and hosts The Alignment Newsletter Podcast, which presents summaries of each week's research. He also collaborates with research organizations such as the Machine Intelligence Research Institute and the Future of Humanity Institute to help them communicate their work.
📆 2021-05-20 01:59 / ⌛ 01:32:52