These are the headline results of two recent papers — AI and Compute and AI and Efficiency — from the Foresight Team at OpenAI. In today's episode I spoke with one of the authors, Danny Hernandez, who joined OpenAI after helping develop better forecasting methods at Twitch and Open Philanthropy.
Danny and I talk about how to understand his team's results and what they mean (and don't mean) for how we should think about progress in AI going forward.
Debates around the future of AI can sometimes be pretty abstract and theoretical. Danny hopes that providing rigorous measurements of some of the inputs to AI progress so far can help us better understand what causes that progress, as well as ground debates about the future of AI in a better shared understanding of the field.
If this research sounds appealing, you might be interested in applying to join OpenAI's Foresight team — they're currently hiring research engineers.
In the interview, Danny and I (Arden Koehler) also discuss a range of other topics, including:
• The question of which experts to believe
• Danny's journey to working at OpenAI
• The usefulness of "decision boundaries"
• The importance of Moore's law for people who care about the long-term future
• What OpenAI's Foresight Team's findings might imply for policy
• The question of whether progress in the performance of AI systems is linear
• The safety teams at OpenAI and who they're looking to hire
• One idea for finding someone to guide your learning
• The importance of hardware expertise for making a positive impact
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the linked transcript.
Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
📆 2020-05-19 01:32 / ⌛ 01:37:05