Read the full transcript here.
How has utilitarianism evolved from early Chinese Mohism to the formulations of Jeremy Bentham and John Stuart Mill? On what points did Bentham and Mill agree and disagree? How has utilitarianism shaped Effective Altruism? Does utilitarianism only ever evaluate actions, or does it also evaluate people? Does the "veil of ignorance" actually help to build the case for utilitarianism? What's wrong with just trying to maximize expected value? Does acceptance of utilitarianism require acceptance of moral realism? Can introspection change a person's intrinsic values? How does utilitarianism intersect with artificial intelligence?
Tyler John is a Visiting Scholar at the Leverhulme Centre for the Future of Intelligence and an advisor to several philanthropists. His research interests are in leveraging philanthropy for the common good, ethics for advanced AI, and international AI security. Tyler was previously the Head of Research and Programme Officer in Emerging Technology Governance at Longview Philanthropy, where he advised philanthropists on over $60m in grants related to AI safety, biosecurity, and long-term economic growth trajectories. Tyler earned his PhD in philosophy from Rutgers University–New Brunswick, where he researched mechanism design to promote the interests of future generations, political legitimacy, rights and consequentialism, animal ethics, and the foundations of cost-effectiveness analysis. Follow him on X / Twitter at @tyler_m_john.