Linear Digressions

SHAP: Shapley Values in Machine Learning

Shapley values in machine learning are an interesting and useful enough innovation that we figured, hey, why not do a two-parter? Our last episode focused on explaining what Shapley values are: they define a way of assigning credit for an outcome across several contributors, originally developed to understand how impactful different actors are in building coalitions (hence the game theory background), but now being repurposed to quantify feature importance in machine learning models. This episode centers on the computational details that allow Shapley values to be approximated quickly, and on a new package called SHAP that makes all this innovation accessible.
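To make the credit-assignment idea concrete, here is a minimal sketch of the exact Shapley computation the episode describes: each contributor's value is its marginal contribution, averaged over every possible coalition of the other contributors. This brute-force version is exponential in the number of players, which is exactly why the fast approximations (and the SHAP package) matter; the function names and toy value function below are illustrative, not part of SHAP's API.

```python
import math
from itertools import combinations

def shapley_values(players, value):
    """Exact Shapley values: for each player, average its marginal
    contribution value(S + {i}) - value(S) over all coalitions S of
    the other players, weighted by |S|! (n-|S|-1)! / n!.
    Runs in O(2^n), so it is only practical for small n."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                weight = (math.factorial(k) * math.factorial(n - k - 1)
                          / math.factorial(n))
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

# Toy "game" (hypothetical example): the coalition's value is just the
# sum of each member's weight. For an additive game like this, each
# player's Shapley value recovers its own weight exactly.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
phi = shapley_values(list(weights), lambda S: sum(weights[p] for p in S))
```

For an additive game the answer is obvious, which makes it a good sanity check; the interesting cases are value functions with interactions, where the Shapley weighting is what guarantees the credit assigned to all players sums to the value of the full coalition.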

Next Episodes

- AutoML @ Linear Digressions (📆 2018-04-30 04:50 / 00:15:24)
- CPUs, GPUs, TPUs: Hardware for Deep Learning @ Linear Digressions (📆 2018-04-23 04:52 / 00:12:40)
- A Technical Introduction to Capsule Networks @ Linear Digressions (📆 2018-04-16 03:12 / 00:31:28)
- A Conceptual Introduction to Capsule Networks @ Linear Digressions (📆 2018-04-09 03:59 / 00:14:05)