Linear Digressions

Game Theory for Model Interpretability: Shapley Values

As machine learning models get into the hands of more and more users, there's a growing expectation that a black box isn't good enough: users want to understand why the model made a given prediction, not just what the prediction is. This is motivating a lot of work on feature importance and model interpretability tools, and one of the most exciting new approaches is based on Shapley Values from game theory. In this episode, we'll explain what Shapley Values are and how they make for a cool approach to feature importance in machine learning.
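
To make the episode's core idea concrete, here is a minimal, hypothetical sketch (not code from the episode) of exact Shapley-value feature attribution for a single prediction: each feature acts as a "player," a coalition's payout is the model's output with only that coalition's features set to their real values (everything else held at a baseline), and a feature's importance is its average marginal contribution across all coalitions. The toy model, instance, and baseline below are made up for illustration; the exact sum enumerates all 2^n feature subsets, which is why practical libraries such as shap rely on approximations.

```python
from itertools import combinations
from math import factorial

import numpy as np


def shapley_values(model_fn, x, baseline):
    """Exact Shapley values for one prediction (illustrative toy only).

    model_fn : callable mapping a 1-D feature vector to a scalar prediction
    x        : the instance being explained
    baseline : reference values standing in for "absent" features
    """
    n = len(x)
    phi = np.zeros(n)

    def value(subset):
        # Coalition value: features in `subset` keep their true values,
        # all other features are replaced by the baseline.
        z = np.array(baseline, dtype=float)
        for j in subset:
            z[j] = x[j]
        return model_fn(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi


# Hypothetical toy model with an interaction between features 0 and 2.
model = lambda z: 3 * z[0] + 2 * z[1] + z[0] * z[2]
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)

print(shapley_values(model, x, baseline))
# -> [4.5 4.  1.5]; the attributions sum to f(x) - f(baseline) = 10,
#    and the feature 0/2 interaction term (worth 3) is split evenly between them.
```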

Next Episodes

AutoML @ Linear Digressions

📆 2018-04-30 04:50 / 00:15:24


CPUs, GPUs, TPUs: Hardware for Deep Learning @ Linear Digressions

📆 2018-04-23 04:52 / 00:12:40


A Technical Introduction to Capsule Networks @ Linear Digressions

📆 2018-04-16 03:12 / 00:31:28


A Conceptual Introduction to Capsule Networks @ Linear Digressions

📆 2018-04-09 03:59 / 00:14:05


Convolutional Neural Nets @ Linear Digressions

📆 2018-04-02 03:40 / 00:21:55