Linear Digressions

Model Interpretation (and Trust Issues)

Machine learning algorithms can be black boxes: inputs go in, outputs come out, and what happens in the middle is anybody's guess. But understanding how a model arrives at an answer is critical for interpreting the model, and for knowing whether it's doing something reasonable (one could even say... trustworthy). We'll talk about a new algorithm called LIME (Local Interpretable Model-agnostic Explanations), which explains an individual prediction from any model by fitting a simple, interpretable model to the black-box model's behavior in the neighborhood of that prediction.

Relevant links:
http://arxiv.org/abs/1602.04938
https://github.com/marcotcr/lime/tree/master/lime
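
To give a concrete feel for the workflow, here is a minimal sketch using the lime package linked above together with scikit-learn. The iris dataset and random forest classifier are illustrative stand-ins, not anything from the episode or the paper.

# A minimal sketch of explaining one prediction with LIME, using the
# lime package linked above. The dataset and classifier are illustrative
# stand-ins (scikit-learn's iris data and a random forest).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(iris.data, iris.target)

# LIME perturbs the instance, queries the black-box model on the perturbed
# samples, and fits a weighted linear model to approximate it locally.
explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
)

explanation = explainer.explain_instance(
    iris.data[0], model.predict_proba, num_features=4
)
# Each (feature, weight) pair shows how much that feature pushed this
# particular prediction up or down in the local neighborhood.
print(explanation.as_list())

The output is a ranked list of the features that mattered most for this one prediction, which is exactly the kind of local, human-readable explanation the paper is after.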

Next Episodes

Updates! Political Science Fraud and AlphaGo @ Linear Digressions

📆 2016-04-18 04:48 / 00:31:43


Ecological Inference and Simpson's Paradox @ Linear Digressions

📆 2016-04-11 04:43 / 00:18:32


Discriminatory Algorithms @ Linear Digressions

📆 2016-04-04 04:30 / 00:15:21


Recommendation Engines and Privacy @ Linear Digressions

📆 2016-03-28 04:46 / 00:31:33