Linear Digressions

KL Divergence

Kullback-Leibler divergence, or KL divergence, is a measure of information loss when you try to approximate one distribution with another distribution. It comes to us originally from information theory, but today underpins other, more machine-learning-focused algorithms like t-SNE. And boy oh boy can it be tough to explain. But we're trying our hardest in this episode!
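
For the curious, here's a minimal sketch of the computation in plain Python with NumPy. This isn't code from the episode, and the two distributions are made up for illustration:

import numpy as np

# A "true" distribution p and an approximating distribution q
# over the same three outcomes (made-up numbers for illustration).
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

# KL(p || q) = sum_i p_i * log(p_i / q_i): the expected extra
# information (in nats) incurred by using q to stand in for p.
kl_pq = np.sum(p * np.log(p / q))
kl_qp = np.sum(q * np.log(q / p))

print(f"KL(p || q) = {kl_pq:.4f} nats")
print(f"KL(q || p) = {kl_qp:.4f} nats")  # not the same: KL is asymmetric

Note that KL(p || q) and KL(q || p) generally differ, which is one reason it's called a divergence rather than a true distance.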

Next Episodes

Sabermetrics @ Linear Digressions

📆 2017-07-31 03:15 / ⌛ 00:25:48


What Data Scientists Can Learn from Software Engineers @ Linear Digressions

📆 2017-07-24 03:52 / ⌛ 00:23:46


Software Engineering to Data Science @ Linear Digressions

📆 2017-07-17 04:36 / ⌛ 00:19:05


Re-Release: Fighting Cholera with Data, 1854 @ Linear Digressions

📆 2017-07-10 02:19 / ⌛ 00:12:04


Re-Release: Data Mining Enron @ Linear Digressions

📆 2017-07-02 19:53 / ⌛ 00:32:16