Linear Digressions

Multi-Armed Bandits

Multi-armed bandits: how to take your randomized experiment and make it harder better faster stronger. Basically, a multi-armed bandit experiment allows you to optimize for both learning and making use of your knowledge at the same time. It's what the pros (like Google Analytics) use, and it's got a great name, so... winner! Relevant link: https://support.google.com/analytics/answer/2844870?hl=en
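The episode's core idea, trading off learning (exploration) against using what you've learned (exploitation), can be sketched with a simple epsilon-greedy bandit. This is a minimal illustration, not the specific algorithm Google Analytics uses; the `pull` function, arm payoff rates, and parameter values here are all made-up for the example.

```python
import random

def epsilon_greedy_bandit(pull, n_arms, n_rounds, epsilon=0.1, seed=0):
    """With probability epsilon explore a random arm; otherwise
    exploit the arm with the best estimated reward so far."""
    rng = random.Random(seed)
    counts = [0] * n_arms    # how many times each arm was pulled
    values = [0.0] * n_arms  # running mean reward per arm
    total_reward = 0.0
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore
        else:
            arm = max(range(n_arms), key=values.__getitem__)  # exploit
        reward = pull(arm)
        counts[arm] += 1
        # incremental update of the running mean for this arm
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    return values, total_reward

# Hypothetical experiment: three arms with unknown payoff rates;
# arm 2 is secretly the best.
true_rates = [0.2, 0.5, 0.8]
reward_rng = random.Random(42)
pull = lambda arm: 1.0 if reward_rng.random() < true_rates[arm] else 0.0

values, total = epsilon_greedy_bandit(pull, n_arms=3, n_rounds=2000)
best = max(range(3), key=values.__getitem__)
```

The payoff of the bandit framing is visible in `total`: instead of splitting traffic evenly across all arms for the whole experiment (as a classic A/B/n test would), the algorithm shifts most pulls toward the winning arm while it is still learning.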

Next Episodes

Experiments and Messy, Tricky Causality @ Linear Digressions

📆 2016-03-04 04:54 / 00:16:59


Backpropagation @ Linear Digressions

📆 2016-02-29 04:58 / 00:12:21


Text Analysis on the State Of The Union @ Linear Digressions

📆 2016-02-26 04:51 / 00:22:22


Paradigms in Artificial Intelligence @ Linear Digressions

📆 2016-02-22 05:32 / 00:17:20


Survival Analysis @ Linear Digressions

📆 2016-02-19 04:44 / 00:15:21