Linear Digressions

Thresholdout: Down with Overfitting


Overfitting to your training data can be avoided by evaluating your machine learning algorithm on a holdout test dataset, but what about overfitting to the test data? It turns out that this is easy to do, and you have to be careful to avoid it. But an algorithm from the field of privacy research shows promise for keeping your test data safe from accidental overfitting.
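The privacy-inspired algorithm discussed in the episode, Thresholdout, can be sketched roughly as follows. The idea is to answer queries about the holdout set through a noisy filter: if the training and holdout estimates roughly agree, only the training estimate is released, so the holdout leaks no information; noise is added only when they disagree. The parameter names and default values below are illustrative, not the exact constants from the original paper.

```python
import numpy as np

def thresholdout(train_vals, holdout_vals, threshold=0.04, sigma=0.01, rng=None):
    """Noisy-threshold sketch of Thresholdout.

    train_vals / holdout_vals: per-example values of the statistic being
    queried (e.g. 0/1 losses of a candidate model) on each dataset.
    Returns an estimate that only "spends" the holdout set when the
    training and holdout averages disagree by more than a noisy threshold.
    """
    rng = np.random.default_rng() if rng is None else rng
    train_avg = np.mean(train_vals)
    holdout_avg = np.mean(holdout_vals)
    # Compare the two estimates against a noise-perturbed threshold.
    if abs(train_avg - holdout_avg) > threshold + rng.normal(0, sigma):
        # Disagreement suggests overfitting: release a noisy holdout estimate.
        return holdout_avg + rng.normal(0, sigma)
    # Agreement: the training estimate is safe to release as-is.
    return train_avg
```

Because the analyst usually sees only the training-set answer, repeated adaptive queries reveal far less about the holdout set than directly re-evaluating on it each time would.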

Next Episodes


The State of Data Science @ Linear Digressions

📆 2015-11-10 05:36 / 00:15:40




Kalman Runners @ Linear Digressions

📆 2015-10-29 04:10 / 00:14:42



Neural Net Inception @ Linear Digressions

📆 2015-10-23 04:25 / 00:15:19



Benford's Law @ Linear Digressions

📆 2015-10-16 05:30 / 00:17:42