Linear Digressions

Neural Net Dropout

Neural networks are complex models with many parameters and can be prone to overfitting. There's a surprisingly simple way to guard against this: during training, randomly drop hidden units (and the connections attached to them), a technique known as dropout. It seems counterintuitive that undermining the structural integrity of the neural net makes it robust against overfitting, but in the world of neural nets, weirdness is just how things go sometimes.
Relevant links: https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
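
As a rough illustration (not from the episode itself), here is a minimal NumPy sketch of "inverted" dropout applied to a layer's activations; the layer sizes and drop probability are arbitrary assumptions for the example:

```python
import numpy as np

def dropout(activations, drop_prob=0.5, training=True):
    """Randomly zero out hidden units with probability drop_prob.

    Inverted dropout: surviving units are scaled by 1 / (1 - drop_prob)
    so the expected activation is unchanged and no rescaling is needed
    at test time.
    """
    if not training or drop_prob == 0.0:
        return activations
    keep_prob = 1.0 - drop_prob
    mask = (np.random.rand(*activations.shape) < keep_prob) / keep_prob
    return activations * mask

# Illustrative usage: a batch of 4 examples, each with 8 hidden units.
hidden = np.random.randn(4, 8)
print(dropout(hidden, drop_prob=0.5))
```

At test time the full network is used (training=False), which is why the surviving units are scaled up during training rather than scaling everything down afterward.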

Next Episodes

Disciplined Data Science @ Linear Digressions
📆 2017-09-25 03:49 / 00:29:34

Hurricane Forecasting @ Linear Digressions
📆 2017-09-18 03:37 / 00:27:57

Finding Spy Planes with Machine Learning @ Linear Digressions
📆 2017-09-11 04:11 / 00:18:09

Data Provenance @ Linear Digressions
📆 2017-09-04 03:35 / 00:22:48

Adversarial Examples @ Linear Digressions
📆 2017-08-28 04:25 / 00:16:11