Linear Digressions

Not every deep learning paper is great. Is that a problem?


Deep learning is a field that’s growing quickly. That’s good! There are lots of new deep learning papers published every day. That’s good too… right? What if not every paper out there is particularly good? What makes a paper good in the first place? It’s an interesting question to think about, and to debate, since there’s no clear-cut answer and there are worthwhile arguments on both sides. Wherever you find yourself coming down in the debate, though, you’ll appreciate the good papers that much more.

Relevant links:
https://blog.piekniewski.info/2018/07/14/autopsy-dl-paper/
https://www.reddit.com/r/MachineLearning/comments/90n40l/dautopsy_of_a_deep_learning_paper_quite_brutal/
https://www.reddit.com/r/MachineLearning/comments/agiatj/d_google_ai_refuses_to_share_dataset_fields_for_a/

Next Episodes


The Assumptions of Ordinary Least Squares @ Linear Digressions

📆 2019-02-04 00:24 / 00:25:07



Quantile Regression @ Linear Digressions

📆 2019-01-28 02:27 / 00:21:46



Heterogeneous Treatment Effects @ Linear Digressions

📆 2019-01-21 00:57 / 00:17:24



Facial Recognition, Society, and the Law @ Linear Digressions

📆 2019-01-07 03:03 / 00:42:46