Linear Digressions

Zeroing in on what makes adversarial examples possible

Adversarial examples are really, really weird: pictures of penguins that get classified with high certainty by machine learning algorithms as drumsets, or random noise labeled as pandas, or any one of an infinite number of labeling mistakes that humans would never make but computers make with joyous abandon. What gives? A compelling new argument makes the case that it’s not the algorithms so much as the features in the datasets that hold the clue. This week’s episode goes through several papers pushing our collective understanding of adversarial examples forward, and giving us clues to what makes these counterintuitive cases possible.

Relevant links:
https://arxiv.org/pdf/1905.02175.pdf
https://arxiv.org/pdf/1805.12152.pdf
https://distill.pub/2019/advex-bugs-discussion/
https://arxiv.org/pdf/1911.02508.pdf
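
The linked papers make the argument with image classifiers; as a rough, hypothetical illustration of the mechanism (not code from the episode or the papers), the sketch below runs the standard fast gradient sign method (FGSM) against a toy logistic-regression model on synthetic data. Every feature is only weakly correlated with the label, yet a small, gradient-aligned nudge to all of them at once is usually enough to flip the model's prediction. All data and parameter choices here are made up for illustration.

```python
# Minimal, hypothetical sketch: FGSM against a tiny logistic-regression
# classifier on synthetic data with many weakly predictive features.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: 100 weakly informative features per example
# (class means at -0.2 and +0.2, within-class noise std 1.0).
n, d = 500, 100
X = np.vstack([rng.normal(-0.2, 1.0, (n, d)), rng.normal(0.2, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Fit logistic regression with plain gradient descent.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

# Pick a class-0 example the model currently classifies correctly.
probs = sigmoid(X @ w + b)
idx = int(np.flatnonzero((y == 0) & (probs < 0.5))[0])
x, label = X[idx], y[idx]

# FGSM: x_adv = x + eps * sign(d loss / d x). For logistic regression the
# input gradient of the cross-entropy loss is (p - y) * w.
grad_x = (probs[idx] - label) * w
eps = 0.5  # per-coordinate budget, half the within-class noise std
x_adv = x + eps * np.sign(grad_x)

print(f"clean       P(class 1) = {sigmoid(x @ w + b):.3f}")
print(f"adversarial P(class 1) = {sigmoid(x_adv @ w + b):.3f}")
```

The point of the toy setup: the per-coordinate perturbation is well inside the within-class noise, but because the classifier aggregates many weak features, the coordinated shift is large in the direction the model cares about.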

Next Episodes

Unsupervised Dimensionality Reduction: UMAP vs t-SNE @ Linear Digressions

📆 2020-01-13 01:53 / ⌛ 00:29:34


Data scientists: beware of simple metrics @ Linear Digressions

📆 2020-01-05 23:54 / ⌛ 00:24:47


Communicating data science, from academia to industry @ Linear Digressions

📆 2019-12-30 02:53 / ⌛ 00:26:15


Optimizing for the short-term vs. the long-term @ Linear Digressions

📆 2019-12-23 03:50 / ⌛ 00:19:24