Linear Digressions

The Language Model Too Dangerous to Release


OpenAI recently created a cutting-edge new natural language processing model, but unlike all their other projects so far, they have not released it to the public. Why? It seems to be a little too good. It can answer reading comprehension questions, summarize text, translate from one language to another, and generate realistic fake text. That last capability in particular raised concerns inside OpenAI that the raw model could be dangerous in the hands of bad actors, so researchers will spend the next six months studying the model (and reading comments from you, if you have strong opinions here) before deciding what to do next. Regardless of where this lands from a policy perspective, the released snippets of auto-generated text are striking. We’re covering the methodology, the results, and a bit of the policy implications in our episode this week.
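The text-generation capability described above works autoregressively: the model repeatedly predicts the next token given everything generated so far. OpenAI's model is a large Transformer trained on web text, but the same loop can be sketched with a toy bigram model (this tiny corpus and the function names are illustrative, not from the episode):

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model trains on billions of words of web text.
corpus = "the model writes text and the model reads text and the model learns".split()

# Count which word follows each word (a bigram model: P(next | previous)).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length):
    """Autoregressive generation: greedily emit the most likely next word."""
    out = [start]
    for _ in range(length - 1):
        followers = bigrams.get(out[-1])
        if not followers:
            break  # dead end: no observed continuation
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)
```

Swapping the bigram counts for a neural network that conditions on the full history (and sampling rather than taking the greedy maximum) is, at a high level, what makes generated text look realistic.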

Next Episodes


The cathedral and the bazaar @ Linear Digressions

📆 2019-03-17 23:47 / ⌛ 00:32:36



AlphaStar @ Linear Digressions

📆 2019-03-11 02:18 / ⌛ 00:22:03



Are machine learning engineers the new data scientists? @ Linear Digressions

📆 2019-03-04 03:57 / ⌛ 00:20:46




K Nearest Neighbors @ Linear Digressions

📆 2019-02-18 00:57 / ⌛ 00:16:25