Annotator Bias

Data Skeptic

Modern deep learning approaches to natural language processing are voracious in their demand for large training corpora. Folk wisdom used to hold that around 100k documents were required for effective training. The availability of broadly trained, general-purpose models like BERT has made it possible to use transfer learning to achieve novel results on much smaller corpora.

Next Episodes

Data Skeptic

Annotator Bias @ Data Skeptic

📆 2019-11-23 01:00


Data Skeptic

NLP for Developers @ Data Skeptic

📆 2019-11-22 07:20


Data Skeptic

NLP for Developers @ Data Skeptic

📆 2019-11-20 01:00