Calculating Feature Importance<

Data Skeptic

For machine learning models created with the random forest algorithm, there is no obvious diagnostic to tell you which features matter most to the model's output. Some straightforward but useful techniques exist, revolving around removing or permuting a feature and measuring the resulting decrease in accuracy, or around the Gini impurity reduction in the trees' splits. We broadly discuss these techniques in this episode.
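The two approaches mentioned above can be sketched with scikit-learn, which exposes both the impurity-based (Gini) importance and a permutation-based importance. This is a minimal illustration, not the episode's own code; the iris dataset and all parameter choices here are assumptions for the sake of a runnable example.

```python
# Sketch: two common ways to rank features in a random forest.
# Assumes scikit-learn is installed; dataset and parameters are
# illustrative choices, not from the episode itself.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Gini importance: mean decrease in node impurity contributed by each
# feature, averaged over all trees in the forest.
gini_importance = forest.feature_importances_

# Permutation importance: shuffle one feature's values at a time and
# measure the drop in accuracy, breaking that feature's relationship
# with the target while leaving the others intact.
perm = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
perm_importance = perm.importances_mean
```

Note that the impurity-based scores are computed on the training data and can overstate the importance of high-cardinality features, which is one reason permutation importance on held-out data is often preferred.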

Next Episodes


Crawling with AWS Lambda @ Data Skeptic

📆 2016-10-18 02:00



NYC Bike Sharing Rebalancing @ Data Skeptic

📆 2016-10-14 02:00



Random Forest @ Data Skeptic

📆 2016-10-07 02:00



Election Predictions @ Data Skeptic

📆 2016-09-30 02:00
