Testing Goodness of Fit of the Model

Since we are doing logistic regression, I was wondering which "goodness of fit" test you think works best with your models.

Hi @dkderden,

The general approach in predictive modeling is to pick a metric in advance that tells us how well or poorly a given model is doing on the classification task at hand. This metric is what we ask our modeling tools to optimize in the course of fitting the model (for logistic regression, this is how the optimal weights β are found), and then we evaluate the model on a chunk of data for which the answers are withheld, to see how it does on new examples. (Nice explanation here.)
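To make the fit-then-evaluate workflow concrete, here's a minimal sketch using scikit-learn and a synthetic toy dataset (not the competition data — the dataset, feature counts, and split fraction here are all illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

# Toy stand-in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Withhold a chunk of data so the model is scored on examples it never saw.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression()   # fitting finds the optimal weights beta
model.fit(X_train, y_train)

# Score the model on the held-out examples using the chosen metric.
probs = model.predict_proba(X_test)
print(log_loss(y_test, probs))
```

The key point is that the metric is computed on `X_test`/`y_test`, which played no part in fitting, so the score reflects performance on new examples rather than memorization.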

From the point of view of the competition, the performance of a model is evaluated with a metric called logarithmic loss (log loss). This is only one of many possible performance metrics for classification, but the twist with log loss is that it heavily penalizes classifications that are very confident but wrong.
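You can see that "confident but wrong" penalty directly by computing log loss by hand; this is a small sketch of the standard binary log loss formula (the function name and example probabilities are just for illustration):

```python
import numpy as np

def logloss(y_true, p):
    """Binary log loss: -mean(y*log(p) + (1-y)*log(1-p))."""
    # Clip predictions away from 0 and 1 to avoid log(0).
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1, 1])  # both true labels are 1

# Mildly confident and right: small loss (-log 0.6 is about 0.51).
print(logloss(y, np.array([0.6, 0.6])))

# Very confident but wrong: large loss (-log 0.01 is about 4.61).
print(logloss(y, np.array([0.01, 0.01])))
```

Being wrong with 99% confidence costs roughly nine times as much here as being right with modest 60% confidence — that asymmetry is the whole point of the metric.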

Because we chose this particular metric for the competition, it’s probably the one you’ll want to use in your own exploration, but check out this post on the Dato blog for a discussion of some other ways to do it.

Hope that was helpful! Let me know if that was too basic or if it didn't answer your question.