
Scoring differences

To test the improvement of my models I have used

from sklearn.metrics import log_loss

which I believe is the same measure used for scoring submissions, but I get different results. For example, log_loss gives me roughly 0.11 locally, while my submission score is roughly 0.38. Can anyone explain why that is? Am I missing something?

Thank you!

Are you using train_test_split to test?

This would explain why the log loss score differs when you test locally vs. when you submit.

During submission, the score is computed against the actual held-out results, whereas when you test locally, you would be evaluating on data the model has already seen during training.
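To make the point above concrete, here is a minimal sketch (using a synthetic dataset and logistic regression, not the competition's actual data or model) showing that log loss measured on the data a model was trained on is typically lower than log loss on a held-out split:

```python
# Illustration only: compare log loss on training data vs. a held-out split.
# The dataset and model here are hypothetical stand-ins.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Scoring on the training data tends to be optimistically low,
# while the held-out split better approximates a submission score.
train_loss = log_loss(y_train, model.predict_proba(X_train))
test_loss = log_loss(y_test, model.predict_proba(X_test))
print(f"train log loss: {train_loss:.3f}")
print(f"held-out log loss: {test_loss:.3f}")
```

If your local 0.11 was computed on data the model saw during training (or on a split that leaked into training), a submission score of 0.38 is plausible; scoring only on the held-out portion of train_test_split should bring the two numbers closer.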

Thank you! Is it usual for the difference to be this large between a train_test_split estimate and the actual data?

This would really depend on your model. If you have a very accurate one, it will perform well on the actual data as well.