This category is for posts about the Richter’s Predictor: Modeling Earthquake Damage competition. Please keep all posts here specific to the competition.
Just one question: why use the F1 score instead of ROC AUC to assess the models?
I noticed the training labels file seems to have many more rows than the training data. Is there a reason why, and what do I have to do about it?
ROC AUC applies directly only when you have binary labels (it can be extended to multi-class via one-vs-rest averaging, but that isn't the metric here). This problem has 3 classes.
The F1 score is basically a balance between precision and recall. For multi-class problems it has several averaging variants (micro, macro, weighted). The evaluation metric here is the micro-averaged F1 score: rather than taking the mean of the per-class F1 scores (which is what macro averaging does), it aggregates the true positives, false positives, and false negatives across all classes and computes a single F1 from those totals. For single-label multi-class problems like this one, micro F1 works out to be the same as overall accuracy, so every prediction counts equally regardless of which class it belongs to.
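To make the averaging concrete, here is a minimal plain-Python sketch of micro-averaged F1 (the labels and predictions below are made-up toy data, not from the competition). In single-label multi-class classification, every wrong prediction counts as one false positive (for the predicted class) and one false negative (for the true class), so the aggregated precision and recall coincide and micro F1 equals accuracy. This should match `sklearn.metrics.f1_score(y_true, y_pred, average="micro")`.

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1: pool TP/FP/FN over all classes, then compute one F1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != p)  # each miss is a FP for the predicted class...
    fn = fp  # ...and a FN for the true class, so the pooled totals coincide
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy damage-grade labels (1, 2, 3), purely illustrative.
y_true = [1, 2, 3, 3, 2, 1, 2]
y_pred = [1, 2, 2, 3, 2, 1, 3]
print(micro_f1(y_true, y_pred))  # 5 of 7 correct -> 5/7, same as plain accuracy
```

Macro averaging would instead compute F1 separately for classes 1, 2, and 3 and take their unweighted mean, which weights rare classes as heavily as common ones.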
No, both files have 260,601 rows.
How do we interpret the geo_level features? I can’t find anything geographic in Nepal that would correspond to 30 regions for geo_level_1. Are they Districts/Wards/Municipalities? There are only 11 districts in the original dataset, so I’m guessing geo_level_1 has to be something else?