Scoring Metric Question

The metric shown under ‘submissions’ is AggLogLoss. From the looks of it, lower is better. In the benchmark post (http://drivendata.co/blog/ai-for-earth-wildlife-detection-benchmark/), a score of 0.05 is given, with the comment ‘Awesome!’.

However, looking at the leaderboard, I see the benchmark is ~0.77, and the leading submissions score below that. Has the metric changed? Where did the 0.05 from the benchmark post go?
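
For reference, here's a minimal sketch of what I assume AggLogLoss computes — the mean binary cross-entropy over all sample/label cells. The function name and exact aggregation are my assumptions; the thread doesn't give the formal definition:

```python
import numpy as np

def agg_log_loss(y_true, y_pred, eps=1e-15):
    """Assumed definition: mean binary cross-entropy over every
    (sample, label) cell. Lower is better; a perfect predictor
    scores 0."""
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    y_true = np.asarray(y_true, dtype=float)
    return float(np.mean(-(y_true * np.log(y_pred)
                           + (1 - y_true) * np.log(1 - y_pred))))

# Predicting 0.5 everywhere gives log(2) ~= 0.693, which is in the
# same ballpark as the ~0.77 benchmark on the leaderboard; a score
# near 0.05 would imply much more confident, mostly correct
# predictions.
print(agg_log_loss([[1, 0], [0, 1]], [[0.9, 0.1], [0.2, 0.8]]))
```

Under that assumption, 0.05 and 0.77 are very different regimes, which is why I'm wondering whether the metric or its scaling changed.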


We have updated those items, thanks!
