The leaderboard score currently displayed in the competition is shown with 4 digits of floating-point precision. Since the LB scores are really close, increasing the displayed precision to, say, 5 or 6 digits might help a few people quantify their submissions' scores more accurately.
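For example, two submissions whose metrics differ only at the fifth decimal place display identically at four digits (the scores below are made up purely for illustration):

```python
# Hypothetical scores, for illustration only: they differ at the 5th decimal.
score_a = 0.98763
score_b = 0.98758

# At 4 digits both round to the same displayed value...
print(f"{score_a:.4f}")  # 0.9876
print(f"{score_b:.4f}")  # 0.9876

# ...while 5 digits would separate them.
print(f"{score_a:.5f}")  # 0.98763
print(f"{score_b:.5f}")  # 0.98758
```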
Hey @binga,
In line with @bull’s response to a recent question about the leaderboard, it’s probably best not to put so much emphasis on your public leaderboard score (i.e. down to the < 0.0001 level).
Behind the scenes we keep plenty of precision in case of close calls. But broadly speaking, the best models will generalize to similarly generated data, so your best bet is probably skeptical modeling, cross-validation, and all that good stuff rather than trying to ace the public leaderboard. Does that make sense?
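To make that concrete, here is a minimal cross-validation sketch; the dataset, model, and scoring metric are placeholder assumptions, not this competition's actual setup:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for the competition's training set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Any estimator works here; logistic regression is just an example.
model = LogisticRegression(max_iter=1000)

# 5-fold CV yields a distribution of scores rather than a single
# noisy leaderboard number, which is a better signal of generalization.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```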
Thanks for your interest and for the question.
Yup. I completely agree with that. Trust CV. Not LB.
When the differences come down to the order of 1e-4, though, the higher precision would simply help people see where they stand! Never mind. Thanks!