About the scoring of submissions

Hi,

I found a difference between the score shown for my latest submission and the score I generated separately. In my code, after generating submission.csv, I also run the score function from metric.py and print the score to the terminal (so it appears in the log). Since the random_seed in metric.py is fixed, running metric.py should always produce the same score, especially for the HOC score. However, the final score shown on the submission page is different from what I see in the log.
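For context, my local check is roughly the following; metric.score and the exact loading code are placeholders for whatever metric.py actually exposes, not its real interface:

```python
# Rough sketch of the local check; metric.score and its signature are
# assumptions about metric.py's interface, not the actual API.
import pandas as pd
import metric  # the competition-provided scoring script, assumed importable

submission = pd.read_csv("submission.csv")

# With random_seed fixed inside metric.py, repeated runs of this line
# should print the same value, including for the HOC component.
print("local score:", metric.score(submission))
```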

By the way, in metric.py the columns selected for k-way marginal scoring are not chosen at random. Instead, all permutations are considered, which differs slightly from the problem description.

> Since the random_seed in metric.py is fixed, running metric.py should always produce the same score, especially for the HOC score. However, the final score shown on the submission page is different from what I see in the log.

The random seed actually used on the platform is different from the one fixed in metric.py, which is why the two scores don't match.
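As a minimal illustration (the column names and selection size here are made up, not the metric's actual ones), any seeded part of the scoring changes when the seed changes:

```python
import numpy as np

columns = ["age", "income", "state", "sex", "race"]  # made-up column names

# The same selection code gives different results under different seeds,
# so a score computed locally with metric.py's default seed can differ
# from the score computed on the platform.
for seed in (0, 12345):
    rng = np.random.default_rng(seed)
    picked = rng.choice(columns, size=3, replace=False)
    print(seed, list(picked))
```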

> By the way, in metric.py the columns selected for k-way marginal scoring are not chosen at random. Instead, all permutations are considered, which differs slightly from the problem description.

Good point. By saying “random,” as in Sprint 2, we just wanted to make clear that we weren’t guaranteeing any particular permutation of marginals. That said, you are correct that we happen to be using all of them.
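To make the distinction concrete, here is a minimal sketch of the two selection strategies being discussed; the column names, k, and sample size are made up for illustration and are not what metric.py uses:

```python
import itertools
import random

columns = ["age", "income", "state", "sex"]  # made-up column names
k = 2

# Exhaustive: every ordered k-column selection is scored.
all_perms = list(itertools.permutations(columns, k))

# Random: a seeded sample of k-column subsets, which is how the problem
# description could be read.
rng = random.Random(42)
sampled = rng.sample(list(itertools.combinations(columns, k)), 3)

print(len(all_perms), "permutations vs sampled subsets:", sampled)
```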