The difference in scoring between the two tracks

I submitted an h5 file to the Descriptor Track and a csv file to the Matching Track, and there is a large gap between their scores.

I am a bit confused, since I expected the scores to be the same: the csv file was generated from the h5 file using almost the same procedure as the official evaluation script.
In addition, the Performance Metric section states: "These scores will then be evaluated in the same manner as they are for the Matching Track".

Does anyone know anything about this?

Solved this.
Following the evaluation script, my csv-generation code filtered out query samples that have no ground truth. As a result, my submission only contained rows for Q00000 ~ Q25000, which of course degrades the score.
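For anyone hitting the same issue, here is a minimal sketch of turning descriptor outputs into a Matching Track csv while keeping a row for every query. The function name, the csv header, and the ID formats are my own assumptions, not the official ones; the point is only that the loop runs over all queries instead of a ground-truth-filtered subset.

```python
import csv
import numpy as np

def descriptors_to_csv(query_desc, ref_desc, query_ids, ref_ids, out_path):
    """Write one (query, best reference, score) row per query.

    Hypothetical sketch: column names and score formatting are assumptions,
    not the official submission spec.
    """
    # L2-normalize so the inner product equals cosine similarity
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    r = ref_desc / np.linalg.norm(ref_desc, axis=1, keepdims=True)
    sims = q @ r.T                    # (num_queries, num_refs) similarity matrix
    best = sims.argmax(axis=1)        # index of best reference per query
    with open(out_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["query_id", "reference_id", "score"])  # assumed header
        for qi, ri in enumerate(best):
            # Iterate over ALL queries -- do not drop queries lacking
            # ground truth, or the Matching Track score will suffer.
            w.writerow([query_ids[qi], ref_ids[ri], f"{sims[qi, ri]:.6f}"])
```

In a real pipeline the descriptors would come from the h5 file (e.g. via h5py) and you would likely use an ANN library for the search, but the filtering mistake is independent of those details.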
