I submitted an h5 file to the Descriptor Track and a csv file to the Matching Track, and there is a large gap between their scores.
This confuses me, because I expected the two scores to be identical: the csv file was generated from the h5 file in almost the same way as the official evaluation script does it.
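For concreteness, the conversion I mean looks roughly like the sketch below. The dataset names ("query", "reference"), the ID fields, and the inner-product, best-match-per-query logic are illustrative assumptions on my part, not the official format; the actual layout should follow the challenge's submission spec:

```python
import h5py
import numpy as np
import pandas as pd

# Load descriptors from the Descriptor Track submission file.
# Dataset and ID names here are assumptions; adjust to the real format.
with h5py.File("descriptors.h5", "r") as f:
    query = f["query"][:]          # shape: (n_queries, dim)
    reference = f["reference"][:]  # shape: (n_refs, dim)
    query_ids = [s.decode() for s in f["query_ids"][:]]
    ref_ids = [s.decode() for s in f["reference_ids"][:]]

# Score every query against every reference by inner product
# (assuming the evaluation ranks pairs by descriptor similarity).
scores = query @ reference.T       # shape: (n_queries, n_refs)

# Keep the single best reference per query for the Matching Track csv.
best = scores.argmax(axis=1)
rows = [
    {"query_id": qid, "reference_id": ref_ids[j], "score": float(scores[i, j])}
    for i, (qid, j) in enumerate(zip(query_ids, best))
]
pd.DataFrame(rows).to_csv("matches.csv", index=False)
```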
In addition, the Performance Metric section states that "These scores will then be evaluated in the same manner as they are for the Matching Track."
Does anyone know what could cause this gap?