Too many missing ground truths

After carefully designing my algorithm, I found that there are many missing ground truths in the given file.
I manually selected some pairs as examples:

I suggest that you invite some labelers to double-check the ground truths. I think the missing labels hurt the AP of my algorithm quite a lot! Thanks!
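To illustrate the concern: when a genuine copy is absent from the ground truth, the evaluator counts it as a false positive, which lowers Average Precision even for a correct ranking. Below is a minimal sketch of this effect; it is a simplified per-query AP, not the competition's official scoring code, and the reference IDs are hypothetical.

```python
# Sketch: how missing ground-truth pairs depress Average Precision (AP).
# Hypothetical example, not the competition's official metric code.

def average_precision(ranked_ids, positive_ids):
    """AP over one ranked list: mean of precision@k at each true positive."""
    hits = 0
    precisions = []
    for k, pred in enumerate(ranked_ids, start=1):
        if pred in positive_ids:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / max(len(positive_ids), 1)

# Suppose the algorithm ranks five reference images for one query,
# and r1, r3, r5 are all genuine copies of the query.
ranked = ["r1", "r2", "r3", "r4", "r5"]

full_truth = {"r1", "r3", "r5"}   # complete labels
missing_truth = {"r1", "r5"}      # r3 is an unlabeled (missing) copy

print(average_precision(ranked, full_truth))     # ~0.756
print(average_precision(ranked, missing_truth))  # 0.7
```

The ranking is identical in both cases; only the labels differ, yet the score drops because the correctly retrieved but unlabeled copy `r3` is scored as an error.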


Hi wenhaowang. Great question!

You are correct in pointing out that some of these reference images are very similar to the query images, but the reference images are not technically the source images in these cases and wouldn’t meet the competition definition of being a match. For example, you can see variations in the facial expressions from most of these examples that indicate they are actually different images and not copies.

The core challenge presented in this competition is related to copy detection, which means that you may come across examples where images are very similar but not necessarily derived copies of one another. The organizers have attempted to limit such ambiguous examples though you should expect that some will remain, as described on page 8 of the competition paper.

I hope this is helpful. Good luck!

Thanks for your reply, but I do NOT agree that variations in facial expression indicate the images are different rather than copies. The paper states that DeepFake techniques were used, so facial expressions can change substantially between a source image and its copy. This is confusing even for participants: some pairs with large variations in facial expression are labeled as matches, while in other cases little variation still indicates a non-matching pair.