After reading the problem description and the evaluation metric section, it's still unclear how IoU is calculated. In particular:
1. The IoU score could be calculated for each 512x512 tile independently and then averaged across all test images.
2. The IoU score could be calculated per flooding event and averaged afterwards.
In the case of option 1, there are important edge cases to clarify when the target mask has no positive pixels. What would the IoU score for that tile be if the model predicts no positive pixels, and what if it predicts some positive pixels? I assume 1.0 and 0.0 respectively, but it would be great if you could clarify this.
IoU is calculated at the pixel level. It is defined as the sum of the intersection of water pixels divided by the sum of the union of water pixels across all images. Below is some pseudocode to demonstrate this calculation. We will add this to the competition problem description as a reference.
import numpy as np

intersection = 0
union = 0
for pred, actual in file_pairs:
    valid = actual != 255            # mask of valid (non-missing) pixels
    actual = actual[valid]
    pred = pred[valid]
    intersection += np.logical_and(actual, pred).sum()
    union += np.logical_or(actual, pred).sum()

iou = intersection / union
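Because the intersection and union are accumulated over all images before the single final division, the per-tile edge case from the question never arises: a tile whose label has no water pixels contributes nothing to the intersection, and any false positives on it only add to the union. For illustration only, here is one way the same accumulation could be wrapped into a small runnable helper; the function name and the random example masks are assumptions for this sketch, not part of the official scoring code.

import numpy as np

def global_iou(file_pairs, missing_value=255):
    # Accumulate intersection and union across every (prediction, label)
    # pair, then divide once at the end (global pixel-level IoU).
    intersection = 0
    union = 0
    for pred, actual in file_pairs:
        valid = actual != missing_value
        intersection += np.logical_and(actual[valid], pred[valid]).sum()
        union += np.logical_or(actual[valid], pred[valid]).sum()
    return intersection / union

# Toy example with two random 512x512 binary masks.
rng = np.random.default_rng(0)
labels = [rng.integers(0, 2, (512, 512)) for _ in range(2)]
preds = [rng.integers(0, 2, (512, 512)) for _ in range(2)]
print(global_iou(zip(preds, labels)))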