Kelp Wanted Competition - 2nd place solution

Hello everybody,

It took a while to put together, but I have finally finished cleaning up and documenting the code, as well as writing the solution's technical report. It is more or less finished; expect a few minor changes here and there.

  • You can find the code on my GitHub: here
  • Project documentation is hosted on GitHub pages: kelp-wanted-competition-docs
  • Technical report is on the same site
  • A dev-log detailing my progress on the competition is available there too

There is a TL;DR section for those who don't have the time to read the full report :smiley:

I look forward to reading about other people’s solutions.

Best,
Michal


Thanks so much for the detailed report!

Any reason for using bf16-mixed for inference when 16-mixed was used for training?

The Tesla T4 GPUs that I used for most of my training runs do not support bfloat16 training. If you try to run a training job with this setting, you will get `RuntimeError: Current CUDA Device does not support bfloat16. Please switch dtype to float16.` To fully utilize this form of mixed-precision training, a GPU with Ampere or later architecture is needed. Since I have an RTX 3090 in my local PC (which I used to submit all solutions), the switch from 16-mixed to bf16-mixed at inference time gave me a consistent, although very marginal, bump in LB scores. That being said, when I tried to run inference with 32-true precision, it did not improve the LB scores.
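For reference, here is a minimal sketch of what that precision split looks like with PyTorch Lightning's Trainer; the `model` and `datamodule` objects are placeholders, not the actual code from the repo:

```python
import lightning.pytorch as pl

# Training on a Tesla T4 (pre-Ampere): bf16 is unsupported, so fall back to fp16 mixed precision.
trainer = pl.Trainer(accelerator="gpu", precision="16-mixed", max_epochs=50)
trainer.fit(model, datamodule=datamodule)  # model/datamodule are placeholders

# Inference on an RTX 3090 (Ampere): bf16 mixed precision is available.
predictor = pl.Trainer(accelerator="gpu", precision="bf16-mixed")
predictions = predictor.predict(model, datamodule=datamodule)
```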

> the switch from 16-mixed to bf16-mixed at inference time gave me a consistent, although very marginal, bump in LB scores.

Thanks, very interesting.

Which details do you think were the most impactful, i.e. gave you the largest increase in score?

There are a bunch of things that worked for me. Some of the most impactful ones are listed in the table below; a sketch of the threshold-optimization step follows the table:

| Method | Public LB score | Score increase |
| --- | --- | --- |
| Baseline UNet + ResNet-50 encoder | 0.6569 | N/A |
| AOI grouping with cosine similarity and a more robust CV strategy | 0.6648 | +0.0079 |
| Dice loss instead of cross-entropy | 0.6824 | +0.0176 |
| Weighted sampler | 0.6940 | +0.0116 |
| Appending 17 extra spectral indices + using a water mask to mask land in the spectral indices | 0.7083 | +0.0143 |
| EfficientNet-B5 instead of ResNet-50 | 0.7101 | +0.0018 |
| Decision threshold optimization + bf16-mixed inference | 0.7132 | +0.0031 |
| 10-fold model ensemble | 0.7169 | +0.0037 |
| Training for 50 epochs + weighted average + soft labels | 0.7210 | +0.0041 |
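To make the decision-threshold row more concrete, here is a minimal sketch of the idea (an illustration, not the exact code from the repo): sweep candidate thresholds over validation probabilities and keep the one that maximizes the Dice score. The threshold range and step size are assumptions.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Binary Dice coefficient between two 0/1 masks."""
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def find_best_threshold(probs: np.ndarray, targets: np.ndarray) -> tuple[float, float]:
    """Sweep thresholds on validation predictions; return (best_threshold, best_dice)."""
    best_t, best_dice = 0.5, 0.0
    for t in np.arange(0.30, 0.71, 0.01):  # candidate range is an assumption
        dice = dice_score((probs >= t).astype(np.uint8), targets)
        if dice > best_dice:
            best_t, best_dice = float(t), dice
    return best_t, best_dice
```

With `probs` holding per-pixel kelp probabilities and `targets` the ground-truth masks, the threshold found on validation data is then applied to the test-time predictions.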