Feedback for the organizers

Now that the submission deadline has passed, I want to give some feedback and hints to the organizers. First of all, this challenge has been very interesting and inspiring, and developing the model system has indeed been fun. However, some details have been bothering me, and for future weather/hydrology-related competitions (I really hope they keep organizing them!) it would be nice if these could be taken into account.

  • The competition time was very short given the broadness of the topic and the number of available datasets. Innovations don’t come easily; they require hard work, and hard work requires time. Modeling projects like this usually last for months or years at research institutes, and for good reason. Additionally, not everyone can work very intensively on these challenges due to other duties. Personally, I became a dad for the first time at the beginning of this month, which greatly affected my ability to work on this. The number of successful submissions was really low compared to the total number of sign-ups, and I wonder whether giving competitors more time to develop good models would have helped with that.
  • Allowing more computing resources would have helped a lot, too. In my opinion they were really modest, at least on the CPU side! I managed to run the code flawlessly on my old, crappy laptop, easily within the time limits, but the memory limitation of the cloud platform crashed my runs.
  • Instead of a test set of fixed odd years, a much better alternative would have been to use the time series cross-validator (sklearn.model_selection.TimeSeriesSplit from scikit-learn) and prohibit fitting models to future data; see the sketch after this list. That would have been a simple and realistic way to validate the models without bias, and it would have made the code easier to validate afterwards, too. It would also have allowed long (multi-month and multi-year) lookback windows, because there would be no risk of future data leaking: “leakage” from previous years can in fact be a positive thing, leading to increased model skill. For example, the streamflow of the previous year can be an indicator of the state of the groundwater supplies for the next year, and exploiting this kind of relationship could be important, at least for some of the sites.
  • Using the time series cross-validator would also have streamlined the rules, which were unnecessarily complicated and changed during the competition, leading to confusion, fear of violating the rules accidentally, and the exclusion of some potentially useful and legal approaches “just to be sure not to break the rules”.
  • Publishing the technical details of the submission only in the late phase of the competition led to a massive reorganization of my code, which was frustrating and took a lot of time. If I had had that information earlier, I would have designed the code differently in the first place. It also contributed to my solution failing, as I apparently could not modify the code enough.
  • It is a pity that I now have to trash my work completely, as I am not allowed to participate in the remaining challenges. For example, in the “Streamflow Forecast Rodeo” competition a few years back (in which I participated under the alias salmiaki), competitors were allowed to enroll at any stage and, additionally, to change their models throughout the whole year. Very merciful! And it allowed me to develop a really good model over the course of months. :+1:
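
To illustrate the validation scheme suggested above, here is a minimal sketch of how TimeSeriesSplit could be used so that each model is fitted only on data strictly before its test fold. The file name streamflow.csv, the column names, and the RandomForestRegressor baseline are hypothetical placeholders and not anything from the actual competition setup.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

# Hypothetical data: chronologically ordered records, oldest first.
df = pd.read_csv("streamflow.csv").sort_values("date")
X = df[["snowpack", "precip", "prev_year_flow"]].to_numpy()
y = df["streamflow"].to_numpy()

# Each split trains only on data preceding the test fold,
# so no future information can leak into the fitted model.
tscv = TimeSeriesSplit(n_splits=5)
scores = []
for train_idx, test_idx in tscv.split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    scores.append(mean_absolute_error(y[test_idx], pred))

print("MAE per fold:", np.round(scores, 2))
```

With this setup, long lookback features (such as the previous year’s streamflow in the sketch) are perfectly legal, because the only data the model never sees during fitting is data from the future.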

Congratulations on becoming a father, Salmiaki! I’m tickled to see the name – you’re tough competition, let me tell you!

I don’t have much to say other than putting in a second vote for your last point there. I wouldn’t have been able to compete in that one if they hadn’t allowed late sign-ups; in fact, for better or worse, I usually seem to be late to the party for these things (not this time, fortunately).