Could you confirm a few things I infer from the new section, "Forecast, Overall, and Explainability Prize Preview"?
The hindcast report is only required to be considered for the hindcast prizes (this seems clear) and is not reviewed during evaluation for the overall prizes.
The Jan 11 forecast code submissions and March 15 report submissions can include changes to our models that we make after our final hindcast submission.
The 30% of the evaluation criteria assigned to “Forecast Skill (Hindcast)” is not an evaluation of the hindcast code submitted in December but rather of a new submission that will include the mentioned cross-validation. And we are not required to use the same model we submitted in December.
Evaluation of our final model report and explainability track report requires that we have submitted hindcast code, but doesn’t require us to discuss it in the report.
Basically, I’m out of time and submitted something to the hindcast so I can remain in the running for the forecast stage. But if I’m beholden to what I submitted, I’m in trouble: it’s hard to write a report that says I skipped 80% of the potential input data because I ran out of time! If, however, I have until Jan 11 to modify my model and can then submit a report based on the new model, it might be worth my time to continue. Any clarification on these points would be greatly appreciated. Thanks!
The hindcast report is only required to be considered for the hindcast prizes (this seems clear) and is not reviewed during evaluation for the overall prizes.
This is correct. The final model report will be reviewed for the overall prizes instead of the hindcast model report. The final model report is expected to be an expanded version of the hindcast model report.
The Jan 11 forecast code submissions and March 15 report submissions can include changes to our models that we make after our final hindcast submission.
Yes, you will be allowed to make updates to your model in between the deadlines of the different stages. At the very least, additional training data will be released (the years that are in the Hindcast Stage holdout test set) that we expect you to use for training for the Forecast Stage and will be part of the cross-validation.
The 30% of the evaluation criteria assigned to “Forecast Skill (Hindcast)” is not an evaluation of the hindcast code submitted in December but rather of a new submission that will include the mentioned cross-validation. And we are not required to use the same model we submitted in December.
Correct. The cross-validation results will supersede the Hindcast Stage performance for the overall prize evaluation.
Evaluation of our final model report and explainability track report requires that we have submitted hindcast code, but doesn’t require us to discuss it in the report.
In order for the judges to understand your methodology, the final report requirements may ask you to summarize the changes that you’ve made between the different stages of the challenge. However, as noted, the performance in the Hindcast Stage will not be considered.
The deadlines for the Hindcast code submission (Dec 21st), the Hindcast model report (January 26th), and the Forecast code submission overlap in ways that suggest my Hindcast report could end up including work I didn’t actually complete until after Dec 21st (e.g., I intend to continue experimenting with allowed data sources). Is there anything I need to be aware of in that respect, or can I safely include all of the data I have on hand at the time of writing the report, without worrying about what date I discovered it on?
Hi @jayqi, does it mean that we will have another leaderboard for cross-validation results of 20 years LOOCV? Or is it just based on the report and code submission?
@mmiron For the Hindcast Model Report, please only include content that matches your code submission for the Hindcast Stage. For any updates made to your model later for the Forecast Stage or the Overall evaluation submissions, those should be reflected in the Final Model Report that is due later.
Hi @jayqi, does it mean that we will have another leaderboard for cross-validation results of 20 years LOOCV? Or is it just based on the report and code submission?
@rasyidstat Submission requirements and instructions for the LOOCV will be posted at a later date. The way that will be submitted will be separate from the Hindcast Stage evaluation that is happening right now.
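For anyone unfamiliar with the procedure being discussed: leave-one-out cross-validation over 20 years means each year of the test period is held out in turn while the model is trained on the remaining 19, producing one score per year. The sketch below is only an illustration of that mechanic with toy data and a trivial mean-predictor baseline; the function names, data shapes, and metric are assumptions, not the challenge's actual submission format.

```python
import numpy as np

def loocv_by_year(years, X, y, fit, predict, score):
    """Leave-one-year-out CV: hold out each year in turn, train on the
    remaining years, and score predictions on the held-out year."""
    scores = {}
    for held_out in sorted(set(years)):
        train = years != held_out          # boolean mask: all other years
        model = fit(X[train], y[train])
        scores[held_out] = score(y[~train], predict(model, X[~train]))
    return scores

# Toy data: 20 "years" with 5 samples each (hypothetical, for illustration).
rng = np.random.default_rng(0)
years = np.repeat(np.arange(2004, 2024), 5)
X = rng.normal(size=(len(years), 3))
y = rng.normal(size=len(years))

# Trivial baseline "model": predict the training-set mean everywhere.
fit = lambda X, y: y.mean()
predict = lambda m, X: np.full(len(X), m)
rmse = lambda yt, yp: float(np.sqrt(np.mean((yt - yp) ** 2)))

scores = loocv_by_year(years, X, y, fit, predict, rmse)
print(len(scores))  # one score per held-out year: 20
```

The per-year scores (rather than a single pooled number) are what make this kind of evaluation comparable across teams regardless of how each model is trained.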