I have several questions about the evaluation phase:
Can we choose not to submit a prediction in a particular week, or must we submit a prediction every week to remain eligible for the prize?
How will model verification be done? Will the host rerun the code for all submission points after the evaluation phase ends? What happens if there is a discrepancy between a participant's submitted predictions and the rerun results because the input data changed? Is it possible that the HRRR/MODIS data get updated?
Is it possible that ground truth for past dates is later updated, revised, added, or removed due to measurement errors or other issues?
Quoting the rules: "Predictions will be evaluated against ground truth labels as they become available, and your position on the public leaderboard will be updated each week. It is reasonable that you may need to implement minor fixes to your data processing pipeline to resolve any unforeseen changes to the input data (e.g., format, type, or API updates) during this time period."
- What kinds of data processing pipeline modifications are allowed? For example, can we update how the pipeline handles missing data?
I am concerned about data availability and problems that might arise in the future. If input data are missing, the model will predict incorrectly. My model depends on HRRR, and more than 50% of the data were missing for the latest submission because the Azure data source was not accessible. I did not use the AWS source because it was not previously permitted.
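If fallback sources were allowed, the kind of pipeline change I have in mind is a small retry/fallback wrapper like the sketch below. The function name, the source labels, and the fetch helper are hypothetical illustrations, not the competition's actual endpoints or my real pipeline code:

```python
def fetch_with_fallback(key, sources, fetch):
    """Try each data source in order and return the first successful payload.

    key     -- identifier of the file to retrieve (e.g. an HRRR file path)
    sources -- ordered list of source labels, primary first, mirrors after
    fetch   -- callable (source, key) -> bytes that raises on failure
    """
    errors = []
    for source in sources:
        try:
            return fetch(source, key)
        except Exception as exc:  # sketch only; a real pipeline would narrow this
            errors.append((source, exc))
    # Every source failed: surface all errors so the outage is diagnosable.
    raise RuntimeError(f"all sources failed for {key!r}: {errors}")
```

The point of this shape is that the model code itself is untouched; only the data-retrieval layer changes, which I hope falls within "minor fixes to your data processing pipeline".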