Problem clarification

I have a few questions:

  1. How many sites should be in the submission? We have 18130 cells in grid_cells.geojson, 10878 in the train labels, and 9066 in the sample submission.

  2. How far ahead should we forecast each week? 1 week, 2 weeks, or all weeks until July 1?

  3. How will you evaluate on a weekly basis? Will you run evaluation on your servers (in which case we need hardware specs, an evaluation timeout, and so on; this is really important), or will we evaluate ourselves and submit a fresh submission every week (in which case I have no idea how you would enforce the “no changes” and “used sources” parts of the rules)?

  4. Could you please provide more details about the evaluation process? For example: on Jan 11, 6:00-7:00 UTC, you execute all submissions, and every submission should output 18130*2 values for the dates Jan 18 and Jan 25. The evaluation time is also important, as it may affect data availability for some sources, which in turn may affect the model and the sources used.

  5. What will you do if a submission doesn’t produce output for some source and the ground truth for that source is null as well? What will you do if a submission works fine on most execution dates but fails without valid output once (or twice)?

  6. I’ve checked cell_id = 01be2cc7-ef77-4e4d-80ed-c4f8139162c3 and it’s not 1 x 1 km, it’s 0.8 x 0.8 km. Is that fine?

  7. Only 218 rows in train_labels have at least 10% non-null values (see the sketch below). The other rows (out of the 10.5k or 18k) may have a few zeros, nothing at all, or just two measures such as 0 and 110. These sparse values may have a random but significant impact on the final score during testing. For example, 0dabd62e-e70a-4d05-8e2c-22aedebc94ab has known measures for only 3 dates: 2018-03-31 = 4.8, 2018-05-24 = 0.1, and 2019-04-07 = 110.1, and that last 110 may matter more than a “fair” RMSE over the main 218 cells with well-known data. How will you deal with this?
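
For reference, here is a minimal sketch of the check I ran, assuming pandas and that train_labels.csv has cell IDs in its first column and one column per date:

```python
import pandas as pd

# Load the training labels; the first column is assumed to hold
# cell IDs and the remaining columns to be dates.
labels = pd.read_csv("train_labels.csv", index_col=0)

# Fraction of dates with a non-null SWE measurement, per cell.
frac_known = labels.notna().mean(axis=1)
print((frac_known >= 0.10).sum(), "cells have at least 10% non-null labels")

# One of the sparse cells mentioned above:
print(labels.loc["0dabd62e-e70a-4d05-8e2c-22aedebc94ab"].dropna())
```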

Hi @rekcahd, welcome to the competition and thanks for the questions.

For the Development Stage, you should submit SWE estimates for every grid cell and date contained in submission_format.csv. You will only be evaluated on a subset of these values. See this thread regarding your last question.
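
As a rough illustration (not an official template; this assumes pandas and that submission_format.csv has cell IDs in its first column and one column per date), filling in the format could look like:

```python
import pandas as pd

# Minimal sketch: load the submission format and fill every
# (grid cell, date) entry with a SWE estimate.
sub = pd.read_csv("submission_format.csv", index_col=0)

for date in sub.columns:
    # Placeholder; replace with your model's SWE estimate for each
    # grid cell on this date.
    sub[date] = 0.0

sub.to_csv("submission.csv")
```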

For the Model Evaluation Stage you will submit predictions each week (e.g., a single date column from the submission format at a time). You may only use data from the list of approved data sources up through the date of prediction. More information on data access will be made available at the beginning of this stage. As noted, you will also be required to submit frozen model weights, which must be used to produce your weekly predictions during the real-time evaluation period.
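
For example, a weekly submission during this stage might be assembled like the sketch below (the date and placeholder values are hypothetical, and the exact mechanics will be confirmed when the stage opens):

```python
import pandas as pd

# Sketch of a weekly submission: one date column from the full
# submission format at a time. The date below is hypothetical.
sub = pd.read_csv("submission_format.csv", index_col=0)
week = "2021-01-18"

# Placeholder; replace with your frozen model's predictions for this
# week, aligned to the grid cell index.
sub[week] = 0.0
sub[[week]].to_csv(f"submission_{week}.csv")
```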

Regarding grid cell size, EPSG:4326 uses degrees as units. If you re-project to EPSG:3857, which uses meters as units, you will see 1 km x 1 km grid cells.
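
You can verify this yourself; here is a quick sketch, assuming geopandas is installed and that the GeoJSON exposes a cell_id property as in your question:

```python
import geopandas as gpd

# Load grid cells (EPSG:4326, degrees) and re-project to EPSG:3857,
# which uses meters as units.
cells = gpd.read_file("grid_cells.geojson").to_crs(epsg=3857)

# Measure the bounding box of the cell from the question.
cell = cells.loc[cells["cell_id"] == "01be2cc7-ef77-4e4d-80ed-c4f8139162c3"]
minx, miny, maxx, maxy = cell.geometry.iloc[0].bounds
print(f"width: {maxx - minx:.0f} m, height: {maxy - miny:.0f} m")  # ~1000 m each
```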

Good luck!

Hello @tglazer,

If I understand correctly, the development stage and the evaluation stage are very different:

  • During the development stage, predictions should be submitted for the whole period 2020-01 to 2021-07. This stage is about long-term prediction of SWE.
  • During the evaluation stage, predictions should be submitted only for the current week (“a single date column from the submission format at a time”). This stage is about real-time prediction of SWE.

So the end goal of the competition is to have a real-time, high-resolution estimate of SWE based on data that does not come directly from SNOTEL, CDEC, or Airborne Snow Observatory (ASO) data.

I am not totally sure I understand why we can’t use data from the stations from the previous days, as it may help to correct the errors the models make on the SWE estimate, which will accumulate over time.

@simon.jegou You are permitted to use SNOTEL and CDEC data from the set of stations contained in ground_measures_metadata.csv and shared in the train features and test features files. Your solution cannot use any other historical ground measure data, because your solution should be able to estimate SWE in locations where no historical ground station or ASO data is available.

@tglazer

Hi,

During the competition, will we get to use up-to-date ground measures data?

Will the dataset ground_measures_test_features.csv (or an equivalent) be updated with new data every week during the competition?

Thank you.

@timothee.henry Yes, we will provide updated ground measure features each week throughout the evaluation phase of the competition. Data access instructions will be available on January 11 (beginning of Submission Testing Stage 2a).

Great. Thanks for the feedback. @tglazer

Hi @tglazer,
When you say “Yes, we will provide updated ground measure features each week throughout the evaluation phase of the competition,” do you mean by “evaluation phase” the period 11 Jan - 15 Feb or 11 Jan - 1 Jul?

Thanks

@mchahhou Updated ground measure features will be made available weekly from January 11 through July 1, 2021.