Defining the conditions for "Discussion of Performance"

A required section of the model report is the discussion of performance.

For some of those conditions, I was wondering if you could provide definitions.

  • Location: snow climate zones. Is there a source we can refer to for assigning the sites to climate zones? Or could you provide a site-to-zone mapping?
  • Climate conditions: Again, what definition of a dry/normal/wet year should we work with? Alternatively, could you provide a year-to-climate mapping?
  • Streamflow volume: Are there definitions of low or high volume for a stream, or can we just go with, e.g., below and above average (per site)?
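For the streamflow point, one reasonable interpretation (purely a sketch on my part, not an official definition) would be a per-site tercile split of historical seasonal volumes into dry/normal/wet categories. All names below are hypothetical:

```python
def classify_years(volumes):
    """Classify water years at one site as 'dry', 'normal', or 'wet'
    using terciles of that site's historical seasonal volumes.

    volumes: dict mapping year -> seasonal volume for a single site.
    Returns a dict mapping year -> category label.
    """
    ordered = sorted(volumes.values())
    n = len(ordered)
    lo = ordered[n // 3]        # lower tercile boundary
    hi = ordered[(2 * n) // 3]  # upper tercile boundary
    labels = {}
    for year, v in volumes.items():
        if v <= lo:
            labels[year] = "dry"
        elif v >= hi:
            labels[year] = "wet"
        else:
            labels[year] = "normal"
    return labels

# Hypothetical example volumes (thousand acre-feet) for one site
example = {2018: 120.0, 2019: 310.0, 2020: 95.0,
           2021: 200.0, 2022: 410.0, 2023: 180.0}
labels = classify_years(example)
```

The same tercile idea could be applied to classify years as dry/normal/wet site by site, which would at least be consistent across reports even without an official mapping.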

If the model reports use similar definitions, it would also make them more comparable.


Hi @kurisu,

Thanks for your patience. We don’t have any standardized definitions for these conditions, and we ask that you make some reasonable analysis choices in the reports.

In our previous Snowcast Showdown competition, we had a similar section in the model reports about performance across conditions. You can see some report examples in the Snowcast Showdown winners repository.

Hi @jayqi,

I think it’s pretty apparent that my experience only tangentially intersects with hydrology and conventional statistical analysis. I’m concerned that I may not be fully aware of what the judges will specifically be looking for. To that end, can you provide any insights into precisely what it is that they consider the goal of this challenge, as it relates to the participants? I like to think that I write great software, and science is old hat to me, but if I’m coming at an analysis from a different perspective than what’s expected and/or desired, I’m very likely to miss the mark.

To be as specific as possible, what I’m asking is what the fundamental interest is here: are they looking to evaluate our solutions as a proof-of-concept implementation of theory, or is the interest more about the analysis of our results than our solutions themselves? I wish I had the opportunity to get feedback, in order to be sure I provide something of substance, but I don’t. My personal feeling is that I’m doing too much guess work with respect to what the focus of my report should be, and frankly I’m uneasy about that.

You’ll have to pardon my anxiety; after an investment of this much time, I feel it’s more or less to be expected. Still, any additional information would be useful, thank you.

Hi @mmiron,

The overall goal of the challenge is to produce accurate and useful water supply forecast models. One way that we are assessing the accuracy and robustness of your models is through the cross-validation scores. The reports are primarily intended to provide additional supporting information to convince judges about the merits of your models.

There is a small component (Clarity, 10%) that is based on the quality of presentation of the report itself.

Hope that helps! Let me know if you have further questions.