Prediction csv file output path

Hi. Thank you for launching this interesting competition and for the detailed competition description. Can you please confirm the correct output path where the prediction csv file should be written? The competition info shows three different output paths:

1. The "Predictions Format" paragraph has this comment (plural file name):
   Your main.py script should produce a predictions file submissions/submissions.csv

2. The examples in the GitHub repo show this path (singular file name):
   OUTPUT_FILE = ROOT_DIRECTORY / "submission" / "submission.csv"

3. The code sample in the "What you submit" paragraph shows this path:
   OUTPUT_FILE = ROOT_DIRECTORY / "submission.csv"

Thank you.

Good catch, @vbookshelf!

The correct output path for the predictions file is:
ROOT_DIRECTORY / "submission" / "submission.csv"

The competition info on the website has been fixed. Thanks for pointing this out!
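For anyone landing here later, here is a minimal sketch of the write step in main.py using the confirmed path. The header columns below are hypothetical placeholders, not the real predictions format; check the "Predictions Format" section for the actual columns.

```python
from pathlib import Path
import csv

def write_predictions(rows, root_directory):
    """Write prediction rows to <root>/submission/submission.csv."""
    output_file = Path(root_directory) / "submission" / "submission.csv"
    # create the submission/ directory if it does not already exist
    output_file.parent.mkdir(parents=True, exist_ok=True)
    with open(output_file, "w", newline="") as f:
        writer = csv.writer(f)
        # hypothetical header -- replace with the competition's real columns
        writer.writerow(["query_id", "prediction"])
        writer.writerows(rows)
    return output_file
```

The key point is the directory level: the csv goes inside a `submission` folder under the root, not at the root itself.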

Thank you, @mike-dd, for responding quickly. If you could share a link to the actual submission.zip file that was uploaded for the Benchmark: Random Guessing solution, it would help relative newcomers like me get a clearer understanding of what the submission folder structure needs to be. So far, all my submissions have failed.

Hi @vbookshelf,

You’re going to want to have the main.py file in the root of the zip archive, not a subdirectory like /submission/main.py.

It might help to try following the example on the benchmark blog post. In this case the main.py file is at the top level of your local benchmark_src directory. Note that when we run make pack-benchmark, that command first cds into benchmark_src and then zips the contents to ../submission/submission.zip:

cd benchmark_src; zip -r ../submission/submission.zip ./*

It’s a similar pattern with make pack-quickstart. Both of these make commands can be found in the Makefile.
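If you prefer to do the packing from Python rather than the Makefile, here is a rough equivalent of that zip command as a sketch. The function name is mine, not part of the repo; the important detail is that the arcnames are relative to benchmark_src, so main.py lands at the root of the archive.

```python
from pathlib import Path
import zipfile

def pack_submission(src_dir, out_zip):
    """Zip the contents of src_dir so main.py ends up at the archive root,
    mirroring: cd benchmark_src; zip -r ../submission/submission.zip ./*
    """
    src = Path(src_dir)
    out = Path(out_zip)
    out.parent.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            # arcname is relative to src, so there is no leading directory
            zf.write(path, path.relative_to(src))
    return out
```

If you unzip the result and see main.py immediately (not submission/main.py or benchmark_src/main.py), the structure is right.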

I hope this helps.

@vbookshelf – This thread from a previous competition may be useful. I think this describes your situation as well:

Hi @mike-dd,

The info you provided solved my problem. Thank you very much for your help.

Just in case others have the same issue, these are the steps that helped me make a quick submission.

1- Clone the repo:
$ git clone https://github.com/drivendataorg/boem-belugas-runtime.git

2- You will now have a folder called boem-belugas-runtime. Inside this folder there's a folder called benchmark_src. Place your main.py file inside this folder.

3- From the command line cd into the boem-belugas-runtime folder:
$ cd boem-belugas-runtime

4- Then run this code to create the submission.zip file:
$ cd benchmark_src; zip -r ../submission/submission.zip ./*

5- The submission.zip file will be created in the folder called: submission

6- Submit the submission.zip file to the competition for scoring.

7- Remember to remove the query_image_id from each scenario database or the submission will fail. Refer to the note in the “Scenarios, queries, and databases” paragraph that explains why this needs to be done.
df = database_df[database_df['database_image_id'] != query_image_id]
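A toy illustration of that filter, with made-up image ids (a sketch only; the real database DataFrames come from the competition data):

```python
import pandas as pd

# toy scenario database -- image ids here are made up
database_df = pd.DataFrame({"database_image_id": ["img-01", "img-02", "img-03"]})
query_image_id = "img-02"

# drop the query image from its own database before prediction,
# so the query cannot trivially be matched against itself
df = database_df[database_df["database_image_id"] != query_image_id]
print(df["database_image_id"].tolist())  # ['img-01', 'img-03']
```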