Upload timeout / other ways to submit

My submission failed due to some network error, possibly because of my slow upload speed.

Could you please let me know:

  • if there is an upload time limit on the website;
  • if submissions are possible in a different way (via a CLI or API) instead of the web form?

How long was it running before it timed out? You need a solution that runs in under 2 hours, so if your internet speed is the problem, I'm not sure what the best solution is. I had to use decent-sized batches for my submission to work. It was my 22nd try and first success, so good luck. I'll try to keep watching the thread if you need any help. I've been trying since 6pm yesterday, so I likely feel your frustration.
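For anyone else fighting the 2-hour runtime limit: here is a minimal sketch of the batching approach mentioned above. `transcribe_batch` is a stand-in for whatever batched model call you use, not the platform's actual API, and the batch size is just an illustrative default:

```python
from typing import Callable, Iterable, List

def batched(items: List, batch_size: int) -> Iterable[List]:
    """Yield consecutive slices of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def run_inference(
    samples: List,
    transcribe_batch: Callable[[List], List],  # assumed batched model call
    batch_size: int = 32,
) -> List:
    """Run a (hypothetical) batched model over all samples in order."""
    predictions: List = []
    for batch in batched(samples, batch_size):
        predictions.extend(transcribe_batch(batch))
    return predictions
```

Larger batches amortize per-call overhead; tune `batch_size` to whatever fits in memory on the execution platform.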

Thanks for your reply. I was referring only to the upload time, I wasn’t able to upload the solution at all, so no running time :slight_smile:. I remember that in previous competitions people asked to increase the upload timeout from 1 to 3 hours.

BTW, what size was the model that was able to finish in 2 hours?

Sorry you experienced this @dmitry_v ! Yes, the upload time limit was set to 1 hour. I’ve now upped it to 3 hours. Hopefully that works for you. Unfortunately, there is not another way like a CLI or API to upload your submission.

Thank you! Yes, it helps.

Hello, did you encounter the issue below during the smoke test? I've tried using smaller hyperparameters to reduce the runtime, but I have no clue what else I could do. The logs are not very helpful either. If so, could you help?
Your submission timed out after running for too long on the execution platform. Reduce your runtime to the maximum allowed duration and try again.

@cszc could you help in any way?

The smoke test submission should take less than 4 or 5 minutes to run, because it only runs on 9,000 examples.

That way, the actual submission, which runs on over 205,000 examples, can finish within 2 hours.
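That scaling can be sanity-checked with a quick back-of-the-envelope calculation. The 9,000 and 205,000 example counts are from the posts above; assuming runtime scales roughly linearly with the number of examples:

```python
SMOKE_EXAMPLES = 9_000
FULL_EXAMPLES = 205_000
FULL_LIMIT_MIN = 120  # the 2-hour limit, in minutes

# Full test set is ~22.8x the smoke test
SCALE = FULL_EXAMPLES / SMOKE_EXAMPLES

def projected_full_runtime(smoke_minutes: float) -> float:
    """Project full-submission runtime (minutes) from smoke-test runtime,
    assuming runtime is proportional to the number of examples."""
    return smoke_minutes * SCALE

# A 5-minute smoke test projects to roughly 114 minutes,
# which is only just under the 120-minute limit.
print(projected_full_runtime(5.0))
```

So if your smoke test is already at 5 minutes, there is essentially no headroom for the full run.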


@dzunglt24 is correct, smoke tests should take less than 5 minutes to run. If it cannot run in that time, your full submission is unlikely to complete in the allotted 2 hours.


Thank you for the confirmation!

Could you also confirm whether all the test samples have the correct child_audiolabel_text mapping? You can ignore the noise/background sounds here.

Yes, the test samples have the correct mapping. There may be some background noise, but it is not transcribed/included in the label text.

I’m assuming noisy-WER is computed solely on noisy samples. Could you also clarify whether the public-WER is computed on both noisy and “traindata-like” samples, or just “traindata-like” samples?

Yes, noisy-WER is evaluated solely on noisy samples. The public-WER may contain some samples with noise in them as well.
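For reference, WER is conventionally defined as the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal sketch of that generic definition, not necessarily the exact scoring code this competition uses:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length.
    Assumes a non-empty reference; whitespace tokenization only."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic program over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)
```

So a single substituted word in a three-word reference gives a WER of 1/3; untranscribed background noise simply never appears in the reference and therefore doesn't affect the score.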