Rules clarification: external data (checkpoints?), reproducibility, manual curation/scoring?

Hi there! I've been reading through the rules, but I still have some doubts.

- I understand external data isn't allowed, but what about model checkpoints? Aren't they also external data? What if a team takes an externally trained checkpoint and uses it? And what if they trained that checkpoint themselves, outside the competition, on similar data? This isn't easy to reason about.
- Is manual curation of the provided data allowed (e.g. fixing labels, manual classification)? That would also amount to a kind of external data, produced by the team during the competition.
- I'd like to confirm that, to be eligible for prizes, the submission must be exactly reproducible by executing the submitted code. If so, are there any time or resource constraints (GPUs, etc.)?
- Related to the previous point: I assume manual labelling or adjusting of the submission predictions isn't allowed, correct?

Many thanks, and good luck to everyone!

Thank you for your questions. All results must be reproducible, and manual data labelling is not allowed. There are no set constraints on time or resources.
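
As a practical illustration only (a minimal sketch, assuming PyTorch; no particular framework is required by the rules), pinning every random seed at the start of your pipeline is the usual first step towards reproducible results:

```python
# Minimal reproducibility sketch, assuming PyTorch (the framework
# choice is an assumption, not a competition requirement).
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Pin all common sources of randomness for repeatable runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Prefer deterministic kernels where available; some GPU ops may
    # still need extra setup (e.g. the CUBLAS_WORKSPACE_CONFIG env var).
    torch.use_deterministic_algorithms(True, warn_only=True)

set_seed(42)
```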

You are welcome to use pre-trained computer vision models in your solution provided they were available freely and openly in that form at the start of the competition.
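
For example, a minimal sketch of that, assuming PyTorch/torchvision (the specific model, weights version, and class count below are illustrative, not prescribed):

```python
# Illustrative sketch, assuming torchvision; the model, weights,
# and class count are examples, not competition requirements.
import torch
from torchvision import models

# ImageNet weights for ResNet-50 have been freely and openly
# available for years, so a checkpoint like this would qualify.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Fine-tuning on the *provided* competition data stays within the
# rules; only the pre-trained weights come from outside.
num_classes = 10  # hypothetical number of target classes
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
```

The key point is that the weights were published openly in that form before the competition started; fine-tuning them on the provided data is then fine.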

Good luck!