This competition seems to carry an implicit hardware requirement: either a model with few operations per inference or hardware with high FLOPS (not that competition models must use floating-point operations, but FLOPS is a conveniently known measure of relative processing speed), combined with a high internet download speed.
The requirement arises from the 48-hour phase 2 window, in which a team must download 50,000 previously unseen query images and match each of them against 1 million index images that can't be pre-compressed into smaller files.
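As a rough sanity check on that window, a minimal back-of-envelope sketch of the throughput budget. The 500 KB average image size is a hypothetical assumption (the actual size isn't stated here); the 50,000 / 1 million / 48-hour figures come from the description above.

```python
# Back-of-envelope budget for the 48-hour phase 2 window.
QUERIES = 50_000          # previously unseen query images
INDEX = 1_000_000         # index images to match against
WINDOW_S = 48 * 3600      # 48-hour window, in seconds
IMG_BYTES = 500 * 1024    # ASSUMED average image size (not stated)

# Sustained download rate needed just to fetch the queries in time.
download_rate_mbps = QUERIES * IMG_BYTES * 8 / WINDOW_S / 1e6

# How fast queries must be processed, and the brute-force pairwise rate.
queries_per_second = QUERIES / WINDOW_S
comparisons_per_second = QUERIES * INDEX / WINDOW_S

print(f"sustained download:    {download_rate_mbps:.2f} Mbit/s")
print(f"queries per second:    {queries_per_second:.3f}")
print(f"brute-force pairs/sec: {comparisons_per_second:,.0f}")
```

Under these assumptions the download side is modest, but a brute-force match implies hundreds of thousands of image comparisons per second sustained for two days, which is where weaker hardware falls short.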
I've concluded that my existing hardware isn't up to the task in that timeframe. Still, I'd argue that preparing and designing for the phase 2 submission is a major part of the competition.