Thanks for the previous answers. I also have a couple of questions concerning the data and study design.
I understand that 10 people took part in the study: some appear in the train set, some in the test set, and others in both, and that both the train and test data are scripted.
Was the same script used for train and test?
Also, from what I understood, a certain number of people followed the test script: each participant therefore generated a "long" test dataset (approximately 30 minutes if it is the same script as the train script). However, in the challenge, we only have many "small" test datasets (10-30 seconds each).
Here are my questions to the organizers of the challenge:
- For each participant, how did you convert the "long" test dataset (corresponding to one script) into many "small" test datasets? To be more precise, for each participant, did you keep the entire "long" dataset (i.e. the entire script) and split it into many "small" test datasets? Or, for each participant, did you keep only part of the "long" dataset (i.e. only part of the script) before splitting it into many small test datasets?
- In both cases, how did you split a long dataset (whether it is the entire script or part of it) into multiple small datasets? Did you divide each long dataset into N small datasets of fixed size? Or into a random number of small datasets of random size? …
- Is there an ordering among the 820 small test datasets?
- Why don’t we have IDs for the people who took part in the test?
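To make my splitting question above concrete, here is a minimal Python sketch of the two strategies I am asking about. This is purely my own hypothetical illustration (the function names, chunk lengths, and the 30-minute recording are assumptions), not the organizers' actual pipeline:

```python
import random

def split_fixed(samples, n_chunks):
    """Strategy A: divide one long recording into n_chunks pieces of equal size
    (any remainder at the end is dropped in this simple sketch)."""
    size = len(samples) // n_chunks
    return [samples[i * size:(i + 1) * size] for i in range(n_chunks)]

def split_random(samples, min_len, max_len, seed=0):
    """Strategy B: cut off chunks of random length until the recording is
    exhausted (the final chunk may be shorter than min_len)."""
    rng = random.Random(seed)
    chunks, start = [], 0
    while start < len(samples):
        length = rng.randint(min_len, max_len)
        chunks.append(samples[start:start + length])
        start += length
    return chunks

# Hypothetical example: a 30-minute recording represented at 1 sample/second.
long_recording = list(range(1800))
fixed = split_fixed(long_recording, 60)    # 60 chunks of 30 "seconds" each
varied = split_random(long_recording, 10, 30)  # a variable number of 10-30 s chunks
```

Knowing which of the two strategies was used (and whether chunks are contiguous and ordered, as both sketches assume) would affect how we can model temporal context across the small test datasets.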
Thank you very much for your answers!