Questions about the adversary model, expected formality of security proofs, and performance metrics

One thing that’s not clear to me is the intended adversary model. It’s sensible of course to define privacy in terms of what information can’t be learned by anyone apart from the data provider, but what’s missing is some definition of what an adversary might be willing to do, or has the vantage point to do, in an attempt to learn details of that sensitive data. Should we think of the adversaries as being willing to maliciously break a protocol they participate in? Or is it better to think of the parties involved as having the best intentions, and not wanting to have information leak accidentally?
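For concreteness, here is a minimal sketch of the kind of distinction I mean, written in Python with hypothetical parameters: an additive secret-sharing “secure sum” in which no single party’s view reveals another party’s input, provided everyone follows the protocol (the honest-but-curious setting) - whereas a maliciously deviating party could submit garbage shares and corrupt the result undetected.

    import secrets

    P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

    def share(value, n_parties):
        """Split `value` into n_parties random shares that sum to it mod P."""
        shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    def secure_sum(private_inputs):
        """Honest-but-curious secure sum: party j sees only share j of each
        input, and a single share is uniformly random, revealing nothing.
        Nothing here detects a party that submits inconsistent shares."""
        n = len(private_inputs)
        all_shares = [share(v, n) for v in private_inputs]
        # Party j locally sums the shares it holds and publishes the result.
        local_sums = [sum(all_shares[i][j] for i in range(n)) % P
                      for j in range(n)]
        return sum(local_sums) % P

    print(secure_sum([10, 20, 30]))  # 60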

A second thing that’s not quite clear is how “formal” an argument of security for a solution should be. On one hand, I can imagine participants arguing informally - for example, by constructing an argument that anonymization of data effectively (but not mathematically) prevents re-identification. On the other hand, I can imagine other participants arguing rigorously about cryptographic certainty of security, and whether those arguments rest on “standard assumptions” or not. A red-team evaluation may fail to find vulnerabilities that exist even when the security arguments are incorrect, making a red-team approach of only limited use. What kind of rigor is wanted in those arguments?
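To make the informal end of that spectrum concrete, here is a toy sketch (with entirely made-up records) of why “anonymization prevents re-identification” is an empirical claim rather than a mathematical one: removing names leaves quasi-identifiers that can be linked against an auxiliary public dataset.

    # "Anonymized" medical records: names removed, quasi-identifiers kept.
    anonymized = [
        {"zip": "02139", "birth_year": 1985, "sex": "F", "diagnosis": "asthma"},
        {"zip": "02139", "birth_year": 1972, "sex": "M", "diagnosis": "diabetes"},
    ]

    # Public auxiliary data (e.g., a voter roll) with the same attributes.
    voter_roll = [
        {"name": "Alice", "zip": "02139", "birth_year": 1985, "sex": "F"},
        {"name": "Bob",   "zip": "02139", "birth_year": 1972, "sex": "M"},
    ]

    # Linkage attack: join the two datasets on (zip, birth_year, sex).
    for record in anonymized:
        key = (record["zip"], record["birth_year"], record["sex"])
        matches = [v["name"] for v in voter_roll
                   if (v["zip"], v["birth_year"], v["sex"]) == key]
        if len(matches) == 1:  # unique match -> re-identified
            print(f"{matches[0]} has {record['diagnosis']}")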

A third thing that seems not quite clear is the expected efficiency. Some solutions I can envision would likely be very inefficient - homomorphic encryption as the underlying framework for a poorly optimized application is one example. It seems to me that without some guidelines on expected efficiency, selecting Challenge winners will be difficult. Basing efficiency estimates on asymptotic analysis is often misleading, due to the wide variation in the constants hidden by asymptotic characterizations. What kind of performance metrics are being considered in evaluating entries?
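To illustrate why the constants matter, here is a minimal additively homomorphic (Paillier-style) sketch in pure Python with deliberately toy key sizes; even at this insecure scale, a single encrypted addition costs modular exponentiations that are orders of magnitude slower than the plaintext addition it replaces, and realistic key sizes widen the gap dramatically.

    import math, secrets, time

    # Toy Paillier keypair (insecure key size, for illustration only;
    # real deployments use primes of ~1024 bits or more).
    p, q = 1000003, 1000033
    n, n2 = p * q, (p * q) ** 2
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # lam^{-1} mod n

    def encrypt(m):
        r = secrets.randbelow(n - 1) + 1
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    def he_add(c1, c2):
        # E(m1) * E(m2) mod n^2 = E(m1 + m2): addition on ciphertexts
        return (c1 * c2) % n2

    t0 = time.perf_counter()
    total = decrypt(he_add(encrypt(17), encrypt(25)))
    t1 = time.perf_counter()
    assert total == 42
    print(f"encrypted add: {t1 - t0:.6f}s vs. a plaintext add: ~nanoseconds")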

Thanks, David, for the excellent questions (and sorry for some delay in responding). Below are our quick responses. We plan to further clarify these details in the Evaluation Methodology (especially for Phase 2) that we will produce before the start of Phase 2.

  1. Participants are free to choose the threat/adversary model and should clearly articulate any assumptions (e.g., related to trust in any component). But it is important that appropriate justification is provided from the perspective of practical challenges as well as other considerations. Note that some threat models are generic (e.g., membership inference; see the first sketch after this list), but the proposed privacy approaches may influence them or can introduce additional issues. Claims of privacy guarantees/strengths should be properly articulated and justified.

  2. Evaluation will take into account the tradeoff between the adversary model and efficiency - targeting a more challenging adversary model will likely result in a higher score in the privacy category, while the difficulty of achieving better accuracy (beyond practically acceptable levels) will influence the score on efficiency/scalability. Note also that the “innovation” score will capture how the solution advances a specific privacy technology (e.g., how MPC- or HE-based solutions compare to the state of the art).

  3. A more formal justification (e.g., a proof) is likely to score higher in the Privacy category than informal arguments (see the second sketch after this list). Red-teaming reports will be used for final winner selection – the rigor of the Red teams will also be evaluated by the judges (there is competition among the Red teams as well). If Red-team attacks find no bug in two separate but similar-category solutions (e.g., MPC + DP based solutions), the one with a more formal proof of its privacy guarantees is expected to score better than the one with only informal justification.

  4. Solutions will be compared within bins of similar [privacy] technologies and against related state-of-the-art solutions – for all criteria (privacy, efficiency, etc.); participants will need to provide justification for these as well. Phase 2 will evaluate concrete efficiency (not just asymptotic). This will be hard to evaluate in Phase 1, but you’re more likely to be successful in Phase 2 if you target approaches that will have good concrete efficiency in the implementation. For Phase 1, you are also welcome to include initial efficiency/performance results based on the datasets and your early implementations.
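As an illustration of the generic threat model mentioned in point 1, here is a minimal membership-inference sketch (using scikit-learn on synthetic data; the model and threshold are arbitrary choices): an overfit model assigns higher confidence to its training records, and that gap alone lets an attacker guess membership.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 20))
    y = (X[:, 0] + rng.normal(scale=2.0, size=400) > 0).astype(int)
    X_train, y_train, X_out, y_out = X[:200], y[:200], X[200:], y[200:]

    # Deliberately overfit model: it effectively memorizes its training data.
    model = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

    def confidence(X, y):
        # Model's predicted probability on the true label, per example.
        return model.predict_proba(X)[np.arange(len(y)), y]

    # Membership guess: "in the training set" if confidence exceeds a threshold.
    threshold = 0.9
    tpr = (confidence(X_train, y_train) > threshold).mean()  # members flagged
    fpr = (confidence(X_out, y_out) > threshold).mean()      # non-members flagged
    print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")  # a large gap means leakage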
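And as an example of the kind of mechanism referred to in point 3 that admits a formal proof, here is a minimal Laplace-mechanism sketch: a counting query has sensitivity 1, so Laplace noise with scale 1/epsilon yields a provable epsilon-DP guarantee rather than an empirical claim (the data and epsilon below are arbitrary).

    import numpy as np

    def laplace_count(data, predicate, epsilon, rng=np.random.default_rng()):
        """epsilon-DP count: adding or removing one record changes the true
        count by at most 1 (sensitivity 1), so Laplace noise with scale
        1/epsilon satisfies epsilon-differential privacy by a standard proof."""
        true_count = sum(1 for x in data if predicate(x))
        return true_count + rng.laplace(scale=1.0 / epsilon)

    ages = [34, 29, 58, 41, 63, 37]
    print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5))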
