Responsible AI Track dedicated thread

Hi all,

I’m Dave from GFDRR Labs and one of the challenge organizers. We’re excited to be running this competition, and especially this Responsible AI track, which offers $3K in prizes for your best ideas exploring the appropriate development and use of ML for disaster risk management.

To get started, please review the Responsible AI page for a detailed overview of what this track is about and why, with links to more resources. Given the novel nature of this challenge prompt and the flexible submission format, we’re sure to have many questions and discussions.

Please use this dedicated thread to share your questions, useful resources, or other relevant thoughts. We will do our best to promptly answer any clarification questions.

Thanks and good luck!
Dave


Reposting the announcement here:

I’m happy to provide private, neutral feedback on Responsible AI submissions-in-progress made by this coming Tuesday, March 10th, at midnight UTC. Limit one submission per participant.

In order to take advantage of this offer, you must:

  • Submit a link to your Responsible AI work-in-progress
  • Send me a DM with that link before the deadline above

Feedback will be provided on a first-come, first-served basis, with everyone who submits on time receiving a response by next Friday, March 13th, at midnight UTC.

Look forward to seeing your work-in-progress!

And if you have any general questions about the Responsible AI track, please post below and we’ll address them publicly as they come. Thanks!

REMINDER: If you would like to receive feedback on your Responsible AI work-in-progress by Friday, which may help strengthen your submission, please submit a link to your work and send me a DM about it by the end of today (Tuesday, March 10) UTC!

Remember that if you’re participating in the segmentation track, you MUST also submit work to the Responsible AI track in order to be eligible for the $12K in total segmentation prizes (and to contend for the $3K in prizes in the Responsible AI track!). You can also choose to participate only in the Responsible AI track.

We’re almost there! Please remember to submit your Responsible AI entry before the deadline!

For reference, your entry will be judged on the following criteria:

  • Thoughtfulness (40%):
    • Depth of inquiry goes beyond the superficial, e.g. into second-order consequences
    • Synthesizes multiple, often competing ideas and principles
    • Acknowledges contradictions and tradeoffs
  • Relevance (20%):
    • Applies an ethical lens specifically to a disaster risk management use case
    • Data sources considered include OpenStreetMap building labels and overhead imagery from drones and satellites
  • Innovation (20%):
    • Goes beyond well-established methods or uses them in novel ways to tackle the question
    • Takeaways are insightful, thought-provoking, and actionable
  • Clarity (20%):
    • Clearly communicates the problem(s), approach, or issues explored in the submission
    • Understandable to a non-technical reader

And here’s a non-exhaustive list of relevant projects and readings to help inspire your thinking on how to responsibly create and apply AI for disaster risk management:

Look forward to seeing what you come up with!