A Q&A event was held on September 6th, 2023 from 1:30 to 2:30 PM ET via Zoom. Thank you to the solvers who attended the Q&A event and submitted great questions, and thank you to our partners at the CDC who shared answers and insights with us!
The event was recorded, and the recording is shared with all solvers on the data downloads page for the challenge. Below, the questions and paraphrased answers are posted, with the relevant timestamps in the video noted.
What are the topic areas other than older adult falls where you see potential of machine learning methods to understand and prevent injuries? 5:10
There are many ways that machine learning methods are being applied to injury prevention. In the 3-4 years that this team has been in existence, we've applied machine learning to a variety of topics. For example, we've built a machine learning pipeline to better understand real-time suicide trends, and we've used NLP methods to extract alcohol circumstances from narratives and then to understand trends among alcohol-involved falls, traumatic brain injuries, and motor vehicle injuries. Most of our work focuses on supervised ML, but we do unsupervised ML as well, like topic modeling to extract information about the circumstances of suicides from free-text data.
The video contains a more detailed answer.
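As a rough illustration of the unsupervised approach mentioned above, here is a minimal topic-modeling sketch using scikit-learn's LDA on a toy corpus. The library, model choice, and parameters are our assumptions for illustration, not the CDC team's actual pipeline.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus standing in for free-text injury narratives.
narratives = [
    "FELL DOWN TWO STEPS THIS MORNING FRACTURED WRIST",
    "SLIPPED ON ICY SIDEWALK AND HIT HEAD CLOSED HEAD INJURY",
    "TRIPPED OVER RUG AT HOME FELL AND FRACTURED HIP",
    "FELL OFF LADDER WHILE CLEANING GUTTERS SHOULDER PAIN",
    "LOST BALANCE GETTING OUT OF SHOWER FELL STRUCK HEAD",
]

# Bag-of-words features; CountVectorizer lowercases by default.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(narratives)

# Fit a small topic model; n_components is a tuning choice.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words per topic as a rough summary of circumstances.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_words = [vocab[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top_words)}")
```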
What are some of the best practices for developing and deploying machine learning models for public health applications? 9:00
Our best practices for working with and analyzing data in general mostly focus on the ethical use and storage of data, and preserving the integrity and quality of the data. Many of the datasets we work with have very strict data use agreements. For deploying machine learning models, standards are new and evolving.
The video contains a more detailed answer.
Does the CDC collaborate on public health with other nations? Do any of these international collaborations use or build off of findings from machine learning and data science? 11:40
Note: We received a more specific question. The question that was asked was: “How is CDC implementing or introducing Machine learning Models for Public Health in Africa where the implementation for ML is at a lower rate”; and this was reflected in the answers given.
The CDC is a very collaborative agency. We collaborate at the state and local level, and with international health administrations across the world, including via field staff deployed across the world and in lower-income environments. We are focused on increasing data science and machine learning capacity with these nations in particular. We want to democratize and share the work we do, given that health needs are often common across nations. The next step is making sure that we are applying and sharing our data science practices once they are more established within the CDC.
The video contains a more detailed answer.
Can we include a video in our submission? 13:12
No. Details on the submission format and requirements are here: https://www.drivendata.org/competitions/217/cdc-fall-narratives/page/763/#submission-format
Will midpoint submissions be rewarded at the end of the challenge or prior? 13:15
All prizes will be distributed at the end of the challenge, but selected finalists will be announced sooner. All finalists will need to have eligibility verified before payment is disbursed.
By when can we expect the feedback for the midpoint submission? 14:12
Group feedback will be shared with all participants on the forum the week of September 11 or sooner.
What payment options will be used? 14:52
We’ll work with winners to determine a payment option that makes sense.
Do you require that the notebook has a specific runtime? For instance, GPU notebook <= 9 hours runtime 15:12
No. While there's no limit on runtime, we encourage you to use only the amount of compute that is needed. For example, if you're running a model that requires multiple GPUs, there should be a clear reason why a single-GPU model is not sufficient.
How is the severity of fall defined? 15:27
There is no one correct way to operationalize severity, and part of the work of this challenge is finding effective methods of engineering features like those related to severity.
Related tip: In using disposition, the CDC team combines transfers to other hospitals with hospitalizations (given the low frequency of transfers).
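As a concrete (and hypothetical) sketch of that tip, here is one way a severity feature could be engineered from disposition codes in pandas. The file name, column name, and code groupings are assumptions drawn from the public NEISS coding manual, not the CDC team's implementation:

```python
import pandas as pd

# Hypothetical sketch: derive a coarse severity feature from NEISS
# disposition codes (check the coding manual for the authoritative list).
# Per the tip above, transfers (2) are grouped with hospitalizations (4)
# because transfers are rare.
SEVERITY_MAP = {
    1: "treated_and_released",
    2: "hospitalized",  # treated and transferred, grouped with admissions
    4: "hospitalized",  # treated and admitted
    5: "held_for_observation",
    6: "left_without_being_seen",
    8: "fatality",
    9: "unknown",
}

df = pd.read_csv("primary_data.csv")  # assumed challenge file name
df["severity"] = df["disposition"].map(SEVERITY_MAP)
print(df["severity"].value_counts())
```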
Regarding the provided OpenAI embeddings: just to confirm, narratives were lowercased before being passed through the model and nothing else was modified? 17:00
This is correct. For the OpenAI embeddings on the data download page, the only preprocessing done was lowercasing.
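For reference, here is a minimal sketch of reproducing that preprocessing with the OpenAI Python library (pre-1.0 interface, current at the time of this Q&A). The model name is our assumption; confirm the exact model used on the data download page:

```python
import openai  # openai<1.0 interface

openai.api_key = "sk-..."  # your API key

def embed_narratives(narratives, model="text-embedding-ada-002"):
    """Lowercase only -- the sole preprocessing step described above."""
    lowered = [text.lower() for text in narratives]
    response = openai.Embedding.create(input=lowered, model=model)
    return [record["embedding"] for record in response["data"]]

vectors = embed_narratives(["FELL DOWN TWO STEPS THIS MORNING FRACTURED WRIST"])
print(len(vectors[0]))  # ada-002 embeddings are 1536-dimensional
```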
Can you provide the time of day for each case? 19:23
No, that information is not available.
Any insights on how narratives were coded? Manual process? Were they done by X different researchers? (Is it correct that they were not done by hospital staff?) 20:06
Narratives were coded by people (called abstractors) who review the medical record and pull out information relevant to the NEISS dataset, following the NEISS coding manual (linked in the problem description).
Related tip: Typically there is one abstractor per hospital, and artifacts of an abstractor's linguistic habits may show up in clustering. (Hospital and abstractor keys are not included in the challenge data or the public NEISS data.)
The video contains a more detailed answer.
Follow-up: what might explain missing white spaces in the narratives? e.g. “FELL DOWN TWO STEPS THIS MORNINGFRACTURE WRIST” 23:32
Any missing whitespace (i.e., a place where there should be a space but isn't) is likely due to human error and not a systematic procedure like transcription or preprocessing.
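If those artifacts matter for your approach, one possible (unofficial) repair heuristic uses the third-party wordninja package, which splits run-together words by English word frequency. This is our suggestion, not something the CDC applied, and it can mis-split clinical abbreviations:

```python
import wordninja  # pip install wordninja

def repair_spacing(narrative: str) -> str:
    """Best-effort repair of missing spaces, e.g. 'MORNINGFRACTURE'."""
    tokens = []
    for token in narrative.split():
        # wordninja scores candidate splits with English unigram frequencies,
        # so clinical abbreviations (DX, FX, PT, ...) may split oddly --
        # review the output before relying on it.
        tokens.extend(wordninja.split(token))
    return " ".join(tokens)

print(repair_spacing("FELL DOWN TWO STEPS THIS MORNINGFRACTURE WRIST"))
# Expected: 'FELL DOWN TWO STEPS THIS MORNING FRACTURE WRIST'
```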
What types of models do you want? Prediction or only data analysis? [There was also a suggestion in the chat that a UI was required.]
The focus of this challenge is machine learning models. These can be supervised or unsupervised, but will generally fall in the domain of NLP. We encourage visualizations in your submitted Jupyter notebook, but these should be explanatory rather than a built-out UI.
What strategies are currently pursued for injury prevention? 26:04
We’ll provide an answer about strategies to prevent older adult falls specifically. You can learn more here: https://www.cdc.gov/falls/index.html.
The core injury prevention strategies for older adult falls are improving strength and balance, home modifications (e.g., grab bars in bathrooms), medication management, and treatment for vision issues. The CDC takes a clinical approach in which people are assessed, or assess themselves, for risk and then see relevant providers like physical therapists, occupational therapists, pharmacists, or ophthalmologists. Other organizations take other approaches, like working with communities.
The video contains a more detailed answer.
In terms of future outlook: how might contestants’ solutions be used in future by CDC? 28:40
We are looking for creative and effective strategies to extract circumstance information from medically abstracted narratives. These strategies may be relevant to other datasets and other injuries like motor vehicle crashes and overdoses. We are also interested in the substantive findings about older adult falls, which can potentially inform later prevention efforts.
The video contains a more detailed answer.
Can we publish our model and methodology if we do not win the competition? 32:08
The relevant rule to be aware of is that solvers are not to share their solutions with people outside of their team during the challenge. After the challenge, you are free to do what you want with your work given that the data is all open, so feel free to share your models and methodology at the end of the competition! We put the code for the winning solutions in an open-source competition-winners repo on GitHub.
You can find all the winning code from previous DrivenData competitions here: https://github.com/drivendataorg/competition-winners.
Can we submit a data dashboard? 33:39
You can submit something like a dashboard if it's in a notebook and otherwise adheres to the format requirements (see here: https://www.drivendata.org/competitions/217/cdc-fall-narratives/page/763/#submission-format).
For an LLM-based approach, calls to OpenAI's ChatGPT are expensive in terms of time due to request/response latency and the rate limits imposed by OpenAI. Developing a locally deployable model like Falcon-7B is also costly timewise on a low-end GPU, especially given the simple but numerous single-line narratives. What kind of accuracy vs. speed tradeoff are you looking for? Accuracy without much consideration for speed, in which case is using a small random subset for proof of concept a good idea? Or is going through all the narratives from the primary and secondary files at reduced accuracy a better idea? (Not asked in session)
The evaluation criteria for this challenge focus on novelty, communication, rigor, and insight. Ultimately, the appropriate trade-off for your specific solution is a judgment call. Sufficient narrative examples should be used to demonstrate that the methodology is broadly applicable (rigor) and generates useful findings (insight). We do appreciate that there are costs to using OpenAI’s ChatGPT. The starter code for a Falcon-7B model on the community code page and the precomputed OpenAI embeddings on the data download page are intended as levers to increase the accessibility of LLMs.
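If you go the proof-of-concept route, a reproducible random subset is easy to set up; here is a minimal sketch in pandas, with an assumed file name:

```python
import pandas as pd

# Assumed file name -- see the challenge data download page for actual names.
primary = pd.read_csv("primary_data.csv")

# A fixed random_state makes the subset reproducible, which supports the
# rigor criterion: judges and teammates can re-run on exactly the same cases.
poc_sample = primary.sample(n=1000, random_state=42)
poc_sample.to_csv("narratives_poc_sample.csv", index=False)
```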