| Funder | National Science Foundation (US) |
|---|---|
| Recipient Organization | University of California-Berkeley |
| Country | United States |
| Start Date | Jan 01, 2025 |
| End Date | Jul 31, 2027 |
| Duration | 941 days |
| Number of Grantees | 1 |
| Roles | Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2516270 |
Enormous health inequality persists in the United States, and it predates the COVID-19 pandemic. In many areas of the country, people with higher incomes live up to a decade longer than those at the lowest income levels.
Additionally, the pandemic itself has hit low-income and under-served populations especially hard. Biased medical decision-making contributes to this health inequality. For example, previous work has shown that one widely used health risk prediction algorithm assesses African-American patients as less sick than equivalently sick White patients.
This research will make medical decision-making fairer by statistically analyzing the decisions made both by humans and by algorithms. The research will identify sources of bias (for example, when medical tests are given to patients with better access to healthcare rather than to patients most likely to have a disease), and propose solutions (for example, reallocating tests to patients who are predicted to have the highest disease risk).
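The proposed solution above, reallocating a fixed testing budget toward the patients with the highest predicted disease risk, can be sketched in a few lines. This is an illustrative toy, not the project's actual methodology; the patient data, risk scores, and function name are all hypothetical.

```python
import heapq

def allocate_tests(patients, n_tests):
    """Allocate a fixed testing budget to the patients with the highest
    predicted disease risk, rather than by access to healthcare.

    `patients` is a list of (patient_id, predicted_risk) pairs;
    returns the ids of the n_tests highest-risk patients.
    """
    top = heapq.nlargest(n_tests, patients, key=lambda p: p[1])
    return [pid for pid, _ in top]

# Hypothetical cohort: a 2-test budget goes to the two highest-risk patients.
cohort = [("a", 0.10), ("b", 0.72), ("c", 0.35), ("d", 0.64)]
print(allocate_tests(cohort, 2))  # → ['b', 'd']
```

In practice the risk scores would themselves come from a model that must be audited for bias, which is exactly the circularity the grant's first objective addresses.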
This will not only make healthcare fairer; it can also make it more efficient, by allocating medical resources where they will do the most good. The project will also create a publicly available class on how to design fair algorithms, and conduct a large-scale study of how engineers can be trained to design fairer algorithms, to improve the preparedness of the engineering workforce.
Because important medical decisions are made both by humans and by algorithms, the research pursues three objectives: 1) detecting bias in human medical decision-making, focusing on three high-stakes settings: allocation of medical testing, healthcare quality assessment, and interpretation of medical images; 2) building algorithmic decision aids that reduce human bias by drawing clinicians' attention to medically relevant features they may have overlooked; and 3) making algorithmic decision-making more equitable by examining which features it is appropriate to include in a medical algorithm.
The research will be conducted in collaboration with clinicians to maximize translational benefit to patients. The methods developed, which draw on techniques in Bayesian inference and deep learning to provide interpretable models of how bias arises, apply more generally to decision-making across a host of high-stakes domains, including lending and hiring, and thus can impact a wide range of fields concerned with equity in decision-making, including law and economics.
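One simple statistic used in the discrimination literature for detecting bias of this kind is an outcome-test-style comparison: among patients who *were* tested, compare the fraction of positive results per group. Markedly different positivity rates suggest the testing threshold differs across groups. The sketch below is illustrative only (the record format and function name are invented, and the project's actual statistical machinery is far richer):

```python
def positivity_by_group(records):
    """Among tested patients, compute the fraction of positive results
    per group. `records` is a list of (group, tested, positive) tuples.

    If one group's tested patients turn out positive much less often,
    that group may be tested at a lower risk threshold (over-tested
    relative to risk), a signature of biased test allocation.
    """
    counts = {}
    for group, tested, positive in records:
        if tested:
            n, pos = counts.get(group, (0, 0))
            counts[group] = (n + 1, pos + int(positive))
    return {g: pos / n for g, (n, pos) in counts.items()}

# Hypothetical records: group A's tested patients are positive 2/3 of
# the time, group B's only 1/3 of the time.
records = [
    ("A", True, True), ("A", True, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", True, True),
    ("B", False, False),  # untested patients are excluded
]
print(positivity_by_group(records))
```

Interpreting such gaps causally is subtle (untested patients' outcomes are unobserved), which is one reason the project pairs statistical analysis with clinician collaboration.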
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.