| Field | Value |
|---|---|
| Funder | National Science Foundation (US) |
| Recipient Organization | Columbia University |
| Country | United States |
| Start Date | May 15, 2021 |
| End Date | Apr 30, 2024 |
| Duration | 1,081 days |
| Number of Grantees | 5 |
| Roles | Co-Principal Investigator; Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2040971 |
Artificial Intelligence (AI) plays an increasingly prominent role in modern society, as decisions once made by humans are now delegated to automated systems. These systems already help decide bank loans, criminal sentencing, and hiring, and it is not difficult to envision a future in which AI underpins much of society's decision-making infrastructure.
Despite the high stakes, some basic properties of such systems, including fairness and transparency, remain poorly understood. For instance, there is a proliferation of criteria and methods for accounting for unfairness in decision-making, but choosing a metric that an AI system must satisfy in order to be deemed fair remains an elusive, almost daunting task.
Moreover, these choices are almost invariably made in an arbitrary fashion, without much justification or rationale. In this project, we will develop the mathematical foundations for (1) assisting data scientists in analyzing the existence and, possibly, the magnitude of unfairness in an already deployed decision system, and (2) guiding system designers in selecting a fairness criterion for a to-be-deployed system while guaranteeing an established level of fairness and accuracy.
This proposal aims to make both foundational and methodological contributions toward the goal of causally fair decision-making. At the foundational level, we build on causality theory to elicit the principles necessary to formally understand the problem of fairness, which is intertwined with the true causal mechanisms underlying the data. In particular, we study various measures of fairness available in the literature and their detection and explanatory power relative to the unobserved causal mechanisms.
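For concreteness, one well-known causal fairness criterion from the literature (not necessarily the one this project adopts) is counterfactual fairness, which requires that a prediction would not change under a counterfactual intervention on the protected attribute:

```latex
% Counterfactual fairness: for an individual with features X = x and
% protected attribute A = a, the predictor \hat{Y} must satisfy
P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big)
  = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big)
\quad \text{for all outcomes } y \text{ and attribute values } a',
% where U denotes the exogenous background variables of the causal model
% and A \leftarrow a' denotes an intervention setting A to a'.
```

Criteria of this kind are defined relative to a structural causal model, which is why their detection and explanatory power depend on the unobserved causal mechanisms.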
On the methodological side, we aim to bridge the gap between causal analysis and scalable machine learning through novel ideas for efficient estimation, prediction, and optimization under causal fairness measures. These include weighted empirical risk minimization methods for estimating causal fairness measures from offline data, active learning and exploration techniques for hybrid (offline and online) learning, robust optimization methods to handle model misspecification, and reinforcement learning techniques for understanding the long-term impact of fair and unfair policies.
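The weighted-estimation idea behind such methods can be illustrated with a toy sketch (the data-generating process, variable names, and the specific disparity measure below are all hypothetical, not the project's actual method): inverse-propensity weights reweight logged offline data so that a group-level quantity can be estimated as if observed over the whole population.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic offline data: feature x, protected attribute a, outcome y.
n = 5000
a = rng.integers(0, 2, n)                 # protected attribute in {0, 1}
x = rng.normal(a * 0.5, 1.0, n)           # feature correlated with a
y = (x + rng.normal(0, 0.5, n) > 0.25).astype(float)

# A logging policy recorded decisions d with known propensities e(x);
# outcomes are observed only where d = 1 (selective labels).
e = 1.0 / (1.0 + np.exp(-x))              # propensity of d = 1
d = (rng.random(n) < e).astype(int)

# Inverse-propensity weights undo the logging policy's selection,
# so weighted averages target the full population.
w = d / e                                 # zero where unobserved, reweighted elsewhere

# Weighted estimate of a simple group disparity in mean outcome.
def group_mean(g):
    mask = a == g
    return np.sum(w[mask] * y[mask]) / np.sum(w[mask])

disparity = group_mean(1) - group_mean(0)
print(f"IPW-estimated disparity: {disparity:.3f}")
```

In the project's terminology, weights of this kind would enter a weighted empirical risk minimization objective, so that the learned predictor trades off accuracy against a causally defined fairness measure estimated from offline data.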
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.