| Field | Value |
|---|---|
| Funder | National Science Foundation (US) |
| Recipient Organization | University of Wisconsin-Madison |
| Country | United States |
| Start Date | Jan 15, 2025 |
| End Date | Dec 31, 2029 |
| Duration | 1,811 days |
| Number of Grantees | 1 |
| Roles | Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2440563 |
Stochastic optimization is a fundamental research discipline and a workhorse of learning algorithms. It addresses problems that are stochastic—or random—in nature, such as those arising in the training of machine learning models. The existing theory underlying most learning and optimization algorithms often relies on the simplifying assumption that the data examples observed during training are representative of the data on which the model will be tested or deployed.
However, this assumption rarely holds in practice. For example, a facial recognition model trained on broad U.S. data may exhibit varying performance across states with differing demographics, raising concerns about both accuracy and fairness. Similarly, in e-commerce, customer behavior can shift dynamically in response to pricing strategies.
This research aims to develop robust algorithms capable of handling these dynamic and uncertain data scenarios. The work will advance optimization techniques to address fundamental supervised learning tasks, yielding algorithms with provable error guarantees that are both computationally and data efficient. These advances will enhance our understanding of learning in dynamic and uncertain data environments, which are central to modern machine learning.
Broader impacts include fostering cross-disciplinary collaborations, mentoring students, and organizing a workshop to engage diverse early-career researchers, thereby supporting education, diversity, and innovation in science.
The project will develop new optimization-inspired algorithms to address learning under two models of distributional shifts, forming two main research thrusts. The first thrust focuses on scenarios where the training and testing data distributions differ, studied through the distributionally robust optimization framework. The goal is to train learning models that perform well under worst-case test scenarios within a predefined ambiguity set.
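The worst-case objective described above can be illustrated with a minimal sketch of one common instance of this framework, group DRO, where the ambiguity set is a finite collection of group distributions and training minimizes the loss of the currently worst-off group. All data, group definitions, and step sizes below are synthetic assumptions for illustration, not the project's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic groups with different linear relationships -- a simple
# stand-in for a train/test distribution mismatch across subpopulations.
w_true = {0: np.array([1.0, 2.0]), 1: np.array([1.5, 0.5])}
X = {g: rng.normal(size=(200, 2)) for g in (0, 1)}
y = {g: X[g] @ w_true[g] + 0.1 * rng.normal(size=200) for g in (0, 1)}

def group_loss(w, g):
    r = X[g] @ w - y[g]
    return 0.5 * np.mean(r ** 2)

def group_grad(w, g):
    r = X[g] @ w - y[g]
    return X[g].T @ r / len(r)

w = np.zeros(2)
for t in range(500):
    # Subgradient of max_g loss_g(w): step against the worst group's gradient.
    worst = max((0, 1), key=lambda g: group_loss(w, g))
    w -= 0.1 * group_grad(w, worst)

# The DRO solution trades off the two groups rather than fitting either alone.
worst_case = max(group_loss(w, g) for g in (0, 1))
```

Because the two groups have incompatible optimal weights, the learned `w` settles between them, keeping the worst-case loss bounded instead of sacrificing one group for average performance.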
By leveraging the structured nature of fundamental tasks in regression and classification, this research aims to address limitations of existing approaches, which often rely on overly general assumptions and consequently produce pessimistic results. The second thrust addresses situations where data distributions shift in response to the trained model, such as in performative prediction settings.
The key challenge is to achieve stability under these shifts, which translates into solving stochastic fixed-point equations and nonconvex optimization problems. The focus is on advancing algorithmic techniques to tackle structured learning problems, providing guarantees for tasks such as learning single-index models. The proposed research is expected to contribute novel algorithmic techniques in stochastic optimization and learning theory, particularly in areas such as min-max optimization, stochastic fixed-point problems, and learning under label noise.
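The stability-as-fixed-point idea in the second thrust can be illustrated with a toy repeated-retraining loop: the deployed model shifts the data distribution, and retraining on the shifted data defines a map whose fixed point is a stable model. The linear response model and Gaussian data below are assumptions chosen so the fixed point has a closed form; they are not the project's setting.

```python
import numpy as np

# Toy performative setting (illustrative assumption): deploying theta induces
# data z ~ N(mu + eps * theta, 1), and the loss 0.5 * (theta - z)^2 makes
# retraining return the sample mean. For eps < 1 the retraining map
# theta -> mu + eps * theta is a contraction, so the iteration stabilizes.
mu, eps = 2.0, 0.5
rng = np.random.default_rng(1)

theta = 0.0
for t in range(20):
    # Deploy theta, observe data from the induced distribution, retrain.
    z = rng.normal(loc=mu + eps * theta, scale=1.0, size=10_000)
    theta = z.mean()

# A performatively stable point solves the fixed-point equation
# theta = mu + eps * theta.
theta_star = mu / (1 - eps)
```

The contraction property is what makes repeated retraining converge here; the research challenge in the general setting is obtaining such stability guarantees when the loss is nonconvex and the distribution map is unknown.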
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.