| Funder | National Science Foundation (US) |
|---|---|
| Recipient Organization | University of Massachusetts Boston |
| Country | United States |
| Start Date | Jan 01, 2025 |
| End Date | Dec 31, 2026 |
| Duration | 729 days |
| Number of Grantees | 1 |
| Roles | Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2430699 |
This project is motivated by the way humans learn when collaborating. Collaboration between skilled human learners with diverse backgrounds is a primary way that new knowledge is generated when ground-truth information is not available. Collaborating learners should be skilled so that their contributions to the group are sound.
The requirement that the set of learners be diverse is premised on the idea that the individual contributions of one learner may shed light on things unknown to another. For example, in cutting-edge scientific research, this situation arises when there is no known solution to a challenging problem. If an individual researcher cannot make progress, the researcher will tend to exchange ideas and collaborate with other researchers (sometimes from different disciplines).
In this collaborative ensemble learning setting, individual learners inspire and gain knowledge from each other. We aim to do the same in this research for learning algorithms working together to solve problems. This project will directly promote undergraduate and graduate research and training; encourage participation from women and members of underrepresented groups; and impact Computer Science (CS) and non-CS curricula and courseware broadly and sustainably.
This project aims to emulate the benefits that a group of learners gains from solving a problem collaboratively and learning from their combined output. Combining the outputs of pre-trained models (a practice known as ensemble learning) is well established in methods such as Bagging and Bayesian Model Averaging as a way to mitigate the errors of component models.
However, compared to the dynamic, mutually inspiring, and individually beneficial human collaboration process, current ensemble learning methods such as majority voting are often static, applied once as a final step for a small performance boost. The main idea of this work is that, given a set of diverse pre-trained classifiers, their combined ensemble output can serve as a form of pseudo-label from which the individual classifiers can learn.
These pseudo-labels can serve as a secondary optimization signal, especially when the primary ground-truth label signal is unavailable. The overall goal of this project is to use the diversity of knowledge present across a set of pre-trained models to compensate for the bias of any particular model and reduce average generalization error of all member models in the process.
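The mechanism described above can be illustrated with a minimal numpy sketch (not code from the project itself; the data, the logistic-regression members, and the thresholded averaging are illustrative assumptions): several pre-trained classifiers average their predicted probabilities on unlabeled data, and each member then continues training on the resulting pseudo-labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, w=None, lr=0.1, steps=200):
    """Fit (or fine-tune) a logistic-regression weight vector by gradient descent."""
    n, d = X.shape
    if w is None:
        w = np.zeros(d)
    w = w.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / n
    return w

# Synthetic binary classification data (illustrative only).
n, d = 300, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (sigmoid(X @ true_w) > 0.5).astype(float)

X_lab, y_lab = X[:60], y[:60]   # small labeled set
X_unlab = X[60:]                # unlabeled pool (no ground truth available)

# A diverse ensemble: members trained on bootstrap resamples of the labeled set.
members = []
for _ in range(5):
    idx = rng.integers(0, len(X_lab), len(X_lab))
    members.append(train_logreg(X_lab[idx], y_lab[idx]))

# The combined ensemble output (averaged member probabilities), thresholded,
# acts as a pseudo-label on the unlabeled pool.
probs = np.mean([sigmoid(X_unlab @ w) for w in members], axis=0)
pseudo = (probs > 0.5).astype(float)

# Each member learns from the ensemble: continue training on labeled data
# plus the ensemble's pseudo-labels (the secondary optimization signal).
X_all = np.vstack([X_lab, X_unlab])
y_all = np.concatenate([y_lab, pseudo])
members = [train_logreg(X_all, y_all, w=w, steps=100) for w in members]
```

The key design point is that the pseudo-labels come from the ensemble's combined output rather than any single member, so the diversity of the group is what each individual model learns from.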
Additionally, if the ensemble prediction is given by the sample average over member model outputs, and the member errors are independent, then averaging reduces the variance component of the bias-variance decomposition of the ensemble's expected test error by a factor equal to the number of models, reducing overall generalization error. This project will provide important insights into both human learning (that is, education) and machine learning (for approaches such as Mixtures of Experts) with respect to multiple learning agents/models, improving not only overall performance but also the individual performance of participating models.
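The variance-reduction claim can be checked numerically with a small simulation (an illustrative sketch, not from the award: member errors are assumed zero-mean, independent, and identically distributed): averaging the outputs of M such models yields an ensemble error variance of roughly sigma^2 / M.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma2 = 4.0       # variance of each member model's prediction error
M = 8              # ensemble size
trials = 200_000   # independent evaluation points

# Each member's error on each trial, assumed zero-mean and independent.
errors = rng.normal(0.0, np.sqrt(sigma2), size=(trials, M))

# Ensemble prediction error = sample average of member errors.
ens_err = errors.mean(axis=1)

var_member = errors.var()     # estimate of a single member's variance
var_ensemble = ens_err.var()  # estimate of the ensemble's variance (≈ sigma2 / M)
```

Under independence the bias term is unchanged by averaging, so the whole reduction in expected test error comes from this variance term; correlated member errors would shrink the gain, which is why diversity across members matters.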
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.