| Funder | National Science Foundation (US) |
|---|---|
| Recipient Organization | Johns Hopkins University |
| Country | United States |
| Start Date | Jan 15, 2021 |
| End Date | Dec 31, 2023 |
| Duration | 1,080 days |
| Number of Grantees | 4 |
| Roles | Principal Investigator; Co-Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2041221 |
Compositionality, the principle that the meaning of a complex expression is built from the meanings of its parts, is central to human language. This property enables us to produce and comprehend novel expressions we have never encountered before by composing parts we already know. For example, an English speaker who learns the meaning of the sentence 'The blick saw the cat' (say, 'blick' means a black duck) can easily generalize to the novel sentence 'The cat saw the blick' without explicitly being told what it means.
The goal of this project is to take a step towards elucidating the mechanism underlying generalization facilitated by the principle of compositionality. This goal will be pursued through a combination of human and machine learning studies. The project will encourage methodological transfer between human and machine learning research, as well as promote collaboration between scholars in Linguistics and researchers working on language in industrial labs.
This research has potential implications for Artificial Intelligence (AI): computational models with better generalization capacity can help address shortcomings of existing models, such as limited robustness and poor data efficiency.
Contemporary neural models, a family of models based on massively parallel computation inspired by biological neural circuits that has driven much of the recent progress in AI, achieve only partial success in compositional generalization. In particular, prior work has shown that neural models struggle with generalizations that require novel composition of known structures (structural generalization).
An instance of a structural generalization is generalizing a modifier only seen in object position to subject position. For example, if a model can assign the correct meaning to 'the girl saw a cat on the mat', can it also assign correct meaning to 'the girl on the mat saw a cat'? Limited structural generalization in neural models motivates the two main research questions of this project.
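The modifier example above can be made concrete as a train/test split. The sketch below is purely illustrative (the vocabulary, templates, and split are assumptions, not materials from the actual study): a prepositional-phrase modifier appears only on object noun phrases during training, but on subject noun phrases at test time, so every word is familiar and only the structure is novel.

```python
# Hypothetical sketch of a structural-generalization split: the PP modifier
# "on the mat" attaches only to object NPs in training, and only to subject
# NPs at test time. Vocabulary and templates are illustrative assumptions.
import itertools

NOUNS = ["girl", "cat", "dog"]
VERBS = ["saw", "chased"]
PP = "on the mat"

def obj_modified(subj, verb, obj):
    # Training pattern: modifier attaches to the object NP.
    return f"the {subj} {verb} a {obj} {PP}"

def subj_modified(subj, verb, obj):
    # Test pattern: the same modifier attaches to the subject NP.
    return f"the {subj} {PP} {verb} a {obj}"

train = [obj_modified(s, v, o)
         for s, v, o in itertools.product(NOUNS, VERBS, NOUNS) if s != o]
test = [subj_modified(s, v, o)
        for s, v, o in itertools.product(NOUNS, VERBS, NOUNS) if s != o]

# Every word in the test set is seen in training; only the STRUCTURE is novel.
train_vocab = {w for sent in train for w in sent.split()}
test_vocab = {w for sent in test for w in sent.split()}
assert test_vocab <= train_vocab

print(train[0])  # the girl saw a cat on the mat
print(test[0])   # the girl on the mat saw a cat
```

A model that has truly learned compositional structure should map both sentence types to the correct meaning; a model that has memorized surface positions will fail on the test split.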
First, what are the capabilities and limitations of structural generalization in human learners? Second, what can be learned from the inner workings of neural models that achieve partial compositional success, and what revisions to these models would facilitate more human-like generalization? The first question will be explored through an artificial language learning study with human subjects (native English speakers), and the second question, via a computational modeling study with neural networks.
This research will advance our understanding of the conditions that are sufficient for human-like compositional generalization to arise in neural networks. Furthermore, the human experiments will fill an important gap in the literature by testing structural generalization in a controlled experiment.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.