| Funder | National Science Foundation (US) |
|---|---|
| Recipient Organization | Purdue University |
| Country | United States |
| Start Date | Aug 01, 2024 |
| End Date | Jul 31, 2027 |
| Duration | 1,094 days |
| Number of Grantees | 3 |
| Roles | Principal Investigator; Co-Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2417510 |
Researchers currently generate large volumes of data capturing many features of physical plants and bring those data into computational systems to reveal complex biological processes and interactions. However, extracting meaningful information from these data requires advanced technical skills such as algorithm development, programming, and statistics.
The VR-Bio-Talk project will develop a life-like virtual reality (VR) visual analytics system controlled by voice. The user will be immersed in a field of scanned plants and will be able to interact with them verbally. For example, the command "show me all plants older than four weeks and their average leaf area" will display only the matching plants, with their leaf area shown as a label and a graph above them indicating how the value changes over time.
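The behavior behind such a query can be illustrated with a minimal sketch. This is not the project's implementation; the `Plant` record, field names, and units are hypothetical, chosen only to show a filter-then-aggregate query like the one in the example command.

```python
from dataclasses import dataclass

@dataclass
class Plant:
    # Hypothetical minimal record for one scanned plant.
    plant_id: str
    age_weeks: float
    leaf_area_cm2: float

def plants_older_than(plants, min_age_weeks):
    """Return plants strictly older than min_age_weeks."""
    return [p for p in plants if p.age_weeks > min_age_weeks]

def average_leaf_area(plants):
    """Mean leaf area (cm^2) over the selected plants."""
    return sum(p.leaf_area_cm2 for p in plants) / len(plants)

# A toy "field" of three scanned plants.
field = [
    Plant("p1", 3.0, 120.0),
    Plant("p2", 5.0, 240.0),
    Plant("p3", 6.5, 260.0),
]

selected = plants_older_than(field, 4)   # plants to keep visible
label_value = average_leaf_area(selected)  # value for the overlay label
print([p.plant_id for p in selected])  # → ['p2', 'p3']
print(label_value)                     # → 250.0
```

In the envisioned system, the same filter-and-aggregate step would run against the reconstructed 3D scene rather than a list, with the result rendered as an in-world label and time-series graph.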
At its core, this project will develop novel algorithms for artificial intelligence (AI)-based interaction and advanced processing, reconstruction, and visualization of large plant datasets. The overall aim is to bridge the domain gap of current data analytics systems. The anticipated impact will be support for the development of a data-enabled biology workforce capable of advancing the understanding of plant biology and contributing to innovations by deriving insights from data using novel systems and algorithms for interaction with large phenotypic data.
Special attention will be given to including potentially disadvantaged users: through built-in robustness to accents, support for learners with limited English proficiency, and VR data interaction designed to be accessible to users with limited motor skills. The novel AI-based voice-controlled VR interaction and visualization algorithms will have a broad impact that extends beyond the life sciences.
This project has three main aims.

1. Development of novel AI-based algorithms for the reconstruction of the vast, rich, but often underused data from phenotyping facilities into plant digital twins that respond to the environment by providing highly detailed 3D geometry and light interaction. In particular, the project will use the rich data acquired by the University of Arizona field scanner. The plants will be rendered with high visual plausibility and photorealism, and also in more salient false colors as needed for analysis and AI training.
2. Development of a voice-controlled VR user interface that interprets complex compound commands. The interface will be connected to an AI-based voice recognition system and voice-to-text encoder, which will generate code and executable commands. The control will be tuned to respond to a wide variety of accents and commands.
3. Deployment and evaluation of a set of carefully designed experiments with participants ranging from novices to experts, in which users employ intuitive, natural interaction via dialogue with an AI-enabled data analytics system.
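The second aim's transcript-to-command step can be sketched under simplifying assumptions. The project describes an AI-based encoder that turns recognized speech into executable commands; the regex grammar, command dictionary, and field names below are purely illustrative stand-ins for that encoder.

```python
import re

def parse_command(transcript):
    """Toy parser: map a recognized voice transcript to a structured
    command. A real encoder would be AI-based and handle far more
    variation; this hypothetical sketch covers one command shape."""
    m = re.match(
        r"show me all plants older than (\d+) weeks?"
        r"(?: and their average (\w[\w ]*))?",
        transcript.strip().lower(),
    )
    if not m:
        return None  # transcript not understood
    command = {
        "action": "filter",
        "attribute": "age_weeks",
        "op": ">",
        "value": int(m.group(1)),
    }
    if m.group(2):
        # Optional aggregate clause, e.g. "average leaf area".
        command["aggregate"] = {
            "fn": "mean",
            "attribute": m.group(2).strip().replace(" ", "_"),
        }
    return command

cmd = parse_command(
    "Show me all plants older than 4 weeks and their average leaf area"
)
print(cmd)
```

A structured command like this could then be dispatched to the scene's filtering and rendering back end; robustness to accents and phrasing, per the project description, would live in the speech-recognition and encoding layers upstream of this step.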
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.