| Field | Value |
|---|---|
| Funder | National Science Foundation (US) |
| Recipient Organization | Massachusetts Institute of Technology |
| Country | United States |
| Start Date | Oct 01, 2024 |
| End Date | Sep 30, 2026 |
| Duration | 729 days |
| Number of Grantees | 1 |
| Roles | Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2341748 |
One common way to help people understand data is through visualizations. These visual representations of data can support analysis, lead to new insights, and communicate important ideas. However, visualizations are far less useful for the many millions of people who are blind or have low vision (BLV).
Researchers and designers have worked to develop sound- and touch-based versions of visualizations to better serve BLV people, but these efforts tend both to be limited to specific kinds of visualizations and to offer BLV users minimal control. This project investigates methods and systems that BLV users can employ to build richer, customized representations of data that span multiple modes (i.e., text, verbal and non-verbal audio, and tactile) simultaneously.
The goal is to allow users to say what they would like the representation to depict, letting the underlying system create the needed code to produce a working multisensory data representation. Further, these representations will be made interactive so users can rapidly explore multiple slices and representations of the data, increasing the chance they can learn from it.
Through this work, this project will empower BLV people to engage in data analysis in ways as rich and interactive as methods that sighted people enjoy today. To facilitate this impact, project outcomes will be made available as open-source software, and project personnel will lead educational and outreach efforts.
This project aims to develop abstractions akin to those found in visualization grammars such as ggplot2 or Vega-Lite but for multisensory data representations. Through a mix of qualitative methods including contextual inquiry and in-field ethnography, the research team will first study how expert BLV scientists and data analysts work with and communicate about data.
The goal will be to learn their mental models and existing approaches for constructing non-visual data representations, and how they discuss these representations with their sighted colleagues. The researchers will then host a series of co-design workshops with BLV participants to elicit expectations and populate the design space of multisensory data representations.
Results across both threads of work will then inform the design of computational abstractions that will be reified either through a textual language or through an interactive structured editor. Project contributions will be evaluated through summative and comparative user studies conducted with BLV participants.
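To make the analogy to visualization grammars concrete, the sketch below imagines what a Vega-Lite-style declarative encoding might look like when the channels are auditory and textual rather than visual. Every name here (`MultisensorySpec`, the channel vocabulary such as `"pitch"` and `"text"`) is a hypothetical illustration, not the project's actual grammar or API.

```python
# A toy, Vega-Lite-inspired declarative spec for a multisensory data
# representation. Fields of the data are mapped to non-visual channels
# (pitch, spoken text, tactile height) instead of x/y/color.
# All names and the channel vocabulary are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Encoding:
    channel: str        # e.g. "pitch", "pan", "text", "tactile-height"
    field: str          # the data field bound to this channel
    scale: str = "linear"


@dataclass
class MultisensorySpec:
    data: list
    encodings: list


def describe(spec: MultisensorySpec) -> str:
    """Render a plain-text summary of the encodings -- the kind of
    textual modality that would accompany audio or tactile output."""
    parts = [f"{e.field} -> {e.channel} ({e.scale} scale)"
             for e in spec.encodings]
    return "; ".join(parts)


# Example: sonify temperature as pitch, announce the month as text.
spec = MultisensorySpec(
    data=[{"month": "Jan", "temp": 2}, {"month": "Jul", "temp": 25}],
    encodings=[
        Encoding(channel="pitch", field="temp"),
        Encoding(channel="text", field="month", scale="ordinal"),
    ],
)
print(describe(spec))
# -> temp -> pitch (linear scale); month -> text (ordinal scale)
```

The design point the sketch tries to capture is the one visualization grammars make: the user states *what* each field should map to, and a runtime (here the trivial `describe`) decides *how* to realize it, which is also what lets a system regenerate the representation when the user asks for a different slice of the data.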
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.