Active | Standard Grant | National Science Foundation (US)

Collaborative Research: SHF: Small: Toward Automated Software Testing on Augmented Reality Apps

$3.24M USD

Funder: National Science Foundation (US)
Recipient Organization: University of Texas at San Antonio
Country: United States
Start Date: Feb 01, 2025
End Date: Jan 31, 2028
Duration: 1,094 days
Number of Grantees: 1
Roles: Principal Investigator
Data Source: National Science Foundation (US)
Grant ID: 2418093
Grant Description

Augmented Reality (AR) is an emerging technology that overlays digital content onto a user's view of the real world in real time, creating interactive and immersive experiences. AR applications are expanding across various industries, including smart manufacturing, healthcare, navigation, education, and entertainment. Since users may rely on AR applications to directly understand and interact with the physical world, failures and errors in these systems can lead to severe consequences, including safety risks.

For instance, a flawed AR-based navigation application could cause accidents or damage the surrounding physical environment. Such real-world risks underscore the critical need for testing and quality assurance practices in AR application development. Despite the demand for high-quality AR applications, their testing support remains in its early stages.

The challenge of testing AR applications stems from the difficulty of handling real-world inputs and understanding their outputs blended with real-world scenes. Since real-world test environments are costly to build and difficult to control, alternative environments such as videos and Virtual Reality (VR) test scenes are adopted in practice. This project aims to develop innovative techniques to automate the testing of AR applications for higher efficiency and comprehensiveness and investigate the bug-detection effectiveness of VR test scenes.

The project includes plans to engage with students from underrepresented groups in computing and to enrich the software engineering curriculum.

Specifically, this project will develop an infrastructure that allows existing automatic Graphical User Interface (GUI) testing techniques to be applied to AR apps. The infrastructure will (1) automate test scene construction by loading playback videos and configuring them at runtime, (2) automatically identify interactive areas on the screen by excluding non-interactive objects through dynamic filtering, and (3) automate GUI event triggering by inferring the possible interactions of interactive areas through analysis of their event-handling functions.
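The abstract does not give implementation details, but steps (2) and (3) can be illustrated with a rough sketch. All names below (`ScreenObject`, the handler-to-event mapping) are hypothetical, not taken from the project:

```python
from dataclasses import dataclass, field

@dataclass
class ScreenObject:
    """A rendered object on the AR app's screen (hypothetical model)."""
    name: str
    bounds: tuple  # (x, y, width, height) in screen pixels
    handlers: list = field(default_factory=list)  # event-handling function names

def interactive_areas(objects):
    """Step (2): dynamic filtering -- keep only objects that can react to input."""
    return [o for o in objects if o.handlers]

def infer_events(obj):
    """Step (3): infer triggerable GUI events from an object's handler names."""
    mapping = {"on_tap": "tap", "on_drag": "drag", "on_pinch": "pinch"}
    return [mapping[h] for h in obj.handlers if h in mapping]

scene = [
    ScreenObject("menu_button", (0, 0, 80, 40), ["on_tap"]),
    ScreenObject("background_mesh", (0, 0, 1920, 1080)),  # no handlers: filtered out
    ScreenObject("virtual_chair", (400, 300, 200, 250), ["on_tap", "on_drag"]),
]

targets = interactive_areas(scene)
events = {o.name: infer_events(o) for o in targets}
```

A test driver built this way would only synthesize `tap` and `drag` events against the button and the chair, rather than blindly tapping every pixel region on screen.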

The project will also develop techniques to automate test oracles in AR application testing. These techniques will check for inconsistencies between AR rendering and code execution, and will predict the correctness of virtual object placement using models trained on labeled screenshots, video frames, and application logs. Additionally, a study will assess whether VR-based test scenes can accurately simulate real-world scenes and reveal bugs in AR apps.
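The two oracle ideas can be sketched in simplified form. The functions below are illustrative only: the first compares what the code claims to have placed against what appears in the rendered frame, and the second is a stand-in heuristic for the learned placement-correctness model described in the abstract:

```python
def rendering_consistent(placed_ids, rendered_ids):
    """Oracle 1 (sketch): flag inconsistencies between code execution and
    AR rendering. Returns (missing, unexpected): objects the code placed
    but the frame lacks, and objects rendered that the code never placed."""
    placed, rendered = set(placed_ids), set(rendered_ids)
    return sorted(placed - rendered), sorted(rendered - placed)

def placement_looks_correct(obj_box, support_box, tolerance=10):
    """Oracle 2 (stand-in heuristic, not the trained model): a placed object's
    bottom edge should rest within `tolerance` pixels of the top of its
    supporting surface. Boxes are (x, y, width, height), y growing downward."""
    _, y, _, h = obj_box
    _, support_y, _, _ = support_box
    return abs((y + h) - support_y) <= tolerance

# Example: a "label" object was rendered, an "anchor" object silently dropped,
# and a "ghost" object appeared that the code never placed.
missing, unexpected = rendering_consistent(["anchor", "label"], ["label", "ghost"])
```

In the actual project the second check is learned from labeled screenshots, video frames, and logs; a hand-written geometric rule like this would only cover the simplest failure mode.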

Pairs of real-world and VR scenes will be constructed, and test executions of AR apps on them will be compared using metrics such as code coverage, mutation scores, and user-perceived rendering differences. The project will further study the automatic revision of VR test scenes under the guidance of code coverage.
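One simple way to quantify how closely a VR scene reproduces a paired real-world execution is to compare their coverage sets; the Jaccard similarity below is an assumed metric for illustration, not necessarily the one the project will use:

```python
def coverage_similarity(real_covered, vr_covered):
    """Jaccard similarity of the sets of code lines covered when the same
    AR app is exercised in a real-world scene vs. its paired VR scene.
    1.0 means identical coverage; 0.0 means no overlap."""
    real, vr = set(real_covered), set(vr_covered)
    if not real and not vr:
        return 1.0  # both executions covered nothing: trivially identical
    return len(real & vr) / len(real | vr)

# The VR run covers lines {2, 3, 4}; the real run covers {1, 2, 3}.
score = coverage_similarity({1, 2, 3}, {2, 3, 4})
```

A coverage-guided scene-revision loop could then mutate the VR scene (lighting, object layout, camera path) and keep revisions that raise this score toward the real-world baseline.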

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

University of Texas at San Antonio
