
Completed STANDARD GRANT National Science Foundation (US)

I-Corps: Determining occupant load and location through machine vision with on-device image processing

$500K USD

Funder National Science Foundation (US)
Recipient Organization Arizona State University
Country United States
Start Date Feb 01, 2021
End Date Jul 31, 2022
Duration 545 days
Number of Grantees 1
Roles Principal Investigator
Data Source National Science Foundation (US)
Grant ID 2054807
Grant Description

The broader impact/commercial potential of this I-Corps project is the development of smart cameras with on-board processing. The proposed technology would serve as part of building and smart-city management systems. Building heating, ventilation, and air conditioning (HVAC) accounts for 13% of all energy used in the United States and nearly 40% of the energy consumed in buildings.

Accurate occupancy detection may reduce energy use in HVAC systems by as much as 30%. However, existing occupancy-detection approaches raise concerns about both detection accuracy and privacy. The proposed technology may address these concerns by using camera vision to provide precise occupancy information and by performing all analysis on-device, so that no image data is transmitted.

The proposed technology also may be used to improve traffic-light planning. Pedestrians congregating at an intersection can create safety hazards for vehicles and people alike. The proposed technology enables counting people, cars, and bikes and integrating that information.

The count data also supports additional analysis: for example, if pedestrian counts are not changing over time, traffic patterns may need to be modified or first responders alerted.

This I-Corps project is based on the development of embedded devices that run object-detection algorithms. Object detection with deep neural networks (DNNs) requires a large amount of computation, which impedes its implementation on resource- and energy-limited user-end devices. DNNs owe their success to knowledge spanning many domains of observed environments.

At inference time, however, only limited knowledge of the observed environment is required, and that knowledge can be learned by a shallow neural network (SHNN). Temporal Knowledge Distillation (TKD) is the proposed system-level design for reducing the energy consumption of object detection on the user-end device: an SHNN deployed on the device detects objects in the observed environment.

In addition, a knowledge-transfer mechanism updates the SHNN model with the DNN's knowledge whenever the object domain changes. Experiments demonstrate that the user-end device's energy consumption and inference time can be reduced by 78% and 71%, respectively, compared with running the deep model on the user-end device.
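The teacher-to-student transfer described above can be illustrated with a minimal sketch. The `tanh` teacher standing in for the heavy DNN, the linear student standing in for the on-device SHNN, and the gradient-descent update are all illustrative assumptions, not the project's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 3

# Hypothetical stand-in for the heavy DNN ("teacher"): a fixed nonlinear scorer.
W_T = rng.normal(size=(d_in, d_out))
def teacher(x):
    return np.tanh(x @ W_T)

# Hypothetical stand-in for the on-device SHNN ("student"): a single linear
# layer that is cheap both to run and to retrain when the domain changes.
W_S = np.zeros((d_in, d_out))
def student(x):
    return x @ W_S

def distill(frames, lr=0.05, epochs=200):
    """Knowledge transfer: fit the student to the teacher's soft outputs
    on recent frames via gradient descent on the mean-squared error."""
    global W_S
    targets = teacher(frames)
    for _ in range(epochs):
        grad = frames.T @ (student(frames) - targets) / len(frames)
        W_S -= lr * grad
    return float(np.mean((student(frames) - targets) ** 2))

# A "domain change": new frames arrive, so the teacher's knowledge is
# distilled once; afterwards only the cheap student runs on the device.
frames = rng.normal(size=(64, d_in))
init_err = float(np.mean(teacher(frames) ** 2))  # student outputs zeros before transfer
err = distill(frames)
print(err < init_err)  # distillation shrinks the student-teacher gap
```

The real TKD design would trigger the transfer only when a domain change is detected in the video stream; the sketch runs it once to show the mechanism.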

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

Arizona State University
