Active STANDARD GRANT National Science Foundation (US)

CRII: RI: RUI: Representations for multi-timescale scene dynamics in webcam video streams

$1.75M USD

Funder National Science Foundation (US)
Recipient Organization Western Washington University
Country United States
Start Date Jun 01, 2021
End Date May 31, 2026
Duration 1,825 days
Number of Grantees 1
Roles Principal Investigator
Data Source National Science Foundation (US)
Grant ID 2105372
Grant Description

Thousands of webcams positioned all over the world are constantly streaming live HD views of a wide variety of scenes. Thanks to their longevity, these streams capture many visually interesting phenomena that occur not only in real time but also over much longer timescales. Although computer vision algorithms have been developed for video and scene understanding, the scale of these webcam streams, both in data volume and in time, requires new approaches.

This project will develop representations that enable techniques for automatically extracting insights about short- and long-term changes in fixed-view webcam scenes. These techniques can be applied in areas such as security and surveillance, as well as a diverse array of monitoring use cases in climate science, ecology, and development. Undergraduate students at a teaching-oriented public Primarily Undergraduate Institution will gain valuable skills and experience by being directly involved in all aspects of the research.

Existing video understanding and analysis techniques focus almost exclusively on real-time dynamics, such as human actions, while little attention has been paid to longer-term phenomena. The objective of this research is to develop representations for years-long video streams that facilitate analysis and understanding of phenomena at a wide range of time scales.

To serve this purpose, the representations need to be compact, allow for full reconstruction of the input, and organize scene content by timescale. To achieve this, a dataset of video streams will be collected and scene-specific models will be trained to encode video frames into compact latent vectors. A novel regularization scheme will impose order in the latent space, arranging the representation of the scene's content by how quickly it varies.
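The regularization idea above can be sketched in miniature. The snippet below is an illustrative assumption, not the project's actual method: it uses a toy linear encoder and a penalty on frame-to-frame latent change that is weighted more heavily on lower-indexed dimensions, so slowly varying scene content is pushed toward the front of the latent vector.

```python
import numpy as np

# Toy data: T consecutive webcam frames, flattened. All sizes, the linear
# encoder, and the exact loss form are illustrative assumptions.
rng = np.random.default_rng(0)
T, frame_dim, latent_dim = 32, 256, 16

frames = rng.standard_normal((T, frame_dim))
W_enc = rng.standard_normal((frame_dim, latent_dim)) / np.sqrt(frame_dim)
z = frames @ W_enc  # (T, latent_dim) compact latent codes, one per frame

def timescale_penalty(z):
    """Penalize frame-to-frame change per latent dimension, with larger
    weights on lower-indexed dimensions, so long-timescale (slow) content
    ends up at the front of the latent vector when this term is minimized."""
    dz = np.diff(z, axis=0)                # temporal differences, (T-1, D)
    w = np.linspace(1.0, 0.0, z.shape[1])  # slow -> fast dimension ordering
    return float((w * dz**2).mean())

penalty = timescale_penalty(z)
```

In a real system this term would be added to a reconstruction loss and minimized jointly while training the scene-specific encoder, so compactness, reconstructability, and timescale ordering are encouraged at once.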

The research will enable new ways to detect, monitor, and understand phenomena that occur over weeks, months, and years. Existing video understanding applications such as anomaly detection, action recognition, and video prediction will also benefit by operating in a compact and organized self-supervised latent space. Finally, the research will further progress toward understanding how to encode temporal redundancy in videos and yield insights about general video understanding by helping to determine what is possible in the absence of camera motion and with an abundance of observations.
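As one hypothetical illustration of operating in such a latent space, anomaly detection can reduce to flagging frames whose latent codes sit far from the scene's typical codes. The z-score test and threshold below are assumptions for illustration only.

```python
import numpy as np

def latent_anomalies(z, thresh=3.0):
    """Flag frames whose latent code deviates strongly, in any dimension,
    from the per-dimension mean of the sequence (simple z-score test)."""
    mu = z.mean(axis=0)
    sigma = z.std(axis=0) + 1e-8          # avoid division by zero
    scores = np.abs((z - mu) / sigma).max(axis=1)
    return scores > thresh

# Toy sequence of 100 latent codes with one injected outlier frame.
z = np.zeros((100, 8))
z[42] = 10.0
mask = latent_anomalies(z)
```

Because the latent codes are compact, a per-frame check like this stays cheap even over years-long streams.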

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

Western Washington University
