Active | Standard Grant | National Science Foundation (US)

Collaborative Research: PPoSS: LARGE: Cross-layer Coordination and Optimization for Scalable and Sparse Tensor Networks (CROSS)

$30.34M USD

Funder: National Science Foundation (US)
Recipient Organization: North Carolina State University
Country: United States
Start Date: Sep 15, 2023
End Date: Aug 31, 2028
Duration: 1,812 days
Number of Grantees: 2
Roles: Principal Investigator; Co-Principal Investigator
Data Source: National Science Foundation (US)
Grant ID: 2316201
Grant Description

High-dimensional data computation and analytics are gaining importance in various domains, such as quantum chemistry/physics, quantum circuit simulation, social networks, healthcare, and machine/deep learning. Tensors, a representation of high-dimensional data, have become increasingly crucial. While extensive research has focused on tensor methods like decompositions and factorizations for low-dimensional data, there is a notable lack of development in tensor networks that cater to high-dimensional data (over ten dimensions) and can extract physically meaningful latent variables.

The challenges arise from tensor networks' complicated mathematical nature, extremely high computational complexity, and domain-specific difficulties. This project aims to bridge this critical gap by devising efficient tensor networks, especially for sparse data, which are prevalent in many real-world applications. The impacts of the project encompass three aspects: 1) improving data compression, computation, memory usage, and interpretability of tensor networks; 2) fostering enduring and collaborative partnerships among academia, national research labs, and industry with a shared focus on the aforementioned applications; and 3) broadening education avenues by designing relevant new courses, training undergraduate and graduate students, organizing workshops, and enhancing K-12 outreach.

This project delves into Cross-layer cooRdination and Optimization for Scalable and Sparse Tensor Networks (CROSS), designed for heterogeneous systems equipped with diverse accelerators such as Graphics Processing Units (GPUs), Tensor Processing Units (TPUs), and Field Programmable Gate Arrays (FPGAs), and various memories such as dynamic and non-volatile random-access memories. This research aims to study sparsity within widely used tensor networks by incorporating constraints, regularization, dictionaries, and domain knowledge.
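The abstract does not specify how sparsity would be induced, but regularization-based sparsity of the kind mentioned above is commonly realized via an L1 penalty solved with a soft-thresholding (proximal) step. The sketch below is illustrative only; the function and variable names are assumptions, not part of the CROSS project.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm: shrinks entries toward zero,
    zeroing out any entry whose magnitude is below the threshold lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# A dense factor matrix, as might arise in a tensor decomposition.
rng = np.random.default_rng(0)
factor = rng.normal(size=(8, 4))

# Applying the threshold yields a sparser, more interpretable factor.
sparse_factor = soft_threshold(factor, 1.0)
print(np.count_nonzero(factor), np.count_nonzero(sparse_factor))
```

In an iterative solver this step would alternate with a gradient update on the reconstruction loss (proximal gradient descent); here it is shown in isolation.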

In addition to sparsity itself, sparse tensor networks face problems such as high dimensionality, exacerbated data randomness, and irregular program and memory-access behaviors. This research tackles these challenges from four dimensions: (1) memory heterogeneity-aware representations and data (re-)arrangement, (2) balanced sparse tensor contraction algorithms with smart page arrangement, (3) memoization and intelligent allocation to reduce computational cost, and (4) specialized accelerator architectures for sparse tensor networks.
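The sparse tensor contraction mentioned in dimension (2) can be illustrated in miniature. The sketch below contracts two sparse operands stored in coordinate (COO) form over a shared index; the irregular, hash-based accumulation it uses is exactly the kind of data-dependent access pattern the project targets. This is a minimal assumed example, not the project's algorithm, and real kernels generalize it to many modes and load-balanced partitions.

```python
from collections import defaultdict

# Nonzeros of two sparse operands in COO form: {(row, col): value}.
A = {(0, 1): 2.0, (1, 0): 3.0}
B = {(1, 2): 4.0, (0, 2): 5.0}

def contract(a, b):
    """Contract A[i, k] with B[k, j] over the shared index k,
    touching only nonzero entries."""
    # Group B's nonzeros by the contracted index for fast lookup.
    by_k = defaultdict(list)
    for (k, j), v in b.items():
        by_k[k].append((j, v))
    out = defaultdict(float)
    for (i, k), v in a.items():
        for j, w in by_k.get(k, []):
            out[(i, j)] += v * w   # irregular, data-dependent accumulation
    return dict(out)

print(contract(A, B))
```

Because which entries collide in `out` depends entirely on the data, work per output element is unpredictable; balancing it across threads or pages is the challenge dimension (2) refers to.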

The optimized sparse tensor networks represent a synergistic effort combining expertise from high-performance computing, algorithms, compilers, computer architecture and performance modeling. The proposed solutions are evaluated under diverse application scenarios and across a wide range of hardware environments to demonstrate their effectiveness and applicability in real-world settings.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

North Carolina State University
