| Field | Value |
|---|---|
| Funder | National Science Foundation (US) |
| Recipient Organization | University of Utah |
| Country | United States |
| Start Date | Jul 01, 2022 |
| End Date | Jun 30, 2027 |
| Duration | 1,825 days |
| Number of Grantees | 5 |
| Roles | Co-Principal Investigator; Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2217154 |
Computations on tensors are fundamental to many large-scale parallel software applications in scientific computing and machine learning, and their efficient implementation has been crucial for the significant advances they have enabled. However, with the end of Moore’s Law, two critical challenges now threaten continued progress: (1) with transistors becoming a bounded resource, hardware customization is critical to sustaining improved performance and energy efficiency, requiring advances in algorithm-architecture co-design methodology; (2) increasing customization and heterogeneity of hardware architectures aggravates the already daunting challenges of application-developer productivity and performance-portability of software.
This project brings together researchers with expertise spanning the algorithm/software/hardware stack to address these challenges. The project's impacts include (1) improved performance and energy efficiency of hardware architectures through algorithm-architecture co-design; (2) increased developer productivity and improved performance of software applications across a variety of target platforms, enhancing the benefits of computing technology in science and industry; (3) advances in scalable machine-learning and scientific computing applications.
The project makes contributions along multiple directions:

1. **Compiler optimization:** a powerful unified methodology for automated optimization of dense tensor computations, based on non-linear cost models for multi-level hyper-rectangular tiled execution on a range of target computing platforms.
2. **Scalability with sparsity:** a multi-level blocking methodology to enhance the scalability of sparse-tensor computations, based on analysis of the intrinsic sparsity patterns of the data and the corresponding data-reuse patterns.
3. **Algorithm-architecture co-design:** leveraging the new cost models to develop powerful and general approaches for hardware-software co-design of accelerators for dense- and sparse-tensor computations.
4. **Correctness and accuracy:** techniques to ensure correctness and floating-point accuracy under compiler transformations and compiler/hardware design-space exploration.
5. **Applications:** use of the developed methodology and tools to advance cutting-edge applications in machine learning and scientific computing, including PDE solvers, quantum many-body simulation, tensor networks in machine learning, and large-scale image analysis.
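To make the tiling idea concrete, the sketch below shows a single level of rectangular loop tiling applied to a dense matrix multiplication, the simplest instance of the hyper-rectangular tiled execution the compiler-optimization direction refers to. This is an illustrative sketch only: the tile sizes and function names are arbitrary choices for exposition, not artifacts of the project.

```python
def matmul_naive(A, B, n):
    """Straightforward triple-loop matrix multiply, for reference."""
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, n, TI=4, TJ=4, TK=4):
    """Single-level rectangular tiling of the same computation.

    Each (ii, jj, kk) triple selects one TI x TJ x TK tile of the
    iteration space; iterating within a tile reuses the touched blocks
    of A, B, and C while they are still in fast memory. Tile sizes are
    illustrative; choosing them well is exactly what cost models guide.
    """
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, TI):
        for jj in range(0, n, TJ):
            for kk in range(0, n, TK):
                # min(...) handles partial tiles at the boundaries.
                for i in range(ii, min(ii + TI, n)):
                    for j in range(jj, min(jj + TJ, n)):
                        for k in range(kk, min(kk + TK, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C
```

Both functions compute the same result; only the traversal order of the iteration space differs, which is what makes tiling a pure performance transformation that an automated optimizer can apply and tune.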
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.