
Status Active
Grant Type Standard Grant

CRII: SHF: Advancing Sustainable Software Engineering Practices with Energy-Efficient Large Language Models for Code

$1.75M USD

Funder National Science Foundation (US)
Recipient Organization College of William and Mary
Country United States
Start Date Jun 01, 2025
End Date May 31, 2027
Duration 729 days
Number of Grantees 1
Roles Principal Investigator
Data Source National Science Foundation (US)
Grant ID 2451058
Grant Description

In recent years, software engineering has undergone a significant transformation with the integration of artificial intelligence into software development workflows. As part of this evolution, large language models have proven to be powerful assets, enabling the automation of various software engineering tasks. Collectively known as Large Language Models for Code (LLMc), these models have been effectively used to assist developers in bug fixing, code generation, software documentation, software testing, and code review, among other practices.

The success of LLMc is largely attributed to advancements in computational hardware and the growing availability of large-scale training datasets. However, the increasing reliance on LLMc has also brought to light serious concerns regarding sustainability and environmental impact. Training and deploying LLMc demand extensive computational resources, resulting in substantial energy consumption, high costs, and significant carbon emissions, posing challenges to their long-term sustainability.

To address these challenges, this project aims to lay the groundwork for developing sustainable and cost-effective artificial intelligence methods in software engineering automation by enhancing the efficiency of LLMc. The project will integrate its research findings into computer science academic courses, which will help equip future software engineers with the knowledge and tools necessary for sustainable adoption of LLMc in software engineering practices.

The proposal focuses on two key strategies: (i) optimizing training data by filtering out low-quality instances using software engineering task-specific metrics, thus reducing computational costs while preserving learning capabilities, and (ii) applying model compression techniques, particularly quantization, to significantly decrease model size and resource consumption without compromising performance. Preliminary research has shown the effectiveness of these methods in improving efficiency for code-related tasks such as code generation and summarization.
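To make the second strategy concrete, the sketch below illustrates one common form of model compression: post-training symmetric int8 quantization of a weight tensor. This is a generic, minimal example of the technique the abstract names, not the project's actual method; the function names and the per-tensor scaling scheme are illustrative assumptions.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map float32 weights to int8.

    A single scale factor maps the largest-magnitude weight to 127,
    so each stored value shrinks from 4 bytes to 1 byte.
    """
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# Illustrative weight matrix standing in for one layer of a model.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes / w.nbytes)                 # 4x smaller in memory
print(float(np.abs(w - w_hat).max()))      # worst-case rounding error, bounded by scale
```

The memory footprint drops by a factor of four while the per-weight reconstruction error stays within half a quantization step, which is why quantization can reduce model size and inference cost with little loss in task performance.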

Building on these insights, this project will expand such optimizations to a wider range of software engineering automation tasks, ensuring their applicability across various scenarios. By establishing a structured methodology to improve LLMc efficiency, this research will offer practical implementation strategies, technical recommendations, and a comprehensive assessment of sustainability-focused optimizations for artificial intelligence-driven software engineering tools.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

College of William and Mary
