| Funder | National Science Foundation (US) |
|---|---|
| Recipient Organization | Tufts University |
| Country | United States |
| Start Date | Jun 01, 2025 |
| End Date | May 31, 2030 |
| Duration | 1,825 days |
| Number of Grantees | 1 |
| Roles | Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2440353 |
Ensuring software is secure is a fundamental challenge in today's technology-driven world. To improve software security, development best practices recommend that developers begin with security in mind by following a structured process called "threat modeling". Threat modeling is a structured brainstorming process in which developers review a system's parts, asking what could go wrong (identifying threats) and how to fix it (identifying mitigations).
There are many recommended threat modeling processes, but it is not clear which are best. Developing guidelines and support for this essential process requires understanding the relevant human decision-making and collaborative problem solving. There have been some efforts to study threat modeling practice, but these have either been very expensive or have relied on design decisions that reduce study costs while potentially limiting the reliability of the results.
This project includes experiments that compare experimental design approaches and assess their effects on study outcomes. This approach will help future researchers design reliable threat modeling experiments while minimizing study cost. The resulting best practices will be shared with threat modeling researchers, incorporated into professional education, and integrated into courses on security and software systems engineering.
This project will increase the reliability of threat modeling research by empirically evaluating best practices and tradeoffs in experiment design. Researchers are undertaking qualitative studies and controlled experiments in four areas: (1) investigations of current threat modeling practices in real-world settings, (2) experiments assessing the impact of task design, such as the level of system specification detail, (3) experiments assessing the effect of the study environment, including participants' security expertise, and (4) comparisons of the measures used to assess threat modeling performance.
The results are being combined into research guidelines for human-centric threat modeling, which can be used as a reference for future researchers to help them develop more reliable results to improve threat modeling practice and software security generally.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.