Completed Research Grant

DIADEM: debugging made dependable and measurable

£3.25M

Funder Engineering and Physical Sciences Research Council
Recipient Organization King's College London
Country United Kingdom
Start Date May 15, 2022
End Date Apr 15, 2025
Duration 1,066 days
Number of Grantees 1
Roles Principal Investigator
Data Source UKRI Gateway to Research
Grant ID EP/W012308/1
Grant Description

Software quality is increasingly critical to most of humanity. Bugs in software claim a huge annual toll, financially and even in human life. To eliminate bugs, developers depend crucially on their tools.

Tools for interactive debugging are vital. They alone provide a high-level (source-level) view of a running (binary) program, enabling programmers to 'see the bug' as it occurs in the program running in front of them. However, debugging infrastructure is notoriously unreliable: it works only if various metadata is complete and correct. If not, the programmer sees a partial or incorrect view, which may be useless or actively misleading.

These problems occur often in popular languages (e.g. C, C++, Rust, Go), owing to a tension between debuggability and optimisation. Debugging in these languages relies on compiler-generated /metadata/ describing how binary (executable) code relates to source (human-written) code.

Metadata generation is 'best-effort', and optimisation frequently introduces flaws -- but simply disabling optimisations is seldom an option. Programmers rely on optimisations to relieve them of much hand-tuning. Without them, code may run tens of times slower.
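To make the mechanism concrete, here is a minimal sketch (not part of the grant text, and far simpler than real DWARF line tables): debug metadata includes a mapping from binary addresses to source lines, and when an optimisation folds away the instructions for a line, the entry for that line can silently disappear, so the debugger can no longer stop there. All names and values below are illustrative.

```python
# Toy model of a DWARF-style line table: a map from binary instruction
# addresses to the source lines they were compiled from. Real tables
# live in sections such as .debug_line; this is only a sketch.
unoptimised = {0x10: 1, 0x14: 2, 0x18: 3, 0x1C: 4}

# After optimisation, suppose the instructions for source line 3 were
# folded away, so the compiler emits no entry for it: a breakpoint on
# line 3 has nowhere to land.
optimised = {0x10: 1, 0x14: 2, 0x18: 4}

def mapped_lines(line_table):
    """Source lines a debugger can still reach via this table."""
    return set(line_table.values())

print(sorted(mapped_lines(unoptimised)))  # -> [1, 2, 3, 4]
print(sorted(mapped_lines(optimised)))    # -> [1, 2, 4]: line 3 is gone
```

The loss is silent: nothing in the optimised table records that line 3 ever existed, which is exactly why flaws of this kind go unnoticed without systematic checking.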

Furthermore, some bugs appear only in optimised code, owing to undefined behaviour (underspecification) in the source language.

This problem is extremely challenging. At its heart, writing compiler optimisations that preserve debuggability demands extra effort on what are already intricate code transformations ('passes'). In practice, corners are cut, leaving the output metadata approximate.

To be acceptable to pass authors, improvements must reshape the effort/reward curve without increasing the task's baseline complexity. Unlike performance, debugging so far lacks quantitative benchmarks, so compiler authors have not prioritised or competed on debuggability.

Existing techniques amount to interaction-based testing of debugging, often with features for narrowing down which passes introduced a flaw. This is haphazard, since exploring all metadata includes the already-hard problem of achieving full-coverage tests (to 'drive' the debugger over all program locations). We propose instead to analyse metadata as an artefact in its own right.

This means instead of tests that interact with a single concrete execution through a debugger, we must devise a custom systematic, symbolic method for exploring the compiled code, evaluating the correctness of metadata in a mathematical manner. Unlike haphazard testing, this promises systematic measurement of lost coverage and correctness; the latter can (we hypothesise) be automated using recent advances in formal specification of source languages, namely /executable semantics/, as a replacement for the current manual practices.
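One way to picture "systematic measurement of lost coverage" is as a direct computation over the metadata itself, with no debugger interaction. The sketch below is a hypothetical metric in the spirit of the proposal, not the project's actual design: given the set of statement-bearing source lines and a toy line table, it reports the fraction of lines the table fails to map.

```python
def lost_coverage(source_lines, line_table):
    """Fraction of statement-bearing source lines that no binary
    address maps to. A hypothetical metric, illustrative only."""
    mapped = set(line_table.values())
    missing = source_lines - mapped
    return len(missing) / len(source_lines)

# Toy data: four source lines, but the optimised line table only
# records mappings for three of them.
source_lines = {1, 2, 3, 4}
optimised_table = {0x10: 1, 0x14: 2, 0x18: 4}

print(lost_coverage(source_lines, optimised_table))  # -> 0.25
```

Because this is computed over the artefact rather than by stepping a debugger through one concrete run, it covers every program location at once, which is the contrast the proposal draws with interaction-based testing.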

This idea of parallel source- and binary-level exploration also suggests a radical approach: post-hoc synthesis of metadata, relieving the compiler of generating it at all. The idea here builds on successful work on neighbouring problems (translation validation and decompilation).

The project will proceed by practical methods, experimenting on a real production compiler (LLVM). It will build novel tools embeddable into existing compiler-testing workflows, both to diagnose compiler bugs and to quantify the improvement from fixing them. It will empirically explore abstractions and helpers used internally in compilers, to devise designs making them measurably more debug-preserving.

Finally, it will build a novel tool exploring the radical idea of synthesising high-quality metadata in post-hoc fashion, outside the compiler. It will develop metrics allowing quantitative comparison against traditional approaches. The benefits accrue at many levels: to compiler authors, to software developers at large, and to the general public who use or depend on the affected software.

All Grantees

King's College London
