
Active Studentship

Automated Detection of Spinal Disorders Using MRI Scans and Radiology Reports


Funder: Engineering and Physical Sciences Research Council
Recipient Organization: University of Oxford
Country: United Kingdom
Start Date: Sep 30, 2022
End Date: Sep 29, 2026
Duration: 1,460 days
Number of Grantees: 2
Roles: Student; Supervisor
Data Source: UKRI Gateway to Research
Grant ID: 2721784
Grant Description

Accurate and timely evaluation of magnetic resonance imaging (MRI) scans of the spine is necessary to detect cancerous spinal tumours and monitor progressive degenerative conditions. Back pain affects 80% of people in the UK over their lifetime, yet its causes are not fully understood. Delayed diagnosis or treatment of spinal conditions has been associated with worse outcomes: pain and disability for the patient, and more expensive, more complicated treatments at later stages of disease.

Advances in computer vision and medical imaging have led to tools that can produce automated vertebral segmentations and radiological gradings. However, these systems are not intended as diagnostic tools and presently generate only qualitative evaluations of disease progression. There is currently no tool in clinical practice that can replicate the function of a radiologist for any spinal disorder.

This project falls within the EPSRC healthcare technologies research area. We aim to develop a system that works more like a radiologist by incorporating both vision and text data. Radiology reports are rich with details missing from the tabular annotations traditionally used in medical imaging tasks; using them will reduce the need for manual annotation of the MRIs and preserve the complicated, overlapping symptomatologies often present in clinical practice.

We will explore using the reports to create normalised structured labels, as well as training bi-modal vision-language models (VLMs) on a newly compiled dataset from the National Consortium of Intelligent Medical Imaging (NCIMI), which includes around 5,000 spinal MRI scans paired with their accompanying radiology reports.

Specifically, the aims of the research are:

1. Structured supervision from reports: We will use new, multi-centre data from NCIMI to detect spinal cancer. The data contain many paired image-report samples from patients with cancer.

We will use the reports to create structured labels similar to those generated by manual annotation and use descriptions of primary or secondary incidence of cancer from the reports to identify corresponding sites in the images. We will set aside data from a single centre to validate on unseen data and demonstrate usefulness in real clinical settings.
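As an illustrative sketch only, converting free-text reports into structured labels of the kind produced by manual annotation could start from simple keyword rules with negation handling. The finding names, patterns, and negation cues below are hypothetical placeholders, not the project's actual extraction pipeline:

```python
import re

# Hypothetical label schema: these finding keywords and the crude
# negation rule are illustrative, not the project's real pipeline.
FINDINGS = {
    "metastasis": re.compile(r"\bmetasta(?:sis|ses|tic)\b", re.I),
    "compression_fracture": re.compile(r"\bcompression fracture\b", re.I),
    "cord_compression": re.compile(r"\bcord compression\b", re.I),
}
# Negation cue ("no", "without", "negative for") shortly before a mention.
NEGATION = re.compile(r"\b(no|without|negative for)\b[^.]{0,40}$", re.I)

def extract_labels(report: str) -> dict:
    """Map a free-text report to binary structured labels.

    A finding counts as positive only if some mention of it is not
    preceded by a simple negation cue within the same sentence.
    """
    labels = {name: 0 for name in FINDINGS}
    for sentence in re.split(r"(?<=[.;])\s+", report):
        for name, pattern in FINDINGS.items():
            for match in pattern.finditer(sentence):
                prefix = sentence[:match.start()]
                if not NEGATION.search(prefix):
                    labels[name] = 1
    return labels

report = ("Metastatic deposit at T7 with mild cord compression. "
          "No compression fracture.")
print(extract_labels(report))
```

A real system would need far richer negation and uncertainty handling (and likely learned extraction), but the sketch shows how report sentences map onto the tabular label space that manual annotation would otherwise provide.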

2. Visual-language modelling: We will adapt vision-language models to use the free-text reports directly, without restructuring them to tabular form. Using the reports as direct inputs to the model should result in better performance in the classification of spinal conditions than using images alone.

Details from the reports may also help us pinpoint the specific location of abnormalities in the images without the need for manual annotation. Incorporating text inputs would ideally allow for an interactive system that answers user questions with natural-language responses when prompted.
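One simple way a bi-modal model can combine the two inputs is late fusion: embed the image and the report separately, concatenate the embeddings, and classify with a shared head. The sketch below uses toy dimensions and random weights purely to illustrate the data flow; it stands in for the learned encoders and is not the project's actual architecture:

```python
import math
import random

random.seed(0)

def linear(vec, weights, bias):
    """Dense layer: `weights` holds one row of coefficients per output unit."""
    return [sum(w * x for w, x in zip(row, vec)) + b
            for row, b in zip(weights, bias)]

def softmax(logits):
    """Normalise logits into class probabilities."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fused_predict(img_emb, txt_emb, weights, bias):
    """Late fusion: concatenate the image and text embeddings,
    then classify with a single linear head."""
    return softmax(linear(img_emb + txt_emb, weights, bias))

# Toy setup: 4-d image embedding, 4-d text embedding, 3 classes.
img_emb = [random.gauss(0, 1) for _ in range(4)]
txt_emb = [random.gauss(0, 1) for _ in range(4)]
weights = [[random.gauss(0, 0.1) for _ in range(8)] for _ in range(3)]
bias = [0.0, 0.0, 0.0]

probs = fused_predict(img_emb, txt_emb, weights, bias)
print(probs)
```

In practice the embeddings would come from trained image and text encoders, and richer fusion (e.g., cross-attention) could replace concatenation; the point here is only that the text pathway adds information the image-only classifier lacks.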

3. Quantitative measurements: We will develop more quantitative biomarkers for spinal conditions. The grading systems currently used by radiologists and incorporated into SpineNet are qualitative and subject to large inter-rater variability. Deriving objective metrics of disease progression from accurately predicted segmentations can define more precise phenotypes of disease stages, which would be useful in supporting clinical studies and drug trials.
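To make the idea of segmentation-derived biomarkers concrete, here is a minimal sketch: given a binary segmentation mask of an intervertebral disc and the scan's pixel spacing, one can measure disc height and width in millimetres and form a height-to-width ratio. The mask, spacings, and the specific ratio are made-up illustrations, not the project's chosen biomarkers:

```python
# Toy binary disc mask (rows = cranio-caudal axis, cols = antero-posterior
# axis) and assumed pixel spacings; all values are illustrative only.
mask = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
row_spacing_mm = 1.2   # assumed spacing along the height axis
col_spacing_mm = 1.0   # assumed spacing along the width axis

def disc_height_mm(mask, row_spacing):
    """Height = count of rows containing any disc pixel x row spacing."""
    occupied_rows = [any(row) for row in mask]
    return sum(occupied_rows) * row_spacing

def disc_width_mm(mask, col_spacing):
    """Width = count of columns containing any disc pixel x col spacing."""
    n_cols = len(mask[0])
    occupied_cols = [any(row[c] for row in mask) for c in range(n_cols)]
    return sum(occupied_cols) * col_spacing

height = disc_height_mm(mask, row_spacing_mm)
width = disc_width_mm(mask, col_spacing_mm)
print(round(height, 2), round(width, 2), round(height / width, 3))
```

Unlike an ordinal grade, such measurements are continuous and reproducible given the same segmentation, which is what makes them candidates for tracking progression across clinical studies.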

The proposed studies in combination will form a system that takes both image and text inputs to detect abnormalities in the spine and generates narrative responses to questions posed by the user, with quantitative biomarkers that can be linked to disease phenotypes. Successful development and deployment of such a tool would enable health practitioners to deliver care more efficiently, reduce costs and lead to better patient outcomes.

All Grantees

University of Oxford
