
Completed Research Grant (UKRI Gateway to Research)

The 'risk of risk': remodelling artificial intelligence algorithms for predicting child abuse.

£2M

Funder Economic and Social Research Council
Recipient Organization Aston University
Country United Kingdom
Start Date Apr 30, 2022
End Date Apr 29, 2024
Duration 730 days
Number of Grantees 3
Roles Co-Investigator; Principal Investigator
Data Source UKRI Gateway to Research
Grant ID ES/R00983X/2
Grant Description

Child protection in the UK relies heavily on risk prediction, an area of growing interest since the late 1980s (Browne & Saqi 1988, Creighton 1992). It is generally taken as an axiom that child abuse can and should be detected via risk prediction, identifying vulnerable and risky families whose children may become abused or neglected. The purpose of identifying such families at an early stage is to target early intervention towards them and so reduce the risk of abuse.

To service this need, individual local authorities commission algorithmic risk prediction systems from profit-making providers. The question this proposed project addresses is whether such systems are 'fit for purpose', given the concerning longitudinal data showing poor accuracy in child protection outcomes and an unacceptably high number of false positives and false negatives in risk prediction. This concern was recently highlighted by the President of the Family Division (Munby 2016).
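The base-rate effect behind that concern can be illustrated with a short sketch. This is not the project's model, and all of the numbers below are hypothetical, chosen only to show why screening for a low-prevalence outcome yields many false positives even when the screen itself looks accurate.

```python
# Illustrative sketch: Bayes' rule applied to a screening tool.
# All figures are invented for illustration, not drawn from the grant.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(outcome | flagged): share of flagged cases that are true positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# An apparently strong screen (90% sensitivity, 90% specificity)
# applied to a hypothetical 2%-prevalence population:
ppv = positive_predictive_value(prevalence=0.02, sensitivity=0.9, specificity=0.9)
print(f"{ppv:.1%}")  # roughly 15.5% — most flagged families are false positives
```

The point is structural rather than numerical: at low prevalence, even small false-positive rates swamp the true positives, which is the pattern the longitudinal data cited above reflects.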

This proposed project addresses the issue by exploring a new method of predicting risk more realistically, one that supports child protection systems rather than leaving them to work with potentially inaccurate data. It sets out a new and transformative means of collating, assessing and extracting consistent information from previous studies, and of testing that information in a consistent and reliable way.

The potential exists for scoping a new system which moves algorithmic risk prediction into new territory; existing systems do not 'learn' from these errors, so the technology stalls at the stage of algorithmic prediction rather than developing into evidence-based, reliable and responsive artificial intelligence (AI).

The key research questions/objectives are:

- What are normalised confidence limits in existing risk prediction studies in child protection?
- To develop a new method of calculating risk, and a design for its application in child protection.
- To assess the possibility of designing a new, GDPR-compliant AI model of risk prediction suitable for use in pre- and post-proceedings child protection work.

This study's methodology is transformative, bringing together a mix of traditional and pioneering methods. Each stage of the methodology has been assessed for the level of potential transformation in its approach and/or its outcome. The team will start the proposed project by creating the first comprehensive, re-usable database of previous relevant studies.

The new and creative methods employed in the rest of the study are higher risk but, if successful, will yield a correspondingly high reward. Having created the database of studies, the team will analyse their characteristics, size, scope and methods to apply a consistent means of calculating their power ratio, creating a comparative analysis including strengths, weaknesses and confidence limits.
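One standard way to derive a pooled estimate with confidence limits from a database of studies is fixed-effect, inverse-variance weighting. The sketch below assumes that is the approach taken; the grant does not specify its pooling method, and the study figures are invented placeholders.

```python
import math

# Hypothetical sketch of pooling effect estimates from a study database:
# inverse-variance (fixed-effect) weighted mean with 95% confidence limits.

def pooled_estimate(estimates, std_errors):
    """Inverse-variance weighted mean and a 95% confidence interval."""
    weights = [1.0 / se**2 for se in std_errors]        # precision weights
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / total_w
    pooled_se = math.sqrt(1.0 / total_w)
    half_width = 1.96 * pooled_se                        # normal approximation
    return pooled, (pooled - half_width, pooled + half_width)

# Three invented study effect sizes (e.g. log odds ratios) with standard errors:
est, (lo, hi) = pooled_estimate([0.8, 1.1, 0.6], [0.3, 0.4, 0.2])
```

Precision weighting gives the most informative (lowest-variance) studies the most influence, which is one consistent way of comparing studies of very different size and scope.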

These results will be analysed using Bayesian statistics in the context of Eggleston's work in respect of the use of probability in fact finding processes (Eggleston 1983). Bayesian networks provide a novel means of establishing criteria for weighting of evidence for social and technical problems including reasoning (using the Bayesian inference algorithm), learning (using the expectation-maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks).

Probabilistic algorithms can also be used for filtering, prediction, finding explanations for data streams, and helping systems to analyse processes over time. Applying these techniques in this context, we will provide a consistent measure of confidence across risk factors and a measure of their evidential probity. This core transformative element of our methods will enable the scoping of a risk prediction system that takes account of strengths and weaknesses, including identifying gaps, and that provides courts with a reliable legal indicator of appropriate weighting as a project outcome.
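The idea of Bayesian weighting of evidence across risk factors can be sketched in a few lines. This is a simplification, not the project's model: each risk factor is summarised by a likelihood ratio and the factors are assumed conditionally independent (the naive-Bayes assumption, which a full Bayesian network relaxes). All numbers are invented.

```python
# Hedged sketch: sequential Bayesian updating with likelihood ratios.
# Posterior odds = prior odds x product of the factors' likelihood ratios.
# Assumes conditional independence of factors; figures are illustrative only.

def posterior_probability(prior, likelihood_ratios):
    """Update a prior probability with a set of likelihood ratios."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr                      # LR > 1 raises risk, LR < 1 lowers it
    return odds / (1 + odds)

# Hypothetical 2% prior risk; two factors pointing towards risk, one against:
p = posterior_probability(0.02, [3.0, 2.5, 0.5])
```

Framing each factor as an explicit likelihood ratio is one way to make its evidential weight inspectable, which is the kind of transparent weighting a court-facing indicator would require.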

All Grantees

University of the West of England; Lancaster University; Aston University
