| Funder | Forte |
|---|---|
| Recipient Organization | Lund University |
| Country | Sweden |
| Start Date | Jan 01, 2025 |
| End Date | Dec 31, 2027 |
| Duration | 1,094 days |
| Number of Grantees | 3 |
| Roles | Co-Investigator; Principal Investigator |
| Data Source | Swedish Research Council |
| Grant ID | 2024-00831_Forte |
**Research problem and specific questions**

Artificial intelligence (AI) is revolutionizing healthcare by automating tasks that previously required expert knowledge. The benefits are numerous, but so are the risks. One important risk is a new form of discrimination: algorithmic discrimination. This project explores how data science methods known as algorithmic fairness measures can help prevent algorithmic discrimination and promote greater equity in healthcare.

**Data and Method**

Through legal and philosophical collaboration, we will clarify what discrimination law requires of AI-based decision support systems used in healthcare and identify the methods best suited to preventing algorithmic discrimination.
The project also includes an empirical investigation (policy analysis, interviews, and observation studies) into how AI developers and healthcare personnel working with AI-based triage and mammography manage fairness and discrimination issues. The project's interdisciplinary approach makes it innovative and unprecedented: previous research has primarily been conducted by data scientists and lacks thorough philosophical and legal analysis, which is why it remains unclear to what extent existing fairness methods address ethically unacceptable and legally prohibited discrimination.

**Societal relevance and utilization**

The study is timely given the rapid pace of technological advancement and the forthcoming EU regulation of AI (the AI Act).
Enhanced understanding of legal requirements and practical strategies to prevent algorithmic discrimination is imperative for ensuring the effectiveness of non-discrimination norms.
To ensure that our results benefit the professional groups developing and utilizing automated decision support, we have connected key representatives from Region Halland and AI-supported healthcare initiatives to the project.

**Plan for project realisation**

The research group includes a legal scholar (Nilsson, 45%), a philosopher (Jönsson, 30%), a socio-legal scholar (Larsson, 15%), and a postdoctoral researcher (100% for one year).
Our joint expertise in discrimination law, AI, and socio-legal studies makes us uniquely placed to address algorithmic discrimination in healthcare.
Several of Sweden's internationally renowned researchers and practitioners in medicine, data science, philosophy, and medical ethics are included in the project as an expert panel, ensuring the highest scientific standards and the utilization of the project's results.