
Active Studentship

Development of an Adaptive Augmented Reality Interface for Medical Robotic Systems


Funder Engineering and Physical Sciences Research Council
Recipient Organization University of Sussex
Country United Kingdom
Start Date Sep 30, 2024
End Date Mar 30, 2028
Duration 1,277 days
Number of Grantees 1
Roles Supervisor
Data Source UKRI Gateway to Research
Grant ID 2923198
Grant Description

The ageing of society over the last few decades has increased the demand for medical services while the active population has shrunk, straining the economic sustainability of the current level of welfare. This phenomenon affects the quality and accessibility of healthcare as a whole, not only geriatric care, because a smaller active population also implies a smaller pool of medical operators and, consequently, a reduced capacity to deliver care.

This issue is already being felt in the UK's NHS (as in other countries), where staff are overworked and service accessibility is generally reduced. Robot technologies could improve the financial sustainability of healthcare systems by providing remote and autonomous services. In addition, these automated systems could facilitate the delivery of medical services in remote areas, increasing their quality and reducing the incidence of service-deprivation pockets.

Furthermore, they could assist in providing specialised emergency care in the field.

Recent advances in teleoperation have shown that it is possible to perform rehabilitation, conduct ultrasound scans, and use a scalpel with minimal adjustment to the controller. However, the human-robot interface still requires advancements to improve user awareness (i.e., visual and haptic perception) and robot programming.

This project aims to develop the interfaces to enable off-the-shelf robotic arms to perform multiple medical tasks (surgery, diagnostics, physical therapy, etc.); crucially, we aim to enable such flexibility without relying on a field engineer to reprogram the robot every time it switches to a different task. This is essential to enable the robot to be deployed in a geo-distributed network of semi-autonomous medical centres or in medical transports for remote diagnostics.
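One way to picture the "no field engineer" requirement is a task-profile registry: each medical task (surgery, diagnostics, physical therapy) ships its own setup routine, so switching tasks becomes a runtime lookup rather than on-site reprogramming. The sketch below is purely illustrative; the registry, task names, and functions are hypothetical assumptions, not details from the proposal.

```python
from typing import Callable, Dict

# Hypothetical registry mapping a task name to its setup routine.
TASK_REGISTRY: Dict[str, Callable[[], str]] = {}

def register_task(name: str):
    """Decorator that records a task's setup routine under a name."""
    def wrap(fn: Callable[[], str]) -> Callable[[], str]:
        TASK_REGISTRY[name] = fn
        return fn
    return wrap

@register_task("ultrasound")
def setup_ultrasound() -> str:
    # In a real system this would load the probe's tool profile,
    # controller gains, and AR overlay configuration.
    return "loaded ultrasound probe profile"

def switch_task(name: str) -> str:
    """Switch the robot to a registered task; fail loudly if none exists."""
    if name not in TASK_REGISTRY:
        raise KeyError(f"no profile registered for task '{name}'")
    return TASK_REGISTRY[name]()
```

With this pattern, adding a new medical task means registering one more profile, which matches the goal of deploying the same arm across a geo-distributed network of medical centres.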

As mentioned above, recent studies have indicated limitations in the user interface: limited visuals hinder the dexterity of the system in dynamic 3D tasks, such as assisting a patient during rehabilitation or conducting an ultrasound scan. Figure 1 shows the schematic of the entire architecture required for this system, but this studentship targets only the development of the visual interface for 3D vision and augmented reality, supported by shared autonomy powered by AI algorithms.

The objectives of this research project are:

O1) Integration of the stereovision system with the augmented reality headset to allow the operator to monitor the working area. This includes coordinating head movement with the motion of the stereovision camera so the user can shift the field of view by moving their head.
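The head-to-camera coordination in O1 can be sketched as a mapping from headset yaw and pitch to a clamped pan/tilt command for the stereovision camera's mount. The function names and joint limits below are hypothetical assumptions for illustration, not details from the proposal.

```python
from dataclasses import dataclass

@dataclass
class PanTiltCommand:
    """A pan/tilt setpoint (degrees) for a hypothetical camera mount."""
    pan_deg: float
    tilt_deg: float

def clamp(value: float, limit: float) -> float:
    """Restrict a value to the symmetric range [-limit, +limit]."""
    return max(-limit, min(limit, value))

def head_pose_to_camera(yaw_deg: float, pitch_deg: float,
                        pan_limit: float = 90.0,
                        tilt_limit: float = 45.0) -> PanTiltCommand:
    """Map headset yaw/pitch to a camera command, respecting mount limits.

    Clamping keeps the command within the mount's (assumed) mechanical
    range even when the operator turns their head further.
    """
    return PanTiltCommand(
        pan_deg=clamp(yaw_deg, pan_limit),
        tilt_deg=clamp(pitch_deg, tilt_limit),
    )
```

A real implementation would additionally filter the headset pose to suppress jitter and rate-limit the commands, but the core idea is this direct pose-to-setpoint mapping.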

O2) Integration of the controller and tools (e.g., ultrasound probes) interfaces with the augmented reality environment and the user interface layout development.

O3) Development of augmented reality AI algorithms to support the tasks selected with the clinical partner, and integration with the interface developed in O2.

O4) System validation with our clinical partners and healthy subjects.

All Grantees

University of Sussex
