| Field | Value |
|---|---|
| Funder | Engineering and Physical Sciences Research Council |
| Recipient Organization | University of Sheffield |
| Country | United Kingdom |
| Start Date | Jan 07, 2022 |
| End Date | Jul 05, 2025 |
| Duration | 1,275 days |
| Number of Grantees | 2 |
| Roles | Student; Supervisor |
| Data Source | UKRI Gateway to Research |
| Grant ID | 2784464 |
Established methods for robotic control are inflexible and adapt poorly to new tasks.
Recently, deep neural network-based methods that learn control through interaction, known as deep reinforcement learning, have shown promise in self-learning to solve tasks. However, they require a huge number of often random interactions with the environment for each new task.
In contrast, human brains learn models of their bodies and the environment to predict and plan decisions and movements efficiently, and can adapt online. Existing model-based control schemes, by comparison, have been hampered by poorly learned models.
This project will distill and improve diverse advances in cognitively inspired model-based reinforcement learning to enable robots to self-learn new tasks and adapt quickly and flexibly to perturbations.
We will learn a multi-level model of a compliant robotic arm and its environment, and then use this model for planning and control.
This architecture will enable the robot to self-learn to reach goal states by planning at a higher, human-interpretable level on its internal model, with minimal real-world interaction, and to adapt online.
The student will benchmark the architecture on accurate reaching and block stacking, building towards industrial use cases.
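The core loop described above — learn a model of the system from interaction data, then plan on the learned model rather than the real robot — can be illustrated with a minimal sketch. The environment, dynamics, and planner below are hypothetical stand-ins (a 2-D point mass, a linear least-squares model, and random-shooting model-predictive control), not the project's actual multi-level architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy environment standing in for the robotic arm:
# state = 2-D position, action = bounded velocity command.
def step(state, action):
    return state + 0.1 * np.clip(action, -1.0, 1.0)

# 1) Learn a dynamics model from a batch of random interactions
#    (here plain linear least squares; the project proposes richer,
#    multi-level learned models).
states = rng.uniform(-1, 1, size=(500, 2))
actions = rng.uniform(-1, 1, size=(500, 2))
nexts = np.array([step(s, a) for s, a in zip(states, actions)])
X = np.hstack([states, actions])                # map [s, a] -> s'
W, *_ = np.linalg.lstsq(X, nexts, rcond=None)

def model(state, action):
    return np.hstack([state, action]) @ W

# 2) Plan on the learned model: random-shooting MPC toward a goal
#    state, requiring no further real-world interactions to plan.
def plan(state, goal, horizon=5, candidates=256):
    best_cost, best_first = np.inf, np.zeros(2)
    for _ in range(candidates):
        seq = rng.uniform(-1, 1, size=(horizon, 2))
        s = state
        for a in seq:
            s = model(s, a)                     # roll out in imagination
        cost = np.linalg.norm(s - goal)
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first                           # execute only first action

state, goal = np.array([0.0, 0.0]), np.array([0.8, -0.5])
for _ in range(20):
    state = step(state, plan(state, goal))
print(np.linalg.norm(state - goal))             # final distance to goal (small)
```

Replanning at every step is what lets such a controller adapt online: if the body or environment is perturbed, the next planning cycle starts from the perturbed state rather than a stale open-loop plan.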