| Funder | National Science Foundation (US) |
|---|---|
| Recipient Organization | Arizona State University |
| Country | United States |
| Start Date | Oct 01, 2024 |
| End Date | Sep 30, 2027 |
| Duration | 1,094 days |
| Number of Grantees | 2 |
| Roles | Principal Investigator; Co-Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2431388 |
In fully autonomous, data-driven manufacturing, the generalizability of solutions enabled by Artificial Intelligence (AI) is critical to scalability. While significant progress has been made in industrial automation, the transition from engineering requirements, which begin with the manufacturing process plan specification, to specific robotic control for manufacturing operations still requires many manually configured input parameters.
These manually configured steps limit advances in data-driven autonomous manufacturing, particularly in extreme environments such as outer space, underwater, biological, and radioactive settings. This project establishes a systematic and generalizable methodology that integrates vision-language models and robotics to fully automate manufacturing assembly operations.
The project will deliver automated extraction of process task descriptions from engineering documentation, an integrated multi-agent task planning algorithm that assigns tasks and commands to robots, and a digital-twin-guided real-time feedback evaluation system. It will integrate engineering design specifications, manufacturing process requirements, and robotic systems to improve productivity, especially in extreme environments, aligning with the US National Strategy on the development and use of Artificial Intelligence in Manufacturing.
The integration of education and research will broaden participation in manufacturing by training the next generation of engineers and researchers at the intersection of manufacturing processes and robotics systems.
This project investigates an end-to-end, generalizable task planning framework for robots in manufacturing environments. It fills the knowledge gap between abstract process engineering instructions and low-level robot control by leveraging the high-level multimodal reasoning capability of vision-language models (VLMs) and by developing a customized task planner paired with a digital twin (DT) for iterative evaluation. The project consists of three tightly integrated thrusts.
First, the framework interprets Product Manufacturing Information and process instructions to derive high-level task sequences from various input forms, such as text prompts, technical engineering drawings, 3D layouts, and structured data. This information is processed by a vision-language model, which combines computer vision and natural language processing to generate the task sequences.
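The abstract does not specify the planner's output format, but the first thrust's contract can be illustrated with a minimal sketch. Here a line-by-line parse stands in for the VLM stage (a real system would feed prompts, drawings, and 3D layouts to the model); the `Task` dataclass and `derive_task_sequence` function are hypothetical names used only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A high-level assembly task derived from process documentation."""
    action: str   # e.g. "pick", "place", "fasten"
    target: str   # part identifier from the engineering drawing
    order: int    # position in the derived task sequence

def derive_task_sequence(instructions: str) -> list:
    """Stand-in for the VLM stage: map each instruction line to a Task.

    The real system would perform multimodal reasoning over several
    input forms; this toy parser only shows the output contract
    (an ordered sequence of task primitives).
    """
    tasks = []
    for i, line in enumerate(instructions.strip().splitlines()):
        action, _, target = line.strip().partition(" ")
        tasks.append(Task(action=action.lower(), target=target, order=i))
    return tasks

# Toy process instructions standing in for extracted engineering text.
plan = derive_task_sequence("""
Pick bracket_A
Place bracket_A
Fasten bolt_3
""")
```

The point of the sketch is the interface: whatever model does the extraction, downstream planning consumes an ordered list of structured task primitives rather than free text.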
The second thrust addresses automated sub-goal planning in single or multi-robot manufacturing scenarios through a language-integrated task planner. This planner selects optimal sub-goals and assigns task primitives to robots. Finally, the project focuses on validating and correcting plans using the digital twin, which provides sensor feedback and evaluates trajectories before physical realization.
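The second thrust's assignment step can be sketched under simplifying assumptions. The project's planner is language-integrated and selects optimal sub-goals; the greedy least-loaded assignment below is a hypothetical placeholder for that logic, showing only how task primitives might be distributed across robots in a multi-robot scenario.

```python
def assign_tasks(tasks, robots):
    """Greedily assign each task primitive to the least-loaded robot.

    Hypothetical stand-in for the language-integrated task planner:
    the actual planner chooses sub-goals with VLM guidance, while this
    sketch only demonstrates the task-to-robot assignment step.
    """
    load = {r: 0 for r in robots}          # tasks assigned per robot
    schedule = {r: [] for r in robots}     # per-robot task queues
    for task in tasks:
        robot = min(robots, key=lambda r: load[r])  # least-loaded wins
        schedule[robot].append(task)
        load[robot] += 1
    return schedule

schedule = assign_tasks(
    ["pick_A", "place_A", "fasten_3", "inspect"],
    ["robot_1", "robot_2"],
)
```

Ties go to the robot listed first, so with uniform task costs the work alternates between the two robots; a real planner would weigh task durations, reachability, and tool changes rather than a simple count.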
Advanced machine learning methods will be employed to ensure the validity of instructions sent to the manufacturing environment's physical asset endpoints.
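The validate-before-dispatch loop implied by the third thrust can be sketched as follows. The `simulate` and `dispatch` callables are hypothetical: `simulate` stands in for the digital twin's trajectory evaluation (here a toy check that also proposes a corrected command), and only commands that pass simulation reach the physical asset endpoint.

```python
def execute_with_dt_check(plan, simulate, dispatch, max_retries=3):
    """Validate each command in a digital twin before dispatching it.

    `simulate(cmd)` returns (ok, corrected_cmd); invalid commands are
    re-simulated with the correction up to `max_retries` times before
    the plan is rejected outright.
    """
    executed = []
    for cmd in plan:
        for _ in range(max_retries):
            ok, cmd = simulate(cmd)
            if ok:
                dispatch(cmd)       # only validated commands go out
                executed.append(cmd)
                break
        else:
            raise RuntimeError(f"command rejected by digital twin: {cmd}")
    return executed

sent = []

def simulate(cmd):
    # Toy rule: negative joint targets are "invalid" and get corrected.
    return (cmd >= 0, abs(cmd))

executed = execute_with_dt_check([1, -2, 3], simulate, sent.append)
```

The design choice this illustrates is the ordering guarantee: sensor-feedback evaluation happens in simulation first, so the physical robots never receive a command the twin has not accepted.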
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.