| Funder | European Commission |
|---|---|
| Recipient Organization | Technische Universitaet Muenchen |
| Country | Germany |
| Start Date | Feb 01, 2025 |
| End Date | Jan 31, 2030 |
| Duration | 1,825 days |
| Number of Grantees | 1 |
| Roles | Coordinator |
| Data Source | European Commission |
| Grant ID | 101171131 |
In recent years, we have seen a revolution in learning methods that generate highly realistic images, such as generative adversarial networks, autoregressive methods, and diffusion models (e.g., DALL-E, Stable Diffusion, Runway).
Unfortunately, the vast majority of these methods are tailored to the 2D image domain, while their 3D counterparts (3D models that fuel computer graphics applications and enable visually immersive experiences) remain in their infancy. In this proposal, we tackle the challenge of automatically generating 3D content for virtual worlds.
Such generated 3D content offers versatility, with flexible rendering from arbitrary viewpoints that matches the visual fidelity of the real world.
We focus on 3D content creation for visually immersive experiences, reaching a much wider audience across myriad applications such as video games, movies, AR/VR scenarios, CAD modeling, architectural & industrial design, and medical applications.
We believe that the key to automated, high-fidelity content creation lies in developing new machine learning techniques that transform 3D content generation.

(A) We will develop 3D Generative Models that output 3D polygon meshes along with their surface textures and material properties, emphasizing the generation of 3D content that can be directly consumed by modern graphics pipelines.

(B) To train our 3D generative models to reflect the complexity and diversity of real data, we will devise methods for Supervision from Images and Videos. The key challenge here is that such collections of images and videos are by nature incomplete projections of the underlying 3D world, thus requiring learning paradigms that generalize across partial observations.

(C) We will research techniques that provide Control and Editability through Conditional Generation.
In particular, we will focus on conditional input from both novice (e.g., text-based editing) and expert (e.g., based on existing authoring tools) users alike.