Status Completed
Grant Type Standard Grant

I-Corps: Translation Potential of a Pseudorandom Renderer for Creating Visualizations

$500K USD

Funder National Science Foundation (US)
Recipient Organization University of California-Berkeley
Country United States
Start Date Dec 01, 2024
End Date Nov 30, 2025
Duration 364 days
Number of Grantees 1
Roles Principal Investigator
Data Source National Science Foundation (US)
Grant ID 2435575
Grant Description

The broader impact/commercial potential of this I-Corps project is based on the development of a software tool that integrates semantic information into three-dimensional (3D) models, significantly lowering the barriers to creating visualizations in the architecture, engineering, and construction industries. This tool allows professionals to generate high-quality, contextually relevant visualizations more efficiently, improving the design process by enabling quicker iterations and more accurate project outcomes.

By automating and simplifying the generation of these visualizations, the tool reduces the time and cost associated with traditional methods, making advanced design capabilities accessible to a broader range of professionals, including smaller firms and independent designers. Moreover, the tool's enhanced accessibility and intuitive design encourage greater engagement from non-experts, such as community members, in the design process.

This increased participation can lead to projects that better reflect the needs and desires of the communities they serve, ultimately resulting in more successful, inclusive, and sustainable designs.

This I-Corps project utilizes experiential learning coupled with first-hand investigation of the industry ecosystem to assess the translation potential of the proposed technology. It is based on the prior development of a method for integrating semantic information and text prompts into three-dimensional (3D) models to enhance the visualization and design process.

The core innovation lies in the ability to assign semantic data at multiple levels of geometric elements—ranging from textures to entire objects—and to utilize these assignments to inform and refine conditional image synthesis. This approach leverages advancements in machine learning and artificial intelligence (AI) to create visual outputs that are more accurate and more contextually relevant to the specific design scenario.

By combining map-based and text-based information, the technology enables a more sophisticated and user-friendly workflow, allowing designers to generate complex visualizations with minimal manual input. The research underpinning this project has demonstrated the feasibility of this approach, showing that it can enhance the efficiency and effectiveness of digital design tools.
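The combination of map-based and text-based information might look like the following sketch. The `synthesize` backend is a stand-in for any conditional generator (for example, a diffusion model with map conditioning); the function name, the dict layout, and the prompt-merging rule are all assumptions, not the project's actual pipeline.

```python
# Illustrative sketch: pair a rasterized semantic label map with a text
# prompt to form a joint conditioning input for a generic image-synthesis
# backend. The structure shown here is assumed, not documented.

def build_conditioning(label_map: list[list[str]], prompt: str) -> dict:
    """Merge spatial guidance (where each class goes) with textual
    guidance (what the scene is) into one conditioning payload."""
    labels = sorted({lab for row in label_map for lab in row})
    return {
        "label_map": label_map,                                # spatial: per-cell class
        "prompt": f"{prompt}, featuring {', '.join(labels)}",  # textual: scene description
    }

# A 2x2 toy label map: sky above, brick wall below.
toy_map = [["sky", "sky"], ["brick wall", "brick wall"]]
cond = build_conditioning(toy_map, "street-level photo of a row house")
print(cond["prompt"])
# street-level photo of a row house, featuring brick wall, sky
```

This reflects the workflow the abstract describes, where designers supply minimal input (a labeled model plus a prompt) and the system assembles the rest of the conditioning signal.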

This innovation has the potential to impact the way architects and engineers approach the visualization phase of design, leading to the development of more advanced automated design systems that can adapt to a wide range of scenarios.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

University of California-Berkeley
