| Funder | European Commission |
|---|---|
| Recipient Organization | Ceske Vysoke Uceni Technicke V Praze |
| Country | Czech Republic |
| Start Date | Jan 01, 2025 |
| End Date | Dec 31, 2026 |
| Duration | 729 days |
| Number of Grantees | 2 |
| Roles | Coordinator; Associated Partner |
| Data Source | European Commission |
| Grant ID | 101154126 |
Multimedia content is indispensable in modern society, making effective content management essential. A critical aspect of this is assessing the similarity between two multimedia items, such as images, videos, or documents.
LUSt's mission is to pioneer a universal similarity function capable of precisely measuring similarity across a broad spectrum of multimedia domains and tasks. Diverging from traditional problem-specific approaches prevalent in current literature, LUSt adopts a novel strategy.
LUSt plans to break down multimedia items into their constituent parts, including image regions, video frames, and text sentences.
Subsequently, a foundational model will be trained on input data comprising part similarities across various multimedia items. This strategic choice yields a universal input space with multiple advantages.
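To make the idea concrete, here is a minimal sketch of such a part-similarity input space. This is our own illustration, not LUSt's actual pipeline: items are decomposed into parts (image regions, video frames, sentences), each part is represented by a toy feature vector, and the model's input is the matrix of pairwise part similarities rather than the raw features themselves.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def part_similarity_matrix(parts_a, parts_b):
    """S[i, j] = similarity of part i of item A to part j of item B.

    This matrix, not the raw part features, forms the universal input
    space: the same representation regardless of the source modality.
    """
    return np.array([[cosine(p, q) for q in parts_b] for p in parts_a])

# Two toy items: item A has 3 parts, item B has 2 parts (4-D features).
rng = np.random.default_rng(0)
item_a = rng.normal(size=(3, 4))
item_b = rng.normal(size=(2, 4))

S = part_similarity_matrix(item_a, item_b)
print(S.shape)  # (3, 2) -- one similarity score per pair of parts
```

Because the input is always a matrix of similarity scores, examples from different domains (or synthetically generated pairs) share the same format and can be mixed freely in one training set.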
Firstly, it promotes seamless collaboration across different domains and tasks, facilitating joint training and mutual enhancement, which will be further enriched through multi-task learning techniques.
Secondly, it streamlines the integration of synthetic data during training, a key ingredient for large-scale training of a foundational model.
The model architecture is grounded in transformer-based deep learning modules and will be fortified by pioneering positional encodings rooted in kernel methods.
These positional encodings empower us to effectively manage the differing part topologies encountered across diverse domains -- a formidable challenge in itself. The work program commences by focusing on a single domain and task but is thoughtfully designed for extensibility.
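One way such kernel-based positional encodings could work is sketched below. This is purely our illustration under stated assumptions, not LUSt's design: part positions are arbitrary coordinates (1-D for sentences in a document, 2-D for image-region centers), and an RBF kernel over those coordinates yields a pairwise bias that could, for example, be added to transformer attention scores. The same code handles both topologies, which is what makes the kernel view attractive for differing part layouts.

```python
import numpy as np

def rbf_positional_bias(positions, length_scale=1.0):
    """Pairwise RBF kernel k(p_i, p_j) = exp(-||p_i - p_j||^2 / (2 l^2)).

    `positions` is an (n_parts, dim) array of part coordinates; the
    returned (n_parts, n_parts) matrix encodes relative position and
    works for any coordinate dimensionality.
    """
    pos = np.asarray(positions, dtype=float)
    diff = pos[:, None, :] - pos[None, :, :]          # pairwise offsets
    sq_dist = np.sum(diff ** 2, axis=-1)              # squared distances
    return np.exp(-sq_dist / (2.0 * length_scale ** 2))

# 1-D topology: sentence indices within a document.
bias_text = rbf_positional_bias([[0.0], [1.0], [2.0]])

# 2-D topology: centers of image regions -- same function, different geometry.
bias_image = rbf_positional_bias([[0.1, 0.2], [0.8, 0.9], [0.5, 0.5]])

print(bias_text.shape, bias_image.shape)  # (3, 3) (3, 3)
```

The kernel matrix is symmetric with ones on the diagonal (each part is maximally similar to its own position), and the `length_scale` parameter controls how quickly positional influence decays with distance.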
The ultimate goal is to create a foundational model capable of accommodating all modalities -- visual, audio, text -- and supporting a broad range of similarity types, including uni-modal, cross-modal, and multi-modal scenarios.
LUSt's commitment to universality will be thoroughly validated through comprehensive benchmarking, spanning numerous tasks and domains.
Ceske Vysoke Uceni Technicke V Praze; Universiteit Van Amsterdam