Document details
Identifier

oai:arXiv.org:2405.16728

Subject
Computer Science - Computer Vision and Pattern Recognition; Computer Science - Artificial Intelligence; Computer Science - Machine Learning; Computer Science - Multimedia
Author
Yu, Lijun
Category

Computer Science

Year

2024

Date indexed

29/05/2024

Keywords
latent, modalities, generation, models, visual, computer science

Abstract

Advancements in language foundation models have primarily fueled the recent surge in artificial intelligence.

In contrast, generative learning of non-textual modalities, especially videos, significantly trails behind language modeling.

This thesis chronicles our endeavor to build multi-task models for generating videos and other modalities under diverse conditions, as well as for understanding and compression applications.

Given the high dimensionality of visual data, we pursue concise and accurate latent representations.

Our video-native spatial-temporal tokenizers preserve high reconstruction fidelity.

We unveil a novel approach to mapping bidirectionally between visual observations and interpretable lexical terms.

Furthermore, our scalable visual token representation proves beneficial across generation, compression, and understanding tasks.

These achievements mark the first instances of language models surpassing diffusion models in visual synthesis and of a video tokenizer outperforming industry-standard codecs.

Within these multi-modal latent spaces, we study the design of multi-task generative models.

Our masked multi-task transformer excels in the quality, efficiency, and flexibility of video generation.

We enable a frozen language model, trained solely on text, to generate visual content.

Finally, we build a scalable generative multi-modal transformer trained from scratch, enabling the generation of videos with high-fidelity motion and corresponding audio under diverse conditions.

Throughout this work, we have shown the effectiveness of integrating multiple tasks, crafting high-fidelity latent representations, and generating multiple modalities.

This work suggests intriguing potential for future exploration in generating non-textual data and enabling real-time, interactive experiences across various media forms.

Comment: PhD thesis

Yu, Lijun (2024). Towards Multi-Task Multi-Modal Models: A Video Generative Perspective. PhD thesis, arXiv:2405.16728.
