Document detail
ID

oai:arXiv.org:2408.00998

Topic
Computer Science - Computer Vision...; Computer Science - Artificial Inte...
Author
Gao, Xiang; Liu, Jiaying
Category

Computer Science

Year

2024

Listing date

8/14/2024

Keywords
guiding text-to-image text-driven computer model band plug-and-play t2i image frequency reference
Abstract

Large-scale text-to-image diffusion models have been a revolutionary milestone in the evolution of generative AI and multimodal technology, enabling impressive image generation from natural-language text prompts.

However, such models lack controllability, which restricts their practical applicability for real-life content creation.

Thus, research attention has focused on leveraging a reference image to control text-to-image synthesis, which can also be regarded as manipulating (or editing) a reference image per a text prompt, namely, text-driven image-to-image translation.

This paper contributes a novel, concise, and efficient approach that adapts a pre-trained large-scale text-to-image (T2I) diffusion model to the image-to-image (I2I) paradigm in a plug-and-play manner, realizing high-quality and versatile text-driven I2I translation without any model training, model fine-tuning, or online optimization.

To guide T2I generation with a reference image, we propose to decompose the diverse guiding factors into different frequency bands of diffusion features in the DCT spectral space, and accordingly devise a novel frequency band substitution layer that realizes dynamic control of the reference image over the T2I generation result in a plug-and-play manner.

We demonstrate that our method allows flexible control over both guiding factor and guiding intensity of the reference image simply by tuning the type and bandwidth of the substituted frequency band, respectively.
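To make the core idea concrete, here is a minimal NumPy sketch of frequency band substitution on a 2D feature map: transform both the generated and reference features to the DCT spectral space, replace the coefficients of a chosen band, and invert the transform. The function name, the band definitions (diagonal `u+v` index), and the `band`/`bandwidth` parameters are illustrative assumptions, not the paper's exact layer.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix of size n x n
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    M = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    M[0, :] = np.sqrt(1.0 / n)
    return M

def dct2(a):
    # 2D DCT via separable matrix products
    return dct_matrix(a.shape[0]) @ a @ dct_matrix(a.shape[1]).T

def idct2(A):
    # Inverse 2D DCT (transpose of the orthonormal basis)
    return dct_matrix(A.shape[0]).T @ A @ dct_matrix(A.shape[1])

def frequency_band_substitution(gen_feat, ref_feat, band="low", bandwidth=8):
    """Replace one DCT frequency band of the generated feature map with
    the corresponding band of the reference feature map (illustrative)."""
    G, R = dct2(gen_feat), dct2(ref_feat)
    h, w = G.shape
    # uv[i, j] = i + j indexes anti-diagonals from low to high frequency
    uv = np.add.outer(np.arange(h), np.arange(w))
    if band == "low":
        mask = uv < bandwidth
    elif band == "high":
        mask = uv >= (h + w - 2) - bandwidth
    else:  # "mid": everything between the lowest and highest bands
        mask = (uv >= bandwidth) & (uv < (h + w - 2) - bandwidth)
    G[mask] = R[mask]          # substitute the selected band
    return idct2(G)            # back to the feature (spatial) domain
```

In this sketch, substituting the low band transfers coarse structure from the reference while leaving fine detail to the T2I model, and widening `bandwidth` strengthens the reference's influence, mirroring the control knobs described above.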

Extensive qualitative and quantitative experiments verify the superiority of our approach over related methods in I2I translation visual quality, versatility, and controllability.

The code is publicly available at: https://github.com/XiangGao1102/FBSDiff.

Comment: Accepted conference paper at ACM MM 2024

Gao, Xiang; Liu, Jiaying, 2024, FBSDiff: Plug-and-Play Frequency Band Substitution of Diffusion Features for Highly Controllable Text-Driven Image Translation

