oai:arXiv.org:2403.15789
Computer Science
2024
3/27/2024
We introduce in-context matting, a novel task setting of image matting.
Given a reference image of a certain foreground and guided priors such as points, scribbles, and masks, in-context matting enables automatic alpha estimation on a batch of target images of the same foreground category, without additional auxiliary input.
This setting marries the strong performance of auxiliary-input-based matting with the ease of use of automatic matting, striking a good trade-off between customization and automation.
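To make the setting concrete, the following is a minimal, self-contained sketch of the interface it implies; the class and method names are hypothetical and do not reflect the released code.

from typing import List
import numpy as np

class InContextMatter:
    """One annotated reference in, alpha mattes for a batch of targets out."""

    def estimate_alpha(
        self,
        reference_image: np.ndarray,      # H x W x 3 reference photo
        reference_prior: np.ndarray,      # H x W points/scribbles/mask raster
        target_images: List[np.ndarray],  # same-category targets, no extra input
    ) -> List[np.ndarray]:
        """Return one H x W alpha matte per target image."""
        raise NotImplementedError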
To overcome the key challenge of accurate foreground matching, we introduce IconMatting, an in-context matting model built upon a pre-trained text-to-image diffusion model.
Guided by inter- and intra-similarity matching, IconMatting can make full use of the reference context to generate accurate alpha mattes for the target images.
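The abstract does not detail the matching; as a rough illustration only, a minimal sketch of inter- and intra-similarity matching over frozen per-pixel features might look as follows (the tensor layout, the softmax temperature, and all names are assumptions, not the authors' implementation).

import torch
import torch.nn.functional as F

def match_reference(ref_feat: torch.Tensor,   # C x H x W reference features
                    tgt_feat: torch.Tensor,   # C x H x W target features
                    ref_mask: torch.Tensor):  # H x W binary prior on the reference
    """Score each target location by similarity to the reference foreground."""
    C, H, W = tgt_feat.shape
    ref = F.normalize(ref_feat.flatten(1), dim=0)   # C x (H*W), unit feature columns
    tgt = F.normalize(tgt_feat.flatten(1), dim=0)   # C x (H*W)

    # Inter-similarity: cosine similarity between every target location and
    # the reference locations selected by the prior mask.
    fg = ref[:, ref_mask.flatten().bool()]          # C x N_fg
    coarse = (tgt.t() @ fg).max(dim=1).values       # (H*W,) best match per location

    # Intra-similarity: smooth the coarse map with the target's own
    # self-similarity so coherent regions receive consistent scores.
    self_sim = F.softmax((tgt.t() @ tgt) / 0.1, dim=1)  # (H*W) x (H*W)
    refined = self_sim @ coarse                     # (H*W,)
    return refined.view(H, W)

In the actual model, such similarity cues would presumably guide alpha prediction rather than serve directly as the final matte.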
To benchmark the task, we also introduce a novel testing dataset, ICM-57, covering 57 groups of real-world images.
Quantitative and qualitative results on the ICM-57 testing set show that IconMatting rivals the accuracy of trimap-based matting while retaining an automation level akin to that of automatic matting.
Code is available at https://github.com/tiny-smart/in-context-matting
Comment: Accepted to CVPR 2024.
Guo, He; Ye, Zixuan; Cao, Zhiguo; Lu, Hao. 2024. In-Context Matting.