Document detail
ID

oai:arXiv.org:2403.12455

Topic
Computer Science - Computer Vision...
Author
Zhu, Wenqi; Cao, Jiale; Xie, Jin; Yang, Shuangming; Pang, Yanwei
Category

Computer Science

Year

2024

Listing date

10/16/2024

Keywords
set, query matching, weighted, categories, mask classification, CLIP, open-vocabulary, scores, instance
Abstract

Open-vocabulary video instance segmentation strives to segment and track instances belonging to an open set of categories in videos.

The vision-language model Contrastive Language-Image Pre-training (CLIP) has shown robust zero-shot classification ability in image-level open-vocabulary tasks.
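For context, CLIP's zero-shot classification works by comparing an image embedding against text embeddings of candidate labels via cosine similarity. A minimal sketch using the openai/CLIP package (the image file and prompt strings are illustrative, not from the paper):

    import clip
    import torch
    from PIL import Image

    # Load a frozen CLIP model and its preprocessing transform.
    model, preprocess = clip.load("ViT-B/32", device="cpu")
    image = preprocess(Image.open("frame.jpg")).unsqueeze(0)   # hypothetical input frame
    text = clip.tokenize(["a photo of a cat", "a photo of a dog"])
    with torch.no_grad():
        # logits_per_image holds image-text similarity scores.
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1)               # zero-shot class probabilities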

In this paper, we propose a simple encoder-decoder network, called CLIP-VIS, to adapt CLIP for open-vocabulary video instance segmentation.

Our CLIP-VIS adopts a frozen CLIP model and introduces three modules: class-agnostic mask generation, temporal topK-enhanced matching, and weighted open-vocabulary classification.

Given a set of initial queries, class-agnostic mask generation introduces a pixel decoder and a transformer decoder on top of the frozen CLIP image encoder to predict query masks along with their corresponding object scores and mask IoU scores.
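A minimal sketch of how such a mask-generation head could be wired up; the module names, dimensions, and head structure are assumptions for illustration, not the released code:

    import torch
    import torch.nn as nn

    class ClassAgnosticMaskHead(nn.Module):
        def __init__(self, dim=256, num_queries=100):
            super().__init__()
            self.queries = nn.Embedding(num_queries, dim)          # initial learnable queries
            layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
            self.decoder = nn.TransformerDecoder(layer, num_layers=6)
            self.mask_embed = nn.Linear(dim, dim)                  # per-query mask embedding
            self.obj_score = nn.Linear(dim, 1)                     # class-agnostic objectness
            self.iou_score = nn.Linear(dim, 1)                     # predicted mask IoU

        def forward(self, clip_feats, pixel_feats):
            # clip_feats:  (B, HW, dim) tokens from the frozen CLIP image encoder
            # pixel_feats: (B, dim, H, W) high-resolution features from the pixel decoder
            B = clip_feats.size(0)
            q = self.decoder(self.queries.weight.unsqueeze(0).expand(B, -1, -1), clip_feats)
            # Dot each query's mask embedding with per-pixel features to get mask logits.
            masks = torch.einsum("bqc,bchw->bqhw", self.mask_embed(q), pixel_feats)
            return masks, self.obj_score(q).sigmoid(), self.iou_score(q).sigmoid()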

Then, temporal topK-enhanced matching performs query matching across frames using the K best-matched frames.
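A hedged sketch of the topK idea, assuming per-frame query embeddings are compared by cosine similarity and the assignment is solved with the Hungarian algorithm; K and all names are illustrative:

    import torch
    from scipy.optimize import linear_sum_assignment

    def topk_enhanced_match(past_embeds, cur_embed, k=5):
        # past_embeds: (T, Q, C) query embeddings from T previous frames
        # cur_embed:   (Q, C) query embeddings of the current frame
        past = torch.nn.functional.normalize(past_embeds, dim=-1)
        cur = torch.nn.functional.normalize(cur_embed, dim=-1)
        sim = torch.einsum("tqc,pc->tqp", past, cur)        # (T, Q, Q) per-frame similarity
        k = min(k, sim.size(0))
        # Average similarity over the K frames where each pair matches best.
        sim = sim.topk(k, dim=0).values.mean(dim=0)
        # Maximize total similarity by minimizing its negation.
        row, col = linear_sum_assignment((-sim).detach().cpu().numpy())
        return row, col                                      # past query i <-> current query col[i]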

Finally, weighted open-vocabulary classification first employs mask pooling to generate query visual features from the frozen CLIP image encoder, and then performs weighted classification using the object scores and mask IoU scores.
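A sketch of one plausible formulation, assuming the CLIP text-image similarity is weighted multiplicatively by the object and IoU scores; the paper's exact weighting may differ:

    import torch

    def weighted_open_vocab_classify(clip_feats, masks, text_embeds, obj_scores, iou_scores):
        # clip_feats:  (B, C, H, W) frozen CLIP image-encoder feature map
        # masks:       (B, Q, H, W) predicted query mask logits
        # text_embeds: (N, C) CLIP text embeddings of the category names
        # obj_scores, iou_scores: (B, Q) per-query confidence predictions
        m = masks.sigmoid()
        # Mask pooling: average CLIP features inside each predicted mask.
        pooled = torch.einsum("bqhw,bchw->bqc", m, clip_feats)
        pooled = pooled / m.sum(dim=(2, 3)).clamp(min=1e-6).unsqueeze(-1)
        pooled = torch.nn.functional.normalize(pooled, dim=-1)
        text = torch.nn.functional.normalize(text_embeds, dim=-1)
        logits = torch.einsum("bqc,nc->bqn", pooled, text)   # cosine similarity per category
        probs = logits.softmax(dim=-1)
        # Weight classification by objectness and predicted mask quality.
        return probs * (obj_scores * iou_scores).unsqueeze(-1)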

Our CLIP-VIS does not require annotations of instance categories or identities.

Experiments on various video instance segmentation datasets demonstrate the effectiveness of the proposed method, especially on novel categories.

When using ConvNeXt-B as the backbone, our CLIP-VIS achieves AP and APn scores of 32.2% and 40.2% on the validation set of the LV-VIS dataset, outperforming OV2Seg by 11.1% and 23.9%, respectively.

We will release the source code and models at https://github.com/zwq456/CLIP-VIS.git.

Comment: Accepted by IEEE TCSVT

Zhu, Wenqi; Cao, Jiale; Xie, Jin; Yang, Shuangming; Pang, Yanwei, 2024, CLIP-VIS: Adapting CLIP for Open-Vocabulary Video Instance Segmentation

