oai:arXiv.org:2405.16234
Computer Science
2024
10/2/2024
This paper explores the capabilities of Vision Language Models (VLMs) in spreadsheet comprehension.
We propose three self-supervised challenges with corresponding evaluation metrics to comprehensively evaluate VLMs on Optical Character Recognition (OCR), spatial perception, and visual format recognition.
Additionally, we utilize the spreadsheet table detection task to assess the overall performance of VLMs by integrating these challenges.
To probe VLMs more finely, we propose three spreadsheet-to-image settings: column width adjustment, style change, and address augmentation.
We design prompt variants to address the above tasks under these different settings.
Notably, to leverage VLMs' strength in understanding text rather than two-dimensional positioning, we propose decoding the cell values on the four boundaries of the table for spreadsheet boundary detection.
Our findings reveal that VLMs demonstrate promising OCR capabilities but produce unsatisfactory results due to cell omission and misalignment, and they notably exhibit insufficient spatial and format recognition skills. These results motivate future work to enhance VLMs' spreadsheet data comprehension using our methods to generate extensive spreadsheet-image pairs in various settings.
Xia, Shiyu; Xiong, Junyu; Dong, Haoyu; Zhao, Jianbo; Tian, Yuzhang; Zhou, Mengyu; He, Yeye; Han, Shi; Zhang, Dongmei, 2024, Vision Language Models for Spreadsheet Understanding: Challenges and Opportunities