Document detail
ID

oai:arXiv.org:2407.04903

Topic
Computer Science - Computation and... Computer Science - Artificial Inte... Computer Science - Computer Vision...
Author
Li, Zekun; Yang, Xianjun; Choi, Kyuri; Zhu, Wanrong; Hsieh, Ryan; Kim, HyeonJung; Lim, Jin Hyuk; Ji, Sungyoung; Lee, Byungju; Yan, Xifeng; Petzold, Linda Ruth; Wilson, Stephen D.; Lim, Woosang; Wang, William Yang
Category

Computer Science

Year

2024

Listing date

10/16/2024

Keywords
data benchmarks captioning models computer science
Abstract

The rapid development of Multimodal Large Language Models (MLLMs) is making AI-driven scientific assistants increasingly feasible, with interpreting scientific figures being a crucial task.

However, existing datasets and benchmarks focus mainly on basic charts and limited science subjects, lacking comprehensive evaluations.

To address this, we curated a multimodal, multidisciplinary dataset from peer-reviewed, open-access Nature Communications articles, spanning 72 scientific disciplines.

This dataset includes figures such as schematic diagrams, simulated images, macroscopic/microscopic photos, and experimental visualizations (e.g., western blots), which often require graduate-level, discipline-specific expertise to interpret.

We developed benchmarks for scientific figure captioning and multiple-choice questions, evaluating six proprietary and over ten open-source models across varied settings.

The results highlight the high difficulty of these tasks and the significant performance gap among models.

While many open-source models performed at chance level on the multiple-choice task, some matched the performance of proprietary models.

However, the gap was more pronounced in the captioning task.

Our dataset also provides a valuable resource for training.

Fine-tuning the Qwen2-VL-2B model with our task-specific multimodal training data improved its multiple-choice accuracy to a level comparable to GPT-4o, though captioning remains challenging.

Continuous pre-training of MLLMs using our interleaved article and figure data enhanced their material generation capabilities, demonstrating potential for integrating scientific knowledge.

The dataset and benchmarks will be released to support further research.

Comment: Code and data are available at https://github.com/Leezekun/MMSci

Li, Zekun; Yang, Xianjun; Choi, Kyuri; Zhu, Wanrong; Hsieh, Ryan; Kim, HyeonJung; Lim, Jin Hyuk; Ji, Sungyoung; Lee, Byungju; Yan, Xifeng; Petzold, Linda Ruth; Wilson, Stephen D.; Lim, Woosang; Wang, William Yang, 2024, MMSci: A Dataset for Graduate-Level Multi-Discipline Multimodal Scientific Understanding

