Document detail
ID

oai:arXiv.org:2409.14121

Topic
Computer Science - Software Engineering; Computer Science - Machine Learning; D.2; D.3
Author
Zhang, Qingyu; Su, Liangcai; Ye, Kai; Qian, Chenxiong
Category

Computer Science

Year

2024

Listing date

9/25/2024

Keywords
resolution, performance, dataset, LLMs, conflicts
Abstract

Resolving the conflicts that arise when merging different software versions is a challenging task.

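For context, the raw material such tools operate on is a textual conflict region left in the file by the merge driver. The following is a minimal illustrative sketch (not taken from the paper or its dataset) of what a conflicted hunk looks like and how the two competing sides might be pulled out of it:

```python
# Minimal illustration: the textual form of a merge conflict. Git marks the
# two competing versions with <<<<<<<, =======, and >>>>>>> lines.
CONFLICTED_FILE = """\
def greet(name):
<<<<<<< HEAD
    return f"Hello, {name}!"
=======
    return "Hi, " + name
>>>>>>> feature-branch
"""

def extract_conflicts(text: str) -> list[tuple[str, str]]:
    """Return (ours, theirs) pairs for each conflict region in text."""
    conflicts = []
    lines = text.splitlines()
    i = 0
    while i < len(lines):
        if lines[i].startswith("<<<<<<<"):
            sep = next(j for j in range(i + 1, len(lines)) if lines[j].startswith("======="))
            end = next(j for j in range(sep + 1, len(lines)) if lines[j].startswith(">>>>>>>"))
            conflicts.append(("\n".join(lines[i + 1:sep]), "\n".join(lines[sep + 1:end])))
            i = end
        i += 1
    return conflicts

print(extract_conflicts(CONFLICTED_FILE))
```
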
To reduce the overhead of manual merging, researchers have developed various program-analysis-based tools, which, however, only solve specific types of conflicts and therefore have a limited scope of application.

With the development of language models, researchers have begun to treat conflicting code as plain text, an approach that in principle can address almost all types of conflicts.

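As a hedged sketch of this text-based formulation (the prompt wording and the call_llm stand-in are assumptions for illustration, not the paper's actual setup), the conflict region is simply wrapped in a plain-text prompt and handed to a model:

```python
# Hypothetical sketch: treat the two sides of a conflict as text and ask a
# language model for a resolution. `call_llm` is a placeholder for whatever
# model API an evaluator actually uses (Callable[[str], str]).
def build_resolution_prompt(ours: str, theirs: str, context: str = "") -> str:
    return (
        "Resolve the following merge conflict and return only the merged code.\n"
        f"Surrounding context:\n{context}\n"
        f"Version A (ours):\n{ours}\n"
        f"Version B (theirs):\n{theirs}\n"
    )

def resolve_with_llm(ours: str, theirs: str, call_llm) -> str:
    # The model's completion is taken as the proposed resolution.
    return call_llm(build_resolution_prompt(ours, theirs))
```
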
However, the absence of effective conflict difficulty grading methods hinders a comprehensive evaluation of large language models (LLMs), making it difficult to gain a deeper understanding of their limitations.

Furthermore, there is a notable lack of large-scale open benchmarks for evaluating the performance of LLMs in automatic conflict resolution.

To address these issues, we introduce ConGra, a CONflict-GRAded benchmarking scheme designed to evaluate the performance of software merging tools under conflict scenarios of varying complexity.

We propose a novel approach that classifies conflicts based on code operations and use it to build a large-scale evaluation dataset comprising 44,948 conflicts from 34 real-world projects.

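Purely as an illustration of what operation-based classification could look like (a hypothetical toy scheme, not ConGra's actual grading method), one could label a conflict by the line-level edit operations each side applied to the common ancestor:

```python
import difflib

# Toy, illustrative classifier: describe a conflict by the kinds of
# line-level operations ('insert', 'delete', 'replace') each side applied
# to the common ancestor version.
def operation_profile(base: str, side: str) -> set[str]:
    matcher = difflib.SequenceMatcher(None, base.splitlines(), side.splitlines())
    return {tag for tag, *_ in matcher.get_opcodes() if tag != "equal"}

def classify_conflict(base: str, ours: str, theirs: str) -> str:
    ops = operation_profile(base, ours) | operation_profile(base, theirs)
    return "+".join(sorted(ops)) or "identical"

print(classify_conflict("x = 1\n", "x = 2\n", "x = 1\ny = 3\n"))  # "insert+replace"
```
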
Using this dataset, we assess the performance of multiple state-of-the-art general-purpose LLMs and code LLMs on conflict resolution tasks, ultimately uncovering two counterintuitive yet insightful phenomena.

ConGra will be released at https://github.com/HKU-System-Security-Lab/ConGra.

Zhang, Qingyu; Su, Liangcai; Ye, Kai; Qian, Chenxiong (2024). ConGra: Benchmarking Automatic Conflict Resolution.
