Document details
IDENTIFICATION

oai:arXiv.org:2410.09414

Subject
Computer Science - Software Engine...
Authors
Zhong, Zhiyuan; Wang, Sinan; Wang, Hailong; Wen, Shaojin; Guan, Hao; Tao, Yida; Liu, Yepang
Category

Computer Science

Year

2024

Indexing date

16/10/2024

Keywords
jsontestgen generation json tests data bugs libraries unit
Metrics

Abstract

Data-serialization libraries are essential tools in software development, responsible for converting between programmable data structures and data persistence formats.

Among them, JSON is the most popular choice for exchanging data between different systems and programming languages, while JSON libraries serve as the programming toolkit for this task.

Despite their widespread use, bugs in JSON libraries can cause severe issues such as data inconsistencies and security vulnerabilities.

Unit test generation techniques are widely adopted to identify bugs in various libraries.

However, there is limited systematic testing effort specifically for exposing bugs within JSON libraries in industrial practice.

In this paper, we propose JSONTestGen, an approach leveraging large language models (LLMs) to generate unit tests for fastjson2, a popular open-source JSON library from Alibaba.

Pre-trained on vast corpora of open-source text and code, LLMs have demonstrated remarkable abilities in programming tasks.

Based on historical bug-triggering unit tests, we utilize LLMs to generate more diverse test cases by incorporating JSON domain-specific mutation rules.
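To illustrate the idea of JSON domain-specific mutation, here is a minimal sketch in Python. The rule names and seed values are assumptions for illustration only, not the actual JSONTestGen rules or the fastjson2 API: each rule rewrites a seed JSON value to probe parser edge cases such as deep nesting and numeric boundary values.

```python
import json
import random

def mutate_nest(value, depth=50):
    # Hypothetical rule: wrap the value in deeply nested arrays
    # to stress recursion limits in the parser.
    for _ in range(depth):
        value = [value]
    return value

def mutate_numbers(value):
    # Hypothetical rule: replace numeric leaves with boundary values
    # (64-bit overflow, float precision extremes).
    if isinstance(value, dict):
        return {k: mutate_numbers(v) for k, v in value.items()}
    if isinstance(value, list):
        return [mutate_numbers(v) for v in value]
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        return random.choice([2**63 - 1, -2**63, 1e308, 5e-324])
    return value

# Example seed (assumed): mutants must still be serializable JSON.
seed = {"id": 1, "score": 3.14, "tags": ["a", "b"]}
mutants = [mutate_nest(seed), mutate_numbers(seed)]
for m in mutants:
    json.loads(json.dumps(m))  # round-trip check on each mutant
```

In practice such rules would be applied to historical bug-triggering tests rather than hand-written seeds, letting the LLM diversify inputs while staying within valid or near-valid JSON.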

To systematically and efficiently identify potential bugs, we adopt differential testing on the results of the generated unit tests.
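The differential-testing oracle can be sketched as follows. This is a conceptual illustration, not the authors' implementation: the two parsers here are stdlib stand-ins for the multiple JSON implementations (or library versions) being compared, and any divergence in value or error behavior is flagged as a potential bug.

```python
import json

def parse_a(text):
    # Stand-in for one JSON implementation under test.
    return json.loads(text)

def parse_b(text):
    # Stand-in for a second implementation or library version.
    return json.JSONDecoder().decode(text)

def differential_test(inputs):
    """Run each input through both parsers and collect disagreements."""
    discrepancies = []
    for text in inputs:
        try:
            a = parse_a(text)
        except Exception as e:
            a = ("error", type(e).__name__)
        try:
            b = parse_b(text)
        except Exception as e:
            b = ("error", type(e).__name__)
        if a != b:  # divergence: crash vs. success, or differing values
            discrepancies.append((text, a, b))
    return discrepancies

# Usage: empty result means the implementations agreed on all inputs.
print(differential_test(['{"k": 1}', '[1, 2,]', 'true']))
```

The advantage of this oracle is that it needs no hand-written expected values: agreement between implementations substitutes for a ground-truth assertion, which matters when LLM-generated assertions can themselves be wrong.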

Our evaluation shows that JSONTestGen outperforms existing test generation tools in detecting previously unknown defects.

With JSONTestGen, we found 34 real bugs in fastjson2, 30 of which have already been fixed, including 12 non-crashing bugs.

While manual inspection reveals that LLM-generated tests can be erroneous, particularly with self-contradictory assertions, we demonstrate that LLMs have the potential for classifying false-positive test failures.

This suggests a promising direction for improved test oracle automation in the future.

Zhong, Zhiyuan; Wang, Sinan; Wang, Hailong; Wen, Shaojin; Guan, Hao; Tao, Yida; Liu, Yepang, 2024, Advancing Bug Detection in Fastjson2 with Large Language Models Driven Unit Test Generation
