oai:arXiv.org:2407.04078
Computer Science
2024
2024-07-24
Large language models (LLMs) have made impressive progress in handling simple math problems, yet they still struggle with more challenging and complex mathematical tasks.
In this paper, we introduce a series of LLMs that employ Decomposition of thought with code assistance and self-correction for mathematical reasoning, dubbed DotaMath.
DotaMath models tackle complex mathematical tasks by decomposing them into simpler logical subtasks, leveraging code to solve these subtasks, obtaining fine-grained feedback from the code interpreter, and engaging in self-reflection and correction.
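To make this loop concrete, below is a minimal Python sketch of a decompose-solve-correct workflow in the style described above. It is illustrative only: `llm_generate` is a hypothetical placeholder for a model call, and the prompts and control flow are assumptions, not the authors' actual pipeline.

```python
import subprocess
import sys
import tempfile


def llm_generate(prompt: str) -> str:
    """Hypothetical placeholder for the model call (e.g., a DotaMath-style LLM);
    plug in an actual client to make this runnable end to end."""
    raise NotImplementedError("supply your own model client")


def run_code(code: str, timeout: int = 10) -> tuple[bool, str]:
    """Execute generated Python in a subprocess and return (ok, output),
    giving the loop fine-grained feedback from the code interpreter."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=timeout
        )
    except subprocess.TimeoutExpired:
        return False, "timeout"
    ok = proc.returncode == 0
    return ok, proc.stdout if ok else proc.stderr


def solve(question: str, max_corrections: int = 2) -> str:
    """Decompose the question into subtasks, solve each with generated code,
    and self-correct when the interpreter reports an error."""
    # 1) Decomposition of thought: ask for simpler logical subtasks.
    subtasks = llm_generate(
        f"Break this problem into simpler subtasks, one per line:\n{question}"
    ).splitlines()

    context = ""
    for subtask in subtasks:
        # 2) Code assistance: write Python for the current subtask.
        code = llm_generate(
            f"Question: {question}\nSolved so far:\n{context}\n"
            f"Write Python that prints the result of: {subtask}"
        )
        ok, feedback = run_code(code)

        # 3) Self-correction: revise the code using interpreter feedback.
        attempts = 0
        while not ok and attempts < max_corrections:
            code = llm_generate(
                f"The code below failed with:\n{feedback}\n"
                f"Fix it so that it solves: {subtask}\n```python\n{code}\n```"
            )
            ok, feedback = run_code(code)
            attempts += 1

        context += f"{subtask} -> {feedback.strip()}\n"

    # 4) Compose the final answer from accumulated subtask results.
    return llm_generate(
        f"Question: {question}\nSubtask results:\n{context}\nFinal answer:"
    )
```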
By annotating diverse interactive tool-use trajectories and employing query evolution on GSM8K and MATH datasets, we generate an instruction fine-tuning dataset called DotaMathQA with 574K query-response pairs.
We train a series of base LLMs using imitation learning on DotaMathQA, resulting in DotaMath models that achieve remarkable performance compared to open-source LLMs across various in-domain and out-of-domain benchmarks.
Notably, DotaMath-deepseek-7B achieves an outstanding 64.8% on the competition-level MATH dataset and 86.7% on GSM8K.
Moreover, DotaMath-deepseek-7B remains strongly competitive across a range of in-domain and out-of-domain benchmarks (Avg. 80.1%).
Looking forward, we anticipate that the DotaMath paradigm will open new pathways for addressing intricate mathematical problems.
Our code is publicly available at https://github.com/ChengpengLi1003/DotaMath.
Comment: Work in progress
Li, Chengpeng; Dong, Guanting; Xue, Mingfeng; Peng, Ru; Wang, Xiang; Liu, Dayiheng, 2024, DotaMath: Decomposition of Thought with Code Assistance and Self-correction for Mathematical Reasoning