Abstract: Large language models such as Codex have demonstrated the ability to generate code for a wide variety of tasks. However, current models still perform poorly, especially on complex tasks. One reason is that the model may misunderstand the context of a program, producing code that is buggy or fails outright. In this article, we investigate whether automated program repair (APR) techniques can fix the incorrect solutions produced by language models on LeetCode contest problems. The aim is to determine whether APR can increase the reliability of code generated by large language models. Our study shows that: (1) automatically generated code shares common mistakes with human-written solutions, which indicates that existing APR techniques have the potential to fix such bugs; (2) when given evidence of the bug location and a description of the bug, an edit-mode variant of Codex can repair code at a level similar to or better than the existing Java repair tools TBar and Recoder. By analyzing the experimental results, we make several recommendations: (1) APR tools should overcome the limitations of their patch search spaces (e.g., by considering more fix locations); (2) since large language models can acquire stronger repair ability by training on more data, future APR tools could shift their focus toward supplying such models with additional information about the structure and content of the desired fix; (3) combining large language models with APR techniques is a promising direction for further study.

Keywords: Bug fixing techniques, automated and semi-automated repair, solutions, testing.
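The repair setting the abstract describes can be sketched as a test-driven loop: run a candidate solution against its test cases, and when a test fails, combine the buggy source with the failing-test evidence into a repair prompt for the model. The sketch below is a minimal illustration under assumptions: `query_model` stands in for a hypothetical call to a Codex-style model (not a real API), and `sum_to_n` is an invented buggy example, not a program from the study.

```python
def run_tests(func, tests):
    """Run (args, expected) pairs; return the first failing case, or None."""
    for args, expected in tests:
        try:
            if func(*args) != expected:
                return args, expected
        except Exception:
            return args, expected
    return None

def build_repair_prompt(source, failure):
    """Combine the buggy source with failing-test evidence, mirroring the
    idea of giving the model bug location and bug information."""
    args, expected = failure
    return (
        "### Buggy program\n" + source +
        f"\n### Failing test: input={args!r}, expected={expected!r}\n"
        "### Fixed program\n"
    )

# Hypothetical buggy LeetCode-style solution: off-by-one when summing 1..n.
buggy_source = "def sum_to_n(n):\n    return sum(range(n))\n"
namespace = {}
exec(buggy_source, namespace)

failure = run_tests(namespace["sum_to_n"], [((3,), 6), ((1,), 1)])
if failure is not None:
    prompt = build_repair_prompt(buggy_source, failure)
    # In a real pipeline the prompt would be sent to the model:
    # fixed_source = query_model(prompt)   # hypothetical call
    print(prompt)
```

The key design point is that the failing test is evidence of *where* and *how* the program is wrong; the abstract's finding (2) is that supplying such evidence makes model-based repair competitive with dedicated APR tools.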


PDF | DOI: 10.17148/IARJSET.2024.11414
