Abstract: The accelerating demand for software development has catalyzed the exploration of AI-driven solutions that can automate programming tasks. This paper presents a comprehensive study on the application of transformer-based models for code generation, examining their ability to translate natural language descriptions and formal specifications into executable code. Leveraging leading benchmarks such as HumanEval, MBPP, CodeXGLUE, and CONCODE, we evaluate models across diverse tasks, including code summarization, translation, completion, clone detection, and defect prediction. Our findings reveal that transformer-based models demonstrate strong capabilities in capturing programming intent, generating context-aware code, and adapting to multiple programming languages. However, challenges persist in ensuring syntactic correctness, semantic alignment, and real-world usability of AI-generated code. We further discuss integration strategies for incorporating these models into existing software engineering workflows, emphasizing the need for human oversight, rigorous evaluation metrics, and security considerations. By synthesizing current advancements and limitations, this work contributes to the evolving field of code intelligence and highlights future directions for developing more robust, generalizable, and trustworthy AI systems for software development.

Index Terms: Code Generation, Transformer Networks, Artificial Intelligence, Software Automation, Natural Language Processing, Deep Learning.
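The benchmarks cited above (HumanEval, MBPP) are commonly scored with the unbiased pass@k estimator, which the paper's emphasis on rigorous evaluation metrics alludes to. As a minimal sketch (the function name and signature here are illustrative, not taken from the paper): given n generated samples of which c pass the unit tests, pass@k estimates the probability that at least one of k drawn samples passes.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n -- total code samples generated per problem
    c -- number of those samples that pass the unit tests
    k -- evaluation budget (samples the user would try)
    """
    # If fewer than k samples fail, every size-k draw contains a pass.
    if n - c < k:
        return 1.0
    # 1 minus the probability that all k drawn samples fail.
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 4 samples of which 2 pass, pass@1 is 0.5: a single random draw succeeds half the time.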


DOI: 10.17148/IARJSET.2025.12334
