Abstract: Object detection is a fundamental task in computer vision that plays a vital role in applications such as surveillance, autonomous driving, and robotics. This paper presents an approach to object detection using the YOLOv4 (You Only Look Once, version 4) algorithm, aimed at improving visual perception and achieving state-of-the-art results [1][9]. The proposed technique leverages deep neural networks and machine learning to detect and localize objects within an image with high accuracy and efficiency [3]. YOLOv4, a highly optimized object detection framework, combines techniques such as feature pyramid networks, multi-scale prediction, and enhanced training strategies to improve detection performance across object scales and categories.
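As a concrete illustration of the detection pipeline summarized above (not code from the paper), the following is a minimal sketch of YOLOv4 inference using OpenCV's DNN module; the file names yolov4.cfg, yolov4.weights, and street.jpg, the 416x416 input size, and the confidence/NMS thresholds are illustrative assumptions.

    import cv2

    # Load a pretrained YOLOv4 network in Darknet format (file names are placeholders).
    net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    # Run detection on a single image; detect() applies confidence filtering and NMS.
    image = cv2.imread("street.jpg")
    class_ids, confidences, boxes = model.detect(image, confThreshold=0.5, nmsThreshold=0.4)

    # Draw each detected box (x, y are the top-left corner; w, h the box size).
    for (x, y, w, h) in boxes:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("street_detections.png", image)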

In this study, extensive experiments were conducted on diverse datasets, including commonly used benchmarks and custom datasets from specific domains. The evaluation shows that the proposed approach outperforms existing object detection methods, with significant improvements in precision, recall, and mean average precision (mAP) [5][9]. The paper also explores potential applications of the YOLOv4 algorithm, highlighting its effectiveness in real-time object detection. The algorithm's high processing speed enables deployment in resource-constrained environments, making it suitable for a range of practical applications [4]. The findings contribute to the field of computer vision by presenting an approach to object detection that enhances visual perception and achieves state-of-the-art results. The proposed YOLOv4-based method offers improved accuracy, efficiency, and real-time performance, making it a valuable tool for applications requiring robust object detection.
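For reference, the reported metrics follow their standard definitions (stated here for context, not drawn from the paper's experiments): a detection counts as a true positive (TP) when its intersection-over-union with a ground-truth box exceeds a chosen threshold, a false positive (FP) otherwise, and an unmatched ground-truth box counts as a false negative (FN). Then

    Precision = TP / (TP + FP)
    Recall    = TP / (TP + FN)

Average precision (AP) is the area under the precision-recall curve for a single class, and mAP is the mean of AP over all classes.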

Keywords: Object detection, YOLOv4, machine learning, state-of-the-art results.


DOI: 10.17148/IARJSET.2023.10759
