Abstract: Sign language provides an essential means of expression for people with hearing and speech disabilities. This paper focuses on the development of a reliable method for sign language gesture detection and identification using machine learning and computer vision techniques. The system combines hand tracking, gesture recognition, and deep neural networks to recognize signs from video input. Key features include real-time processing, adaptability to different sign languages, and the ability to interpret complex gestures, both static and dynamic. Creating comprehensive datasets of annotated sign language gestures was crucial for both training and evaluation. Experimental results show high accuracy and efficiency in sign recognition across different environments and user profiles, making the system promising for real-world application. It can thus help bridge gaps in communication and accessibility in both public and private settings, using technology as an instrument for inclusion.
DOI: 10.17148/IARJSET.2024.111264
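The pipeline the abstract describes (hand tracking producing per-frame landmarks, a static/dynamic distinction, and a classifier over annotated gestures) can be illustrated with a minimal sketch. This is not the paper's implementation: the landmark vectors, motion threshold, and nearest-neighbour matcher below are hypothetical stand-ins, with the matcher taking the place of the trained deep network.

```python
# Toy sketch of a sign-recognition pipeline (illustrative only):
# a hand tracker would emit one landmark vector per video frame;
# motion energy separates static from dynamic gestures; a
# nearest-neighbour match against annotated templates stands in
# for the deep neural network described in the paper.
import math

def motion_energy(frames):
    """Mean per-frame landmark displacement across a gesture clip."""
    total = 0.0
    for prev, cur in zip(frames, frames[1:]):
        total += math.sqrt(sum((a - b) ** 2 for a, b in zip(prev, cur)))
    return total / max(len(frames) - 1, 1)

def is_dynamic(frames, threshold=0.05):
    """Static signs (e.g. a held handshape) stay below the motion threshold."""
    return motion_energy(frames) > threshold

def classify(frames, dataset):
    """Match the clip's averaged landmark vector to the nearest template."""
    n = len(frames[0])
    mean = [sum(f[i] for f in frames) / len(frames) for i in range(n)]

    def dist(template):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(mean, template)))

    return min(dataset, key=lambda item: dist(item[1]))[0]

# Tiny annotated "dataset": (label, template landmark vector).
templates = [("hello", [0.1, 0.9, 0.4]), ("thanks", [0.8, 0.2, 0.5])]

clip = [[0.10, 0.90, 0.40], [0.11, 0.88, 0.41]]  # near-static clip
print(is_dynamic(clip))          # little motion -> False (static)
print(classify(clip, templates)) # closest template -> "hello"
```

In a real system the landmark vectors would come from a hand-tracking model and the matcher would be replaced by a network trained on the annotated gesture dataset; the static/dynamic split shown here mirrors the abstract's distinction between held and moving gestures.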