Abstract: Communication allows people to interact and exchange ideas and sentiments. The deaf community faces several hardships in engaging with the wider community, since its members converse with one another in sign language. Technology that translates sign language into a comprehensible form enables interaction with hearing people. This project aims to develop a real-time system that translates Indian Sign Language (ISL) into text. Most prior work in this area relies on handcrafted features. In this article, we present a deep learning method that classifies signs with a convolutional neural network. In the first phase, we build a classifier model for the numerical signs using the Python-based Keras implementation of a convolutional neural network. In the second phase, a real-time system uses skin segmentation to locate the Region of Interest in each frame and displays a bounding box around it. The segmented region is then fed into the classifier model to predict the sign. The system achieves an accuracy of 99.56% for the same subject and 97.26% in low light, and with varied backgrounds and image-capture angles the classifier's performance was found to improve. Our methodology targets a standard RGB camera.

Keywords: Deep Learning, Convolutional Neural Networks, real-time system, Computer Vision, Training, User Interaction, Indian Sign Language
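The following is a minimal sketch of the two-phase pipeline described in the abstract: skin segmentation locates the Region of Interest, a bounding box is drawn, and the segmented region is passed to a Keras CNN classifier. The HSV skin-colour thresholds, the 64x64 grayscale input size, the numeral-class labels, and the model file "isl_digits.h5" are all illustrative assumptions, not details from the paper.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical pre-trained numeral-sign classifier (Phase 1 output).
model = load_model("isl_digits.h5")

# Assumed HSV bounds for skin segmentation; these need per-setup tuning.
LOWER_SKIN = np.array([0, 40, 60], dtype=np.uint8)
UPPER_SKIN = np.array([25, 255, 255], dtype=np.uint8)

cap = cv2.VideoCapture(0)  # standard RGB webcam, as in the paper's setup
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Phase 2: skin segmentation to locate the Region of Interest.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)  # largest skin blob
        x, y, w, h = cv2.boundingRect(hand)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # Feed the segmented region to the classifier to predict the sign.
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        roi = cv2.resize(roi, (64, 64)).astype("float32") / 255.0
        probs = model.predict(roi.reshape(1, 64, 64, 1), verbose=0)
        cv2.putText(frame, str(int(np.argmax(probs))), (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)

    cv2.imshow("ISL numeral recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The morphological opening step is one common way to suppress small skin-coloured noise before contour extraction; the paper does not specify how its segmentation mask is cleaned, so this choice is an assumption.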

DOI: 10.17148/IARJSET.2023.10448
