Abstract: Speech and language are the primary means of human communication: the ability to hear lets us share thoughts with one another, and speech recognition now even allows us to issue commands by voice. But what happens when a person is entirely deaf, or unable to speak? Since sign language is the main form of communication for deaf and mute people, research into its automatic interpretation is essential to preserving their independence. Machine learning and image processing have enabled numerous methodologies and algorithms in this area, and every sign language recognition system is trained to recognize the signs and convert them into the required patterns. In this article, sign language is captured as a series of preprocessed images, which are classified in Python and then translated into text and audio.

Keywords: Sign language translation, Convolutional Neural Network (CNN), Image processing, Hand gesture.
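The abstract describes a pipeline of capturing hand-gesture images, classifying them with a CNN in Python, and converting the predicted sign to text and audio. The sketch below illustrates one way such a pipeline might look; the network architecture, the 26-letter alphabet classes, and the use of Keras and pyttsx3 are illustrative assumptions, not the paper's exact implementation.

    # Minimal sketch of a sign-to-speech pipeline (assumptions: Keras CNN,
    # grayscale 64x64 hand images, 26 alphabet classes, pyttsx3 for speech).
    import numpy as np
    import pyttsx3
    from tensorflow.keras import layers, models

    LABELS = [chr(ord("A") + i) for i in range(26)]  # assumed sign alphabet

    def build_model(input_shape=(64, 64, 1), num_classes=26):
        """Small CNN for hand-gesture images (illustrative architecture)."""
        return models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ])

    def sign_to_speech(model, image):
        """Classify one preprocessed image, then speak the predicted sign."""
        probs = model.predict(image[np.newaxis, ..., np.newaxis], verbose=0)
        text = LABELS[int(np.argmax(probs))]
        engine = pyttsx3.init()
        engine.say(text)      # text output is also available as `text`
        engine.runAndWait()
        return text

In practice the model would be trained on a labeled gesture dataset before use, and each camera frame would be cropped and normalized to the expected input shape before being passed to sign_to_speech.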


DOI: 10.17148/IARJSET.2023.10435