Abstract: Communication is the act of sharing information, knowledge, or emotions. For communication to take place between two or more people, all participants must understand a common language. The modes of communication, however, differ for deaf and mute people. Deafness is the inability to hear, whereas muteness is the inability to speak. Deaf and mute people interact among themselves and with others using sign language, but most people do not consider learning sign language to be important. Communication between a hearing person and someone who is deaf or mute is therefore difficult, since not everyone is skilled in or able to understand sign language. A machine-learning model can be built to overcome this obstacle. The model is trained to recognise and translate various sign language gestures, which will help many people interact and communicate with deaf and mute persons. Existing Indian Sign Language recognition systems handle single- and double-handed signs using machine learning methods; they are not, however, real-time. In this study, we present a method for building an Indian Sign Language dataset using a camera, followed by training a TensorFlow model using transfer learning to develop a real-time sign language recognition system. Even with a small dataset, the model attains a good degree of accuracy.
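The abstract describes transfer learning on a TensorFlow model over a camera-collected sign dataset. The sketch below is a minimal, hypothetical illustration of that idea using TensorFlow/Keras, assuming a MobileNetV2 backbone pre-trained on ImageNet and a folder of webcam-captured images organised as dataset/<sign_label>/*.jpg; the paper's actual architecture, dataset layout, and training pipeline may differ.

```python
# Hypothetical sketch: transfer learning for sign-image classification with TensorFlow/Keras.
# Assumes camera-captured images are stored as dataset/<sign_label>/*.jpg (assumption).
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 10  # e.g. 10 ISL signs in the collected dataset (assumption)

# Load the labelled image folders; labels are inferred from directory names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", image_size=IMG_SIZE, batch_size=32)

# Frozen MobileNetV2 backbone pre-trained on ImageNet (the transfer-learning step).
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

# Small classification head trained on top of the frozen features.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)  # few epochs suffice for a small dataset
```

For real-time recognition, frames captured from the webcam (for example via OpenCV) would be resized to 224x224 and passed to model.predict to obtain a sign prediction per frame.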


DOI: 10.17148/IARJSET.2022.96109
