Abstract: With the growing need for intelligent human-computer interaction systems, recognizing human emotions and interpreting sign language have become essential for bridging communication gaps. This research presents a unified deep learning-based system that integrates facial emotion recognition and sign language translation. The proposed model uses pre-trained VGG16 and VGG19 architectures to extract high-level spatial features from facial images and sign language gestures. For facial emotion recognition, the model is trained on the FER2013 dataset, and real-time emotion prediction is performed on live webcam input. In parallel, sign language gestures are interpreted using the American Sign Language (ASL) dataset, where the temporal dynamics of the gestures are captured and processed. The extracted features are used to train classifiers that enhance recognition accuracy. Experimental evaluations demonstrate the effectiveness of the combined approach, showing promising performance in detecting emotions and translating sign gestures. The integrated system offers a valuable tool for enhancing communication, particularly for individuals with speech impairments and in emotionally aware interactive systems.
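As a minimal sketch of the approach described above (assumed, not taken from the paper's released code), the snippet below shows how a pre-trained VGG16 backbone can serve as a fixed feature extractor with a small classifier head trained for the seven FER2013 emotion classes; the input size, head layer sizes, and optimizer are illustrative assumptions, and an analogous VGG19 pipeline could be used for the ASL gesture branch.

```python
# Hedged sketch: VGG16 as a frozen feature extractor plus a trainable classifier head
# for the 7 FER2013 emotion classes. Hyperparameters here are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_EMOTIONS = 7      # FER2013 classes: angry, disgust, fear, happy, sad, surprise, neutral
IMG_SIZE = (48, 48)   # FER2013 native resolution; images replicated to 3 channels for VGG16

# Load VGG16 without its top classifier and freeze it, so only the new head is trained.
backbone = VGG16(weights="imagenet", include_top=False,
                 input_shape=(IMG_SIZE[0], IMG_SIZE[1], 3))
backbone.trainable = False

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),     # pool spatial features into a single vector
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_EMOTIONS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training would then call model.fit(...) on preprocessed FER2013 batches,
# and real-time prediction would feed webcam frames (resized and normalized)
# through model.predict(...).
```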

Keywords: Facial Emotion Recognition, Sign Language Translation, VGG16, VGG19, Deep Learning, Human-Computer Interaction, Real-Time Gesture Recognition


DOI: 10.17148/IARJSET.2025.12467
