Abstract: The translation of sign language into English has emerged as a critical research domain in Artificial Intelligence (AI) and Natural Language Processing (NLP), aiming to reduce communication barriers for the deaf and hard-of-hearing community. Sign languages are unique in their grammar, visual modality, and cultural variation, which makes translation into spoken or written languages highly complex. Recent developments in computer vision, deep learning, and large language models (LLMs) have significantly advanced recognition and translation capabilities. This review synthesizes 23 research studies published between 2021 and 2025, focusing on gloss-based methods, gloss-free architectures, and transformer-driven models such as SignBERT, SignBERT+, and Sign2GPT. Dedicated emphasis is given to Indian Sign Language (ISL), a low-resource language with limited datasets and benchmark systems. The paper also examines real-time mobile solutions, dataset availability, and ethical considerations in system design. Comparative analysis reveals that gloss-free transformer- and LLM-based models outperform traditional methods but face challenges of computational cost and dataset scarcity. Finally, the review outlines key future directions, including large-scale ISL dataset creation, multilingual support, edge AI deployment, and inclusive co-design with deaf communities.
Keywords: Natural Language Processing (NLP), Deep Learning (DL), Machine Learning (ML), Computer Vision, Gloss Translation, SignBERT, Sign2GPT, Transformer Models
DOI: 10.17148/IARJSET.2025.121010
[1] Dr. Bharathi M P, Pavithra K V, and Jamuna B H, "From Signs to Sentences: A Comprehensive Review on Sign Language Interpretation Through Natural Language Processing," International Advanced Research Journal in Science, Engineering and Technology (IARJSET), 2025. DOI: 10.17148/IARJSET.2025.121010