MULTILINGUAL DYNAMIC SIGN LANGUAGE SYMBOLIC TRANSCRIBE
Abstract
This paper presents a Dynamic Symbolic Transcribe for Sign Language, a real-time translation system designed to
bridge the communication gap between individuals who are mute or hard of hearing and those unfamiliar with sign language.
Utilizing advanced Convolutional Neural Networks (CNNs) and Machine Learning (ML), the system facilitates dynamic
translation between American Sign Language (ASL), Indian Sign Language (ISL), and English. Unlike traditional sign language
translation tools, this system incorporates precise hand motion capture, real-time gesture recognition, and interactive
communication features to ensure high accuracy and seamless interaction. The translation engine provides text translation and
supports chat, video calls, and adaptive features for improved user engagement. The research focuses on developing a
multilingual, real-time, interactive platform that addresses the challenges faced by the mute and hard-of-hearing community in
accessing effective communication tools. With continuous improvements in gesture recognition accuracy through deep learning,
the system fosters inclusive communication, providing accessible solutions across diverse regions and languages. The paper
discusses the technological framework, model architecture, and system performance while exploring potential future
enhancements, including expanded multilingual support and integration with other assistive technologies, such as
speech-to-text devices and AR equipment, to provide a comprehensive communication assistant.
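The abstract describes CNN-based real-time gesture recognition as the core of the translation pipeline. The paper does not specify the network or label set, so the following is only a minimal numpy sketch of the general idea: a preprocessed hand-region frame passed through one convolution, ReLU, max-pooling, and a softmax over a hypothetical set of gesture classes.

```python
import numpy as np

# Hypothetical gesture labels; the paper's actual ASL/ISL label set is not specified.
GESTURES = ["hello", "thanks", "yes", "no", "help"]

def relu(x):
    return np.maximum(0.0, x)

def conv2d(img, kernel):
    """Naive 'valid' 2-D cross-correlation over a grayscale frame."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, cropping edges that don't divide evenly."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_frame(frame, kernel, weights, bias):
    """One forward pass: conv -> ReLU -> pool -> dense -> softmax."""
    feat = max_pool(relu(conv2d(frame, kernel))).ravel()
    return softmax(feat @ weights + bias)

# Random weights stand in for a trained model; a real system would load
# CNN parameters learned from labeled ASL/ISL video frames.
rng = np.random.default_rng(0)
frame = rng.random((16, 16))              # stand-in for a preprocessed hand-region crop
kernel = rng.standard_normal((3, 3))
feat_dim = max_pool(relu(conv2d(frame, kernel))).size
weights = rng.standard_normal((feat_dim, len(GESTURES))) * 0.1
bias = np.zeros(len(GESTURES))

probs = classify_frame(frame, kernel, weights, bias)
print(GESTURES[int(np.argmax(probs))])
```

A deployed version of such a pipeline would run this forward pass per video frame (typically with a deeper trained network and a hand detector in front), then map the predicted gesture sequence to English text.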