Audiscan for Sign Language: Enhancing Communication Through Auditory-Visual Recognition Systems
Abstract
The development of Audiscan for Sign Language represents a significant advancement in communication between deaf and hearing individuals. This study explores the integration of auditory-visual recognition systems to bridge this communication gap through real-time sign language interpretation. By combining machine learning algorithms, computer vision, and speech recognition technologies, Audiscan detects and interprets sign language gestures and translates them into text or speech. The system is designed to improve accessibility for individuals with hearing impairments, offering an intuitive interface that supports seamless communication in diverse settings such as education, healthcare, and public services. This paper presents the design, functionality, and performance evaluation of Audiscan, highlighting its potential to transform communication in inclusive environments. Through this innovation, we aim to empower the deaf community, promote inclusivity, and foster greater understanding between sign language users and non-signers.
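The abstract describes a pipeline in which sign language gestures are detected by a vision system and translated into text or speech. The sketch below is purely illustrative and assumes details the abstract does not specify: it models each gesture as a small feature vector (e.g., flattened hand-landmark coordinates) and matches it against hypothetical sign templates with a nearest-centroid rule. The template data, function names, and matching strategy are all assumptions, not Audiscan's actual models.

```python
import math

# Hypothetical sign templates: toy 4-dimensional "landmark" feature
# vectors for two example signs. A real system would learn these
# representations with machine learning on camera input.
SIGN_TEMPLATES = {
    "HELLO": [0.9, 0.1, 0.8, 0.2],
    "THANKS": [0.1, 0.9, 0.2, 0.8],
}

def classify_gesture(features):
    """Return the sign label whose template is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(SIGN_TEMPLATES, key=lambda label: dist(features, SIGN_TEMPLATES[label]))

def gestures_to_text(feature_stream):
    """Translate a sequence of gesture feature vectors into output text."""
    return " ".join(classify_gesture(f) for f in feature_stream)
```

For example, `gestures_to_text([[0.85, 0.15, 0.75, 0.25], [0.15, 0.85, 0.25, 0.75]])` returns `"HELLO THANKS"`, since each noisy vector falls nearest its corresponding template. The text output could then be passed to a speech synthesizer for the text-to-speech stage the abstract mentions.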