AI-Powered Assistive Navigation Systems for the Visually Impaired

Manoj Bramhe
Paritosh Magare
Mayur Aglawe
Sayali Bamanpalliwar
Elasha Deoghare

Abstract

Vision impairment affects over 2.2 billion people worldwide, posing significant challenges to independent mobility and daily life. This review synthesizes recent advancements in AI-driven assistive technologies, encompassing lightweight object detection models, large vision–language models (LVLMs), embedded platforms, sensor fusion techniques, and user-centered interaction designs. The surveyed literature highlights recurring challenges, including limited contextual reasoning in lightweight models, dependency on network connectivity, privacy concerns, and the absence of long-term real-world evaluations. Building on these insights, we propose a practical server–client architecture in which a compact Raspberry Pi–based wearable integrates a camera module and ultrasonic sensor for local data capture and minimal preprocessing. Data is streamed to a local edge server or laptop that performs object detection, multisensor fusion for robust proximity alerts, and optional LVLM-based scene description with text-to-speech output. This design optimizes portability and battery life on the wearable while enabling high-fidelity, low-latency inference on the server, with an on-device fallback detector ensuring fail-safe operation during connectivity loss. Expected outcomes include improved detection accuracy (mAP in the ~75–90% range on optimized datasets), multi-fold latency reduction through offloading, reduced false positives via ultrasonic–vision fusion, and increased user trust through multimodal feedback. The discussion addresses potential limitations such as network dependency, power constraints, environmental sensing errors, LVLM computational demands, and privacy risks, alongside proposed mitigation strategies and evaluation metrics. The review concludes with a roadmap toward scalable, privacy-aware, and user-centered assistive navigation systems for the visually impaired. 
This paper reviews recent advances in AI-powered assistive navigation systems for the visually impaired, focusing on wearable and mobile platforms using computer vision, deep learning, and sensor fusion. It identifies common challenges such as network dependency, hardware limitations, and environmental sensitivity while outlining future research directions toward efficient, user-centered, and scalable assistive systems.
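The ultrasonic–vision fusion for proximity alerts described in the abstract can be sketched as a simple sensor-agreement rule: raise an alert only when the ultrasonic range and a sufficiently confident visual detection both place an obstacle inside the alert radius. This is an illustrative sketch only — the function name, thresholds, and data fields below are assumptions for exposition, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str             # object class from the detector
    confidence: float      # detector score in [0, 1]
    est_distance_m: float  # (noisy) vision-based distance estimate

def fuse_proximity(ultrasonic_m: float,
                   detections: list[Detection],
                   alert_radius_m: float = 1.5,
                   min_confidence: float = 0.5) -> bool:
    """Alert only when both sensors agree an obstacle is within the
    alert radius; requiring agreement suppresses single-sensor
    false positives (a key expected outcome noted in the abstract)."""
    if ultrasonic_m > alert_radius_m:
        return False  # nothing in acoustic range
    return any(
        d.confidence >= min_confidence and d.est_distance_m <= alert_radius_m
        for d in detections
    )
```

For example, an ultrasonic echo at 1.0 m combined with a confident "person" detection at an estimated 1.2 m triggers an alert, while the same echo with no corroborating detection does not.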

Article Details

How to Cite
Bramhe, M., Magare, P., Aglawe, M., Bamanpalliwar, S., & Deoghare, E. (2025). AI-Powered Assistive Navigation Systems for the Visually Impaired. International Journal on Advanced Computer Engineering and Communication Technology, 14(3s), 234–242. Retrieved from https://journals.mriindia.com/index.php/ijacect/article/view/1626
