Implementation of an Autonomous Robot for Efficient Multitasking Operations
Abstract
This study presents the design and implementation of an
autonomous mobile robot that integrates advanced AI-driven
vision, multi-sensor navigation, and mechanical object
manipulation into a unified, modular platform. Building on
prior research in autonomous systems, the proposed solution
leverages off-the-shelf components to achieve robust real-time
performance in dynamic environments. Key features include a
HuskyLens AI vision module and an ESP32-S3 vision module
for real-time object recognition, face detection, and wireless
communication; a four-channel line follower module combined
with an ultrasonic sensor for precise navigation and obstacle
avoidance; and a servo-actuated mechanical gripper for
accurate pick-and-place operations. The system is managed by
an Arduino Uno R3 enhanced with an expansion board, which
orchestrates data acquisition and control across the various
modules while enabling omnidirectional movement through
Mecanum wheels. Extensive simulations and field tests
demonstrate that the platform can maintain positioning
accuracy within ±5 mm and reliably execute complex
multitasking operations. This work not only validates the
integration of multiple sensor modalities and AI-based
decision-making into a cohesive autonomous system but also
highlights its potential applicability in industrial automation,
logistics, and smart service environments.
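To make the omnidirectional drive concrete, the following minimal Arduino C++ sketch illustrates the standard inverse-kinematic mixing used for an X-configuration Mecanum base, where desired body velocities are converted into four wheel speeds. This is an illustrative sketch under assumptions, not the paper's firmware: the geometry constants (WHEEL_RADIUS, HALF_LENGTH, HALF_WIDTH) and the setMotor() routine are hypothetical placeholders for the robot's actual dimensions and the expansion board's motor-driver interface.

// Minimal illustrative sketch (not the paper's firmware): inverse
// kinematics that mixes desired body velocities into the four wheel
// speeds of an X-configuration Mecanum base.

const float WHEEL_RADIUS = 0.03f;  // wheel radius in m (assumed value)
const float HALF_LENGTH  = 0.08f;  // half wheelbase in m (assumed value)
const float HALF_WIDTH   = 0.07f;  // half track width in m (assumed value)

// Placeholder for whatever motor-driver call the expansion board exposes.
void setMotor(int channel, float wheelAngularVelocity) {
  // Map the signed angular velocity to PWM duty and direction pins here.
}

// vx: forward velocity (m/s), vy: leftward velocity (m/s),
// wz: counter-clockwise yaw rate (rad/s). Signs depend on roller
// orientation and motor wiring and may need flipping on real hardware.
void driveMecanum(float vx, float vy, float wz) {
  float k = HALF_LENGTH + HALF_WIDTH;
  setMotor(0, (vx - vy - k * wz) / WHEEL_RADIUS);  // front-left wheel
  setMotor(1, (vx + vy + k * wz) / WHEEL_RADIUS);  // front-right wheel
  setMotor(2, (vx + vy - k * wz) / WHEEL_RADIUS);  // rear-left wheel
  setMotor(3, (vx - vy + k * wz) / WHEEL_RADIUS);  // rear-right wheel
}

void setup() {}

void loop() {
  driveMecanum(0.2f, 0.0f, 0.0f);  // example: creep straight ahead
}

Setting vy nonzero with vx = 0 produces a pure sideways strafe, and wz alone spins the robot in place; this mixing is what gives the Mecanum platform its omnidirectional mobility.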