Recent Advances in Strategy Design for Energy-Efficient Data Offloading in 6G-Enabled Vehicular Edge Computing Networks Using Double Deep Q-Network: A Systematic Review
Abstract
The rapid evolution of 6G networks has intensified the demand for ultra-low latency, high reliability, and energy-efficient computation in vehicular edge computing (VEC) environments. Data offloading has emerged as a critical technique for managing computational workloads by transferring tasks from resource-constrained vehicles to edge servers or cloud infrastructure. However, the highly dynamic nature of vehicular networks, characterized by high mobility, fluctuating channel conditions, and heterogeneous resources, poses significant challenges for the design of efficient offloading strategies. Recently, Double Deep Q-Network (DDQN)-based reinforcement learning approaches have gained attention for their ability to mitigate overestimation bias and improve decision stability in dynamic environments. This paper presents a systematic review of recent advances in energy-efficient data offloading strategies for 6G-enabled VEC systems, with a particular focus on DDQN-based optimization techniques. The study analyzes state-of-the-art methodologies, system architectures, and performance metrics, including latency, energy consumption, and quality of service. Furthermore, the review highlights emerging trends such as multi-agent learning, hierarchical edge architectures, and mobility-aware optimization. The findings reveal that DDQN-based approaches significantly enhance energy efficiency and decision accuracy compared to conventional methods, making them a promising solution for next-generation vehicular networks.
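The mechanism by which DDQN mitigates overestimation bias can be illustrated with a short sketch: the online network selects the next action, while the separate target network evaluates it, unlike vanilla DQN, where a single max over noisy estimates both selects and evaluates. The function names and toy Q-values below are illustrative only and are not drawn from any surveyed paper.

```python
def ddqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN bootstrap target: decouple action selection from evaluation.

    q_online_next / q_target_next are per-action Q-value lists for the next
    state, produced by the online and target networks respectively.
    """
    # Online network selects the greedy next action...
    a_star = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    # ...but the target network supplies the value used to bootstrap.
    bootstrap = q_target_next[a_star]
    return reward + gamma * bootstrap * (0.0 if done else 1.0)


def dqn_target(reward, gamma, q_target_next, done):
    """Vanilla DQN target: one network both selects and evaluates, so the
    max over noisy estimates is biased upward."""
    return reward + gamma * max(q_target_next) * (0.0 if done else 1.0)
```

For example, if the online network's estimates favor action 1 (`[1.0, 2.0]`) but the target network values it at only 0.5 (`[3.0, 0.5]`), DDQN bootstraps from 0.5, whereas DQN bootstraps from the optimistic 3.0 — the gap between the two targets is exactly the overestimation that DDQN suppresses.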

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.