Full title: Adaptive Prioritization and Task Offloading in Vehicular Edge Computing Through Deep Reinforcement Learning
This paper focuses on optimizing the offloading and scheduling of vehicular computation tasks, with an emphasis on task prioritization, to maximize the number of tasks completed within their deadlines while minimizing latency and energy consumption across all priority levels.
We propose a prioritized Deep Q-Network (DQNP) that optimizes long-term rewards through a priority-scaled reward for each priority level, guiding the deep reinforcement learning (DRL) agent toward optimal offloading actions. The model dynamically adjusts task selection to environmental conditions, for example favoring tasks with longer deadlines when the channel state is poor, ensuring balanced and efficient offloading across all priority levels.
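As a rough illustration of the priority-scaled reward idea, the Python sketch below shows one way such a reward could be shaped per priority level. The weight values, coefficients, normalization, and function name are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative priority weights and reward shaping; the exact weights,
# normalization, and coefficients are assumptions for this sketch and
# may differ from the paper's formulation.
PRIORITY_WEIGHTS = {"low": 1.0, "medium": 2.0, "high": 3.0}

def priority_scaled_reward(priority, completed_in_deadline, latency, energy,
                           alpha=0.5, beta=0.5):
    """Reward for one offloading decision, scaled by task priority.

    completed_in_deadline: 1 if the task met its deadline, else 0
    latency, energy:       costs normalized to [0, 1]
    """
    w = PRIORITY_WEIGHTS[priority]
    # Reward deadline compliance and penalize latency and energy; higher
    # priority tasks receive proportionally larger rewards and penalties,
    # steering the DRL agent toward them without making lower-priority
    # tasks worthless to select.
    return w * (completed_in_deadline - alpha * latency - beta * energy)

# Example: a high-priority task that met its deadline with moderate cost.
print(priority_scaled_reward("high", 1, latency=0.3, energy=0.2))  # 2.25
```

Scaling the completion bonus and the cost penalties by the same priority weight is one way such a scheme could keep lower-priority tasks attractive when high-priority tasks are expensive to offload, consistent with the balanced selection behavior described in the results.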
Simulation results demonstrate that DQNP outperforms existing baseline algorithms, increasing task completion by 14%, particularly for high-priority tasks, while reducing energy consumption by 8% and maintaining latency comparable to the baselines. Additionally, the model mitigates resource starvation for lower-priority tasks, achieving task selection rates of 27%, 32%, and 42% for low-, medium-, and high-priority tasks, with completion ratios of 88%, 87%, and 86%, respectively, reflecting balanced resource allocation across priority classes.
Full Article: IEEE Transactions on Vehicular Technology, Early Access