Multi-agent Deep Reinforcement Learning-based Task Offloading and Resource Allocation in Fog Computing


Abstract

Fog computing presents a significant paradigm for extending the computational capabilities of resource-constrained devices executing increasingly complex applications. This approach is applicable to real-time and latency-sensitive smart user devices such as consumer electronics (CE), Internet of Things (IoT) devices, TinyML systems, unmanned aerial vehicles (UAVs), and mobile devices, all of which demand substantial resources to process their tasks according to Quality of Service (QoS) objectives. However, effectively leveraging this potential critically depends on the implementation of efficient task offloading mechanisms to proximal fog nodes, particularly under conditions of high resource contention. Recent advances in fog computing have enabled decentralized task offloading from resource-constrained smart devices to resource-rich fog nodes. However, determining optimal task placement and resource allocation across distributed, dynamic, and resource-limited fog environments remains a major challenge. The problem scales rapidly, especially when striving to meet stringent QoS requirements. Moreover, existing task offloading models in distributed fog environments rely on exhaustive search to select the right fog node, leading to prolonged decision times. Deep reinforcement learning (DRL) has emerged as a promising solution to these challenges, offering adaptive, data-driven decision-making under real-time and uncertain conditions. However, cooperation between fog nodes and dynamic partial offloading are not fully explored in existing DRL-based offloading models. Accordingly, this work presents a comprehensive and focused analysis of the full-scale application of DRL to the task offloading problem in fog computing environments involving multiple user devices and multiple fog nodes. To address this challenge, we introduce a decentralized multi-agent fully cooperative partial task offloading and resource allocation (MAFCPTORA) model for cooperative task offloading and resource allocation.
The main contributions of this dissertation include: (i) a decentralized multi-agent DRL architecture for horizontal fog-to-fog offloading, (ii) a cooperative reward function that jointly optimizes latency and energy, and (iii) an enhanced evaluation environment for parallel offloading scenarios. Simulations are conducted with four DRL baseline algorithms: IDDPG, PPO, SAC, and TD3. TD3 outperforms the other three approaches, and the TD3 algorithm is then modified to enable MAFCPTORA parallel task execution. The performance of the TD3-based MAFCPTORA is evaluated and compared against recent baseline approaches. MAFCPTORA demonstrated superior performance compared to baseline methods, achieving a significantly higher average reward (0.36 ± 0.01), substantially lower average latency (0.08), and reduced energy consumption (0.76 ± 0.14).
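As a minimal sketch of contribution (ii), a cooperative reward of this kind can be expressed as a shared team signal that penalizes both average latency and average energy across agents. The weights, normalization constants, and function shape below are illustrative assumptions, not the dissertation's actual formulation.

```python
def cooperative_reward(latencies, energies,
                       w_latency=0.5, w_energy=0.5,
                       latency_max=1.0, energy_max=1.0):
    """Shared reward given to every fog-node agent (higher is better).

    latencies, energies: per-agent task latency and energy for this step.
    w_latency, w_energy: illustrative trade-off weights (assumed).
    latency_max, energy_max: normalization bounds (assumed).
    """
    avg_latency = sum(latencies) / len(latencies)
    avg_energy = sum(energies) / len(energies)
    # Normalize each objective to [0, 1] and combine into a cost,
    # so that minimizing latency and energy maximizes the reward.
    cost = (w_latency * avg_latency / latency_max
            + w_energy * avg_energy / energy_max)
    return 1.0 - cost

# Example: two cooperating agents completing offloaded tasks
r = cooperative_reward([0.08, 0.12], [0.70, 0.80])
```

Because all agents receive the same scalar, each one is incentivized to accept or offload work in whatever way lowers the team-wide latency/energy cost, which is the essence of a fully cooperative multi-agent formulation.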
