Distributed Learning for Application Placement at the Edge Minimizing Active Nodes

Abstract

The main goal of application placement in Multi-Access Edge Computing (MEC) is to map application requirements onto the infrastructure so that the desired Service Level Agreement (SLA) is met. In the highly distributed infrastructures of beyond-5G and 6G networks, satisfying SLA requirements while minimizing energy use is crucial. Focusing solely on SLA compliance can lead to resource fragmentation and reduced energy efficiency, as nodes utilize only a small portion of their resources. Furthermore, when multiple orchestrators govern MEC nodes, achieving optimal efficiency becomes an even more complex challenge. This paper addresses the application placement problem by employing distributed deep reinforcement learning to minimize the overall cost of active MEC nodes in a distributed scenario involving multiple MEC systems. Our technique reduces the number of active nodes while maintaining an average accuracy of up to 98%, meets SLA requirements, and scales to deployments hosting several MEC nodes.
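
To make the optimization objective concrete, the sketch below illustrates the kind of cost being minimized: each active MEC node contributes to the cost, and SLA violations (applications that cannot be placed within node capacity) are penalized. This is a minimal, hypothetical illustration with a greedy first-fit baseline; the class and function names (Node, App, place, reward) and the single-resource CPU model are assumptions for demonstration and do not reproduce the paper's distributed deep reinforcement learning method.

```python
from dataclasses import dataclass


@dataclass
class Node:
    cpu: float            # total CPU capacity of the MEC node
    used: float = 0.0     # CPU currently allocated to applications


@dataclass
class App:
    cpu_demand: float     # CPU required to satisfy the application's SLA


def place(apps, nodes):
    """Greedy first-fit baseline: consolidate apps onto as few nodes as possible."""
    placement = {}
    for i, app in enumerate(apps):
        for j, node in enumerate(nodes):
            if node.used + app.cpu_demand <= node.cpu:
                node.used += app.cpu_demand
                placement[i] = j   # app i hosted on node j
                break
    return placement


def reward(placement, apps, sla_penalty=10.0):
    """Negative cost: one unit per active node plus a penalty per unplaced (SLA-violating) app."""
    active_nodes = len(set(placement.values()))
    violations = len(apps) - len(placement)
    return -(active_nodes + sla_penalty * violations)


if __name__ == "__main__":
    apps = [App(1.0), App(2.0), App(1.5)]
    nodes = [Node(4.0), Node(4.0), Node(4.0)]
    p = place(apps, nodes)
    print(p, reward(p, apps))   # {0: 0, 1: 0, 2: 1} -2.0: two active nodes, no violations
```

In the paper's setting, such a cost signal would be optimized by learned agents coordinating across multiple MEC systems rather than by the greedy heuristic shown here.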

Publication
Proc. of IEEE 6GNet