Service Management in Dynamic Edge Environments

Abstract

Beyond-5G and 6G networks are foreseen to be highly dynamic: they are expected to support temporary activities and to leverage continuously changing infrastructures spanning the extreme edge to the cloud. In addition, the growing demand for applications and data in these networks necessitates geographically distributed Multi-access Edge Computing (MEC) to provide reliable services with low latency and low energy consumption. Service management plays a crucial role in meeting this need. Research indicates widespread adoption of Reinforcement Learning (RL) in this field due to its ability to model unforeseen scenarios. However, RL struggles to handle the extensive changes in requirements, constraints, and optimization objectives likely to occur in widely distributed networks. Therefore, the main objective of this research is to design service management approaches that handle changing services and infrastructures in dynamic distributed MEC systems, utilizing advanced RL methods such as Distributed Deep Reinforcement Learning (DDRL) and Meta Reinforcement Learning (MRL).

Publication
Proc. of the Euro-Par Conference