oai:arXiv.org:2407.01180
Computer Science
2024
7/3/2024
Future 6G networks are expected to heavily utilize machine learning capabilities in a wide variety of applications, with features and benefits for both the end user and the provider.
While the options for utilizing these technologies are almost endless, from the perspective of network architecture and standardized service the deployment decisions on where to execute the AI tasks are critical, especially considering the dynamic and heterogeneous nature of processing and connectivity capabilities in 6G networks.
On the other hand, conceptual and standardization work on how to categorize ML applications in the 6G landscape is still in its infancy; some of these applications are part of network management functions, some target the inference itself, while many others emphasize model training.
It is likely that future mobile services will all either be in the AI domain or be combined with AI.
This work makes a case for the serverless computing paradigm to be used to this end.
We first provide an overview of different machine learning applications that are expected to be relevant in 6G networks.
From these, we derive a set of general requirements for software engineering solutions executing such workloads, and we propose and implement a high-level, edge-focused architecture to execute these tasks.
We then map the ML-serverless paradigm to the case study of 6G architecture and test the resulting performance experimentally for a machine learning application against a setup created in a more traditional, cloud-based manner.
Our results show that, while there is a trade-off between the predictability of response times and accuracy, the achieved median accuracy in the 6G setup remains the same, while the median response time decreases by around 25% compared to the cloud setup.
Comment: Submitted to https://ai-for-6g.com/
Michalke, Marc; Muonagor, Chukwuemeka; Jukan, Admela, 2024, Deploying AI-Based Applications with Serverless Computing in 6G Networks: An Experimental Study