
Improve service mobility in the 5G edge cloud


What’s the best way to implement cloud-native edge services in 5G and beyond?

The edge cloud offloads computing from user devices with lower latency, higher bandwidth, and less network load than centralized cloud solutions. Services that will benefit hugely from edge cloud support in 5G and future 6G include extended reality, cloud gaming, and cooperative vehicle collision avoidance.

A challenge arises from the mobility we expect in mobile networks: how do we move a service when the end user moves? When the device physically moves, the service should follow it to the nearest edge cloud.

Device mobility has been supported by several generations of cellular networks – we can roam with our mobile devices and the network keeps calls and other services running. But now, with the edge cloud, server-side mobility is also needed.

Regardless of the actual intent or policy behind a service relocation, the service itself may or may not be stateful.

Stateful vs Stateless Services

In the digital world, stateless services can be implemented, for example, with serverless or Function-as-a-Service (FaaS) technologies. The relocation of these services can be managed, for example, using load balancers or, in the case of Kubernetes, which we are looking at here, ingress services. However, serverless services that can serve clients solely based on ephemeral input from the client are rare; even serverless services often need to store state in databases, message queues, or key-value stores. When relocating the service, its state should follow and be transferred close to the service, preferably in a vendor-neutral way. Otherwise, the service may encounter, for example, unexpected latencies when trying to access its state.
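The state-externalization pattern described above can be sketched in a few lines. This is an illustrative stand-in, not the prototype's code: the names `store` and `handle_request` are invented, and a plain dictionary plays the role of an external store such as Redis.

```python
# Sketch of a "stateless" handler whose only state lives in an
# external key-value store. A dict stands in for Redis or a similar
# store; in a real deployment the store would be a network service
# that can be replicated close to wherever the handler runs.

store = {}  # stand-in for an external key-value store


def handle_request(user_id: str, payload: int) -> int:
    """Keep no local state between calls: read the current value
    from the store, update it, and write it back. Relocating the
    handler is only seamless if the store (or a replica of it)
    moves with it."""
    current = store.get(user_id, 0)
    updated = current + payload
    store[user_id] = updated
    return updated
```

If the handler moves to a new edge cluster but the store stays behind, every call pays the extra round trip to the old location, which is exactly the latency problem noted above.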

While stateless services are the ideal in the cloud-native philosophy, some legacy stateful services might be too expensive to rewrite as stateless, and some applications might simply perform better statefully. In such cases, service relocation could be handled with container migration in Kubernetes. The advantage of such a scheme is that it works with unmodified applications. The main disadvantage is that existing connections based on the Transmission Control Protocol (TCP) can break, because the network stack is not transferred with the container. This can lead to service interruptions that may even be noticeable to the end user.

Implementing cloud-native edge services

Our approach is an attempt to balance the various constraints: the application is allowed to be stateful but must be able to push its state into a database and restore it. The rest is handled by the underlying framework. But what would such a system look like?
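What such a contract between the application and the framework might look like is sketched below. This is an assumption for illustration, not the prototype's actual API: the application exposes checkpoint and restore hooks, and the framework decides when to invoke them around a relocation.

```python
import json


class RelocatableService:
    """Illustrative sketch (hypothetical names, not the prototype's
    real interface) of the contract our approach assumes: the
    application can serialize its state for the framework before
    relocation and restore it afterwards."""

    def __init__(self):
        self.measurements = []  # application state: latency samples

    def record(self, latency_ms: float) -> None:
        self.measurements.append(latency_ms)

    def checkpoint(self) -> str:
        # Push state toward a database in a portable form; JSON keeps
        # the format vendor-neutral.
        return json.dumps({"measurements": self.measurements})

    @classmethod
    def restore(cls, blob: str) -> "RelocatableService":
        svc = cls()
        svc.measurements = json.loads(blob)["measurements"]
        return svc
```

The key design choice is that serialization stays in the application, where the meaning of the state is known, while scheduling and transport stay in the framework.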

Figure 1: The system architecture of the proposed prototype implementation

Before explaining how the system works, let’s first focus on what the system is supposed to accomplish. The illustration above shows four different clouds, each represented as a Kubernetes cluster, with the host cluster at the top managing the three edge clusters shown at the bottom of the figure. The goal is to move, or relocate, a stateful server-side application (gRPC server pod 1) from the second cluster to the third cluster without the application losing its state. To quantify how well the system avoids downtime during relocation, the server-side application is connected to a test application (gRPC client pod 1, located in the first cluster) that continuously measures latency to the server pod and sends the measurements to the server, which the server stores as its state. The challenge is that this state must remain intact when the system moves the server pod across cluster boundaries, and the relocation must be achieved with minimal downtime.

Component – Objective
User interface (UI) – Web user interface that can be used to view the topology
KubeMQ – Publish-subscribe service that facilitates signaling between system components
Service mobility controller – Orchestrates the server pod relocation process and tracks its status
Federator – An optional wrapper for KubeFed that makes it easier to (un)join a cluster to the federation
KubeFed – Federated Kubernetes; supports launching and stopping workloads in a multi-cluster environment
K8s API – Unmodified Kubernetes API available in each cluster
K8s agent – Observes the status of pods (e.g. “running” or “completed”) and reports to the service mobility controller
Application – The actual workload or application running in a container. The client and server applications communicate via gRPC
Sidecar – A system container based on the Network Service Mesh (NSM) framework, running in the same pod as the application. Connectivity between applications is managed by NSM
gRPC client/server module – The pod hosting the gRPC client or server application
Database (DB) – Redis-based in-memory key-value store used to store latency metrics on the server pod

Figure 2: Description of the purpose of the individual components of the prototype

How does the proposed solution work? When the server-side pod needs to be moved, the Service Mobility Controller (SMC) launches replicas of the server-side pod, including the database, in cluster 3. The SMC then starts synchronizing the database replica in cluster 3 with the data in cluster 2. When the database synchronization is almost complete, the SMC temporarily blocks the server-side pod until the synchronization finishes. After that, the SMC instructs the test client to re-establish communications with the new server pod. Finally, the SMC removes the unused resources from cluster 2.
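The sequence above can be summarized as a controller sketch. All names here (`relocate`, `launch_replicas`, `sync_database`, and so on) are hypothetical placeholders for the SMC's internal operations, not its real interface; each would wrap Kubernetes and database calls in the actual system.

```python
def relocate(smc, source="cluster-2", target="cluster-3"):
    """Relocation sequence as orchestrated by the SMC (sketch)."""
    smc.launch_replicas(target)        # 1. replica pod + DB in the target cluster
    smc.sync_database(source, target)  # 2. background database synchronization
    smc.block_server(source)           # 3. brief write freeze on the source pod ...
    smc.finish_sync(source, target)    #    ... while the final sync completes
    smc.redirect_client(target)        # 4. client re-establishes communications
    smc.cleanup(source)                # 5. remove unused resources at the source


class MockSMC:
    """Records the order of operations; stands in for the real controller."""

    def __init__(self):
        self.log = []

    def __getattr__(self, name):
        return lambda *args: self.log.append(name)
```

The ordering is what keeps downtime short: the bulk of the data moves while the service is still running, and the write freeze covers only the final delta.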

Performance of a service mobility prototype

We evaluated the performance of the prototype from the point of view of service interruptions as shown in the figure below. The x-axis shows how often (every x milliseconds) the gRPC client was measuring latency. The y-axis shows how many times the gRPC client had to retransmit data when relocating the gRPC server (the green bar) and the standard deviation of ten tests (the error bar).

Figure 3: Evaluation of prototype performance based on service interruptions

In the figure above, the leftmost bar shows that the gRPC client needed to retransmit 3.5 times on average when it measured latency every 30 milliseconds. Towards the right side of the figure, the number of retransmissions decreases, reaching a single retransmission at measurement intervals of 90 and 100 milliseconds. It should be noted that no data is lost, because gRPC uses reliable TCP as its transport protocol. The measurement environment was also challenging: Kubernetes was running on virtual machines in an OpenStack environment that was also running other workloads, and the link throughput was limited to 1 Gbps.
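The retransmissions counted above correspond to a client-side retry loop. The sketch below is a simplified stand-in, assuming a `send` callable that raises `ConnectionError` while the server pod is in transit; the real gRPC client's retry behavior is more elaborate.

```python
import time


def send_with_retry(send, payload, max_retries=5, backoff_s=0.0):
    """Retry loop resembling what the test client does during
    relocation: each failed attempt counts as one retransmission.
    Returns (result, retransmissions)."""
    retransmissions = 0
    while True:
        try:
            return send(payload), retransmissions
        except ConnectionError:
            retransmissions += 1
            if retransmissions > max_retries:
                raise
            time.sleep(backoff_s)  # brief pause before retrying
```

Because the transport is reliable, the cost of relocation shows up as retries and added latency rather than as lost measurements, which matches the behavior in Figure 3.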

Based on our evaluation of the prototype, we believe it is possible to support relocatable, stateful services in a multi-cloud environment. Additionally, this can be achieved in a cloud-native way and optimized for the underlying application framework to minimize downtime. We believe that the proposed solution for Kubernetes could be used to implement relocatable and non-disruptive third-party services within the 3GPP edge computing architecture, more precisely for the application context relocation procedure in specification 23.558. Additionally, an edge computing architecture with support for service mobility could be used as a building block in different scenarios, such as the aforementioned extended reality, cloud gaming, and cooperative vehicle collision avoidance use cases.

In search of next-gen cloud-native applications

The results presented in this article are preliminary and require further analysis. The prototype can still be optimized and could also be compared against container migration. Our work is a complementary solution to migration, not a competing one; one could use whichever best suits the service in question. In container migration, the application is unaware of the migration, whereas in our approach the application is aware of its relocation and state transfer procedures, even though some parts are hidden from the application.

We’ve barely scratched the surface with our prototyping efforts. This raises the question of how the next generation of cloud-native applications should be written, and what the shared responsibility is between the application, the cloud integration framework, and the underlying cloud platform.

Learn more

Visit Ericsson’s Edge Computing pages to explore the latest edge trends, opportunities and information.

Discover how edge exposure can add value beyond connectivity in the 5G ecosystem.

Learn about our previous fieldwork on multi-cloud connectivity for Kubernetes in 5G.