Workgroup:
RTGWG
Internet-Draft:
draft-huang-rtgwg-us-standalone-sid-00
Published:
January 2024
Intended Status:
Standards Track
Expires:
1 August 2024
Authors:
D. Huang
ZTE Corporation
G. Chen
China Telecom
J. Liang
China Telecom
Y. Zhang
China Unicom
F. Yang
China Mobile
D. Yang
Beijing Jiaotong University
D. Yuan
ZTE Corporation
H. Fu
ZTE Corporation
C. Huang
ZTE Corporation
Y. Guo
ZTE Corporation

Use Cases: Standalone Service ID in Routing Network

Abstract

More and more emerging applications demand networking connections that can be established anywhere and anytime, alongside the availability of highly distributed any-cloud services. Such demand motivates the efficient interconnection of heterogeneous entities, e.g., network domains and clouds owned by different providers, with the goal of reducing costs, e.g., overheads and end-to-end latency, while ensuring that the overall performance satisfies the requirements of the applications. Since different network domains and cloud providers may adopt different technologies, the key to interconnection and efficient coordination is a unified interface that can be understood by all heterogeneous parties, from which each party can derive consistent requirements for the same service and treat the service traffic appropriately using its own proprietary policies and technologies.

This document provides use cases and problem statements for the two main categories of Internet traffic: the traditional north-south traffic, which flows between clients and entities (such as servers or DCs), and east-west traffic, which flows between entities (such as inter-server or inter-service traffic). The requirements for a standalone Service ID are then derived.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 1 August 2024.


1. Introduction

The Internet application paradigm has been undergoing fundamental changes that will have profound impacts on the two most basic aspects of the Internet: traffic engineering and traffic steering.

First, owing to the fast and deep digital transformation across almost all sectors of industry and society, applications are rapidly shifting from simply retrieving well-processed content from central servers and data centers (i.e., the C/S model) to delivering multiple types of computing tasks to and from cloud infrastructures, e.g., 3D virtual reality, picture recognition, autonomous driving, and AI-model training. The dominant traffic on the Internet has therefore been transformed from single best-effort flows into multi-task flows with diversified and stringent performance requirements: time-critical tasks demand deterministic end-to-end delay, large volumes of user-generated raw data require steady, high transfer rates, and compute-intensive tasks call for scheduled CPU cycles or must be directed to specific processor types (e.g., CPU or GPU). This inevitably challenges the methodologies and processes of current Internet traffic engineering, in which traffic classification is insufficient to identify and guarantee the SLAs of multi-task flows and network scheduling is not adapted to the transient life cycles of computing tasks.

Second, the whole cloud computing ecosystem is rapidly evolving toward the cloud-native paradigm. Compared to the traditional monolithic design pattern, cloud-native applications are designed and developed as distributed, connected micro-services, each performing an independent piece of functionality. Because each micro-service encloses all of its required computing resources and logic in a lightweight container, it is natural and efficient to deploy and manage replicas of services in a highly distributed and scalable way via the network. Given stringent performance requirements, critical computing tasks should be offloaded as close as possible to where user data is generated, for the fastest responsiveness and real-time processing. Cloud resources residing in the central cloud, in edge clouds, and in communication nodes, such as base stations, vehicles, or even handheld devices, from different cloud service providers (CSPs), can thus be treated as a ubiquitous cloud infrastructure hosting service instances across the Internet. This places great demands on the protocols and mechanisms of Internet traffic engineering, which must take computing tasks and their life cycles into account. It also challenges Internet traffic steering: the current addressing mechanism is inherently designed in terms of hosts and specific resources, whereas such a service, and the connections to it, are independent of any specific host or resource. Because the same service can be deployed on multiple hosts at multiple sites over different cloud resources, the service is effectively rendered as a standalone entity that is accessed and connected by its counterpart.

Considering the heterogeneous resources and technologies of different entities, i.e., network domains and cloud providers, use cases and problem statements are presented for the two main categories of Internet traffic. One is the traditional north-south traffic, which flows between clients and entities (such as servers or DCs). The other is east-west traffic, which flows between entities (such as inter-server or inter-service traffic). In most edge computing scenarios, resources are scarce and heterogeneous due to cost and energy limitations, so rather than deploying an application with its full functionality, service instances, with their smaller resource consumption and shorter lifetimes, fit these scenarios well. Moreover, with frequent client mobility and fast service migration, the east-west traffic between edge and central clouds, and among edge sites (such as inter-service communication), will increase significantly and greatly impact the SLA experienced by clients from the point of view of the application as a whole.

Many technologies have been proposed in the community for appropriately handling north-south traffic; examples include CATS and APN, which define frameworks and associated service identification mechanisms for their respective scenarios. East-west traffic spanning multiple network or cloud domains, by contrast, is generally sent through tunnels using existing approaches such as network service mesh; as explained in the following sections, such approaches cause large end-to-end delay and high complexity, especially when multi-hop inter-service communications are involved.

For both north-south and east-west traffic, there are either complicated barriers to service connections among different network and cloud entities, or no interface at all between network-sensitive services and the underlying network. It is therefore critical to employ a unified interface and entity that can be accessed by multiple network or cloud platforms. Leveraging this unified entity, tentatively called the Service ID, addressing and networking among heterogeneous network domains and cloud providers, with consistent semantics and service SLA guarantees, can be accomplished by establishing a mapping between the unified Service ID and the specific technologies used by each network domain or cloud provider.

2. Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

3. Terminology

4. Use Case Scenarios

4.1. North-South Traffic

In the following subsections, three typical use case scenarios of conventional north-south traffic are described: network resource provisioning, fine-grained traffic steering among multiple service instances, and observability of end-to-end services. For each use case, the problems and a gap analysis are presented.

4.1.1. Network Resources Provisioning and Differentiated Network Services

The Internet has long been used as a pipe to deliver north-south traffic from a client to a server. However, with the stringent requirements raised by newly emerging applications such as XR and AI-based services, there is a growing need to introduce application-related information into the network and to provide differentiated, fine-grained treatment for the variety of distinctive applications and services that require network capabilities beyond best effort. To this end, Application-aware Networking (APN) has been proposed to address this problem [I-D.li-rtgwg-apn-app-side-framework]. Nevertheless, fine-grained awareness may need to extend to sub-services of an application, which the conventional 5-tuple can hardly identify.

4.1.2. Traffic Steering among Multiple Service Instances

The current Internet adopts a fixed IP-address interconnection model: when a service is invoked, the IP address corresponding to the service name generally has to be queried through DNS first. In a ubiquitous distributed service scenario, resolving addresses through DNS is relatively inefficient, often requiring multiple redirects to find the most appropriate service instance, and therefore cannot meet the fast-response needs of highly interactive applications such as AR/VR and autonomous driving. In addition, when the client moves or the server migrates, the traffic flow must initiate a new handshake, which fails the performance requirements of access agility and always-on connectivity.

It is also well acknowledged that, for some distributed services, the performance experienced by clients depends not only on network metrics such as bandwidth and latency but also on cloud metrics such as processing and storage capabilities and capacity. The Computing-Aware Traffic Steering (CATS) working group is addressing this. Among the cases described in [I-D.ietf-cats-usecases-requirements], Cloud VR/AR, for instance, brings challenging requirements to both network and computing. By introducing computing metrics into the conventional routing scheme, CATS enables distributed edge nodes to serve a service request with a carefully selected instance that has sufficient computing resources and an assured network path meeting the end-to-end delay requirements. Other use case scenarios in [I-D.ietf-cats-usecases-requirements] involve computing-aware intelligent transportation, digital twin, SD-WAN, etc. CATS will also provide agile service access and enhance the utilization of network and computing resources.

Compared to IP-based addressing and routing schemes, service-based interconnection and routing align more closely with the essential demands and expectations of clients and services. It would thus be instructive to identify and indicate services in the interconnected network in order to steer and guarantee north-south traffic.

4.1.3. Observability of End-to-End Services

To locate performance bottlenecks, prevent failures, and improve resource utilization efficiency, many technologies have been developed for measuring traffic and monitoring network status. Examples include bidirectional forwarding detection (BFD), in-band network telemetry (INT), and packet loss and delay measurement for MPLS networks. However, the primary goal of such technologies is to facilitate network operators in conducting operations, administration, and maintenance (OAM); they generally collect statistics associated with a link or a path in a network domain but fail to measure an end-to-end connection from a client to a server. In fact, due to the lack of service semantics, extensive effort is required to correlate the observed metrics with specific applications and services. Schemes such as ping and keepalive do provide client-to-server end-to-end detection, but they generally work at the application level and provide no information or guidance for lower layers, e.g., L3 and L4. Consequently, achieving whole-stack observability for the north-south traffic of end-to-end services remains an open problem.

4.2. East-West Traffic

In the following, three typical use cases of east-west traffic are described: connectivity, scheduling, and observability, taking into account the evolution from a single cloud to multi-cloud and even multi-CSP architectures. For each use case, the complexity imposed by existing methods and the resulting hindrances to service performance are demonstrated.

4.2.1. Connectivity

In the service mesh architecture, inter-service communication is generally conducted using sidecar proxies that are collocated with service pods. A sidecar proxy intercepts all incoming traffic of the collocated service pod and performs appropriate processing, such as service discovery, routing, or rate limiting. After receiving packets processed by the service pod, the sidecar proxy sends them to the next-hop sidecar proxy. The selection of the next-hop node may be subject to various criteria such as load balancing; it is therefore essential to consider application semantics in the selection, which is why adjacent sidecar proxies communicate using layer-7 protocols such as gRPC or REST APIs.

When the service mesh architecture extends from a single cloud domain to multiple cloud domains, gateways are needed for inter-service communication. These gateways are generally connected, e.g., by tunnels, to ensure that they are reachable from each other. As shown below, micro-service A is located in cloud domain A and micro-service B in cloud domain B; the two domains may be owned by the same CSP or by different CSPs. Cloud gateways A and B are deployed to provide access to the micro-services in their respective domains. Consequently, the communication between micro-services A and B consists of three TCP connection segments: from sidecar proxy A to cloud gateway A, between cloud gateways A and B, and from cloud gateway B to sidecar proxy B.


        |Incoming
        |traffic
+-------v--------+             +-----------------+
| cloud GW A     |------------>|  cloud GW B     |
+----|------^----+ Tunnels etc +--------|--------+
     |      |                           |
+----v------|----+             +--------v--------+
| sidecar proxy A|             | sidecar proxy B |
+----|------^----+             +--------|--------+
     |      |                           |
+----v------|----+             +--------v--------+
| microservice A |             |  microservice B |
+----------------+             +-----------------+
  cloud domain A                  cloud domain B

Figure 1: Inter-service communication across multiple cloud domains

It can be seen that each hop of inter-service communication incurs processing delay at the gateways of both cloud domains for operations such as encapsulation and decapsulation, service discovery, routing, and load balancing. When the two domains adopt different technologies, e.g., different cloud topologies or routing protocols, the gateways additionally serve as interfaces that establish the appropriate mapping between those technologies. If an application requires multi-hop inter-service communications, e.g., switching to another CSP for failure recovery, extending a micro-service chain for performance improvement, or supporting service continuity for a moving client, the complexity of managing the application becomes tremendous, and its end-to-end delay can easily exceed the maximum tolerable latency.
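To make the configuration burden concrete, the following is a minimal sketch of how one widely used service mesh, Istio, exposes in-mesh services at a cloud gateway for cross-domain traffic. The resource name and selector label are illustrative assumptions; the port number and TLS mode follow Istio's documented multi-cluster convention.

   # Illustrative sketch only: exposing all in-mesh services
   # ("*.local") at cloud gateway A so that cloud domain B can
   # reach them over the inter-domain tunnel.
   apiVersion: networking.istio.io/v1beta1
   kind: Gateway
   metadata:
     name: cross-domain-gateway      # hypothetical name
     namespace: istio-system
   spec:
     selector:
       istio: eastwestgateway        # pods acting as cloud GW A
     servers:
       - port:
           number: 15443             # Istio's conventional multi-cluster TLS port
           name: tls
           protocol: TLS
         tls:
           mode: AUTO_PASSTHROUGH    # route by SNI without terminating TLS
         hosts:
           - "*.local"

An equivalent resource has to be maintained in every participating domain, and any change of provider or topology requires reconfiguring these gateways, which adds to the complexity described above.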

4.2.2. Scheduling

4.2.2.1. Traffic Steering for Inter-cloud Optimal Selection

With current APIs, services from remote clusters have to be registered locally, e.g., via ServiceEntries using a name.namespace.global format, with DNS resolution provided by CoreDNS. Once a remote service is registered, the sidecar of an application pod redirects traffic to the cluster IP and steers it to the remote gateway. Remote gateways in different clusters require basic connectivity via LAN or WAN. For local entities to determine a preferred remote endpoint, strategies are implemented as instructed by the definitions and configurations in VirtualServices and DestinationRules.
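As a minimal sketch of such a registration, assuming Istio's legacy multi-cluster gateway model, the configuration may look as follows; the service name, virtual address, and gateway address are hypothetical.

   # Illustrative sketch only: registering remote service B locally
   # so that svc-b.prod.global resolves via CoreDNS and traffic is
   # steered to the gateway of cloud 2.
   apiVersion: networking.istio.io/v1beta1
   kind: ServiceEntry
   metadata:
     name: svc-b-cloud2
   spec:
     hosts:
       - svc-b.prod.global           # name.namespace.global format
     location: MESH_INTERNAL
     resolution: DNS
     addresses:
       - 240.0.0.2                   # virtual IP resolved by CoreDNS
     ports:
       - number: 80
         name: http
         protocol: HTTP
     endpoints:
       - address: 203.0.113.10       # cloud 2 gateway (example address)
         ports:
           http: 15443               # cross-cluster port on the gateway

Figure 2 below illustrates the resulting inter-cluster communication.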


(Configured in ServiceEntry)
Service B:
Cloud 1 Gateway                     Cloud 2
Cloud 2 Gateway                 +-------------+
                                |             |
(Configured in VirtualService)  |             |
Match TAG I:                +-------+     --  |
Cloud 1 Gateway       +---->|Gateway|--->(  ) |
Match TAG II:        /      +-------+     --  |
Cloud 2 Gateway     /           |   Service B |
                   /            |             |
                  /             +-------------+
+-------------+  /              +-------------+
|             | /               |             |
|             |/                |             |
|  --     +-------+         +-------+     --  |
| (  )--->|Gateway|-------->|Gateway|--->(  ) |
|  --     +-------+         +-------+     --  |
| Service A   |                 |   Service B |
|             |                 |             |
+-------------+                 +-------------+
    Cloud 1                         Cloud 3

Figure 2: Inter-cluster communication using Istio network APIs

In order to be aware of and access multiple distributed instances of remote services, remote gateways need to be registered as endpoints in the local cluster. Furthermore, the steering strategy is comparatively static: it is impractical to continually rewrite the configurations in ServiceEntries, VirtualServices, and DestinationRules. It is therefore challenging to dynamically steer traffic among different clusters and edge clouds so as to select the optimal or most appropriate endpoint. Moreover, the network infrastructure connects edge gateways rather than services, so it cannot be aware of the information and requirements of the services it carries, which makes fine-grained service provisioning difficult.
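The static character of this strategy can be seen in a configuration sketch of the tag-based matching shown in Figure 2, again assuming Istio's VirtualService API; the header name and destination hosts are hypothetical.

   # Illustrative sketch only: requests tagged "I" are pinned to the
   # cloud 1 gateway and requests tagged "II" to the cloud 2 gateway.
   apiVersion: networking.istio.io/v1beta1
   kind: VirtualService
   metadata:
     name: svc-b-routing
   spec:
     hosts:
       - svc-b.prod.global
     http:
       - match:
           - headers:
               x-tag:
                 exact: "I"
         route:
           - destination:
               host: gateway.cloud1.example   # cloud 1 gateway
       - match:
           - headers:
               x-tag:
                 exact: "II"
         route:
           - destination:
               host: gateway.cloud2.example   # cloud 2 gateway

Selecting a different endpoint, e.g., because cloud 2 becomes congested, requires rewriting this resource; nothing in it reflects real-time network or computing status.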

The gaps and problems are summarized as follows:

  • Remote service endpoints must be maintained and managed statically under the service name, which makes real-time maintenance and dynamic management difficult in a distributed setting.

  • Service endpoints and routing strategies are not aware of remote cluster capabilities or network status, making dynamic optimization and scheduling difficult.

4.2.2.2. Traffic Steering of Service Chains for End-to-end Requirements Satisfaction

The non-invasive grayscale release offered by service meshes has become a mature feature. However, with the widespread adoption of cloud-native applications, an application often no longer exists as an individual service but is decomposed into a series of cloud-native micro-services deployed across various clusters, forming invocation chains in which the micro-services jointly provide the service to customers.

Similar to the service function chain (SFC) defined at L3, the micro-service chain (MSC) can be correspondingly defined at L7. When there are invocation links between micro-services, a grayscale release is often not limited to a single micro-service but requires environmental isolation and traffic control over the entire micro-service chain.


               +-------------+
               |     --      |
               |    (  )     |
               |     --      |
              /|  Service A  |\
             / +-------------+ \
            /          Cloud 1  \
|          v          ||         v           |
|   +-------------+   ||   +-------------+   |
|   |     --      |   ||   |     --      |   |
|   |    (  )     |   ||   |    (  )     |   |
|   |     --      |   ||   |     --      |   |
|   |  Service B  |   ||   |  Service B  |   |
|   +-------------+   ||   +-------------+   |
|          |Cloud 2   ||         | Cloud 4   |
|          v          ||         v           |
|   +-------------+   ||   +-------------+   |
|   |     --      |   ||   |     --      |   |
|   |    (  )     |   ||   |    (  )     |   |
|   |     --      |   ||   |     --      |   |
|   |  Service C  |   ||   |  Service C  |   |
|   +-------------+   ||   +-------------+   |
|           Cloud 3   ||           Cloud 5   |

        LANE 1                 LANE 2

Figure 3: Traffic lanes for micro-service chains

Isolating application instances into separate operating environments according to version or other features defines traffic lanes; by configuring traffic-lane rules, traffic can then be steered into the target lane, as sketched below.
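The following is a sketch of such lane rules, assuming Istio-style APIs; the lane-tag header, subset names, and labels are hypothetical.

   # Illustrative sketch only: requests carrying the lane tag
   # "lane2" are steered to the instances of service B labeled as
   # belonging to lane 2; all other traffic stays in the main lane.
   apiVersion: networking.istio.io/v1beta1
   kind: DestinationRule
   metadata:
     name: svc-b-lanes
   spec:
     host: svc-b.prod.svc.cluster.local
     subsets:
       - name: lane1
         labels:
           lane: "1"                 # main-lane instances
       - name: lane2
         labels:
           lane: "2"                 # grayscale instances
   ---
   apiVersion: networking.istio.io/v1beta1
   kind: VirtualService
   metadata:
     name: svc-b-lane-rules
   spec:
     hosts:
       - svc-b.prod.svc.cluster.local
     http:
       - match:
           - headers:
               x-lane:
                 exact: "lane2"
         route:
           - destination:
               host: svc-b.prod.svc.cluster.local
               subset: lane2
       - route:                      # default: main lane
           - destination:
               host: svc-b.prod.svc.cluster.local
               subset: lane1

Every micro-service on the chain needs an analogous pair of resources, and the lane tag has to be propagated hop by hop by the application itself.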

However, gaps exist because traffic lanes must be pre-configured and established. Traffic is steered into its lane according to specific tag fields or features, and specific service instances must be named when a lane is configured. Although a loose mode may be introduced, in which instances in other lanes need not be named but instead fall back to the corresponding ones in the main lane, adjustments still cannot be made dynamically. Furthermore, separate service endpoints and network segments are unaware of the chain-level scheduling logic, making it difficult to ensure cross-service consistency and provisioning capabilities.

It would therefore be beneficial to examine service performance and identify the end-to-end SLA requirements of a specific service or a target micro-service chain when steering the corresponding traffic. To deal with possible traffic congestion and overload, the dynamic scheduling capabilities of the network should be utilized accordingly.

4.2.2.3. Dynamic Computing Workload Distribution and Computing Scaling

Serverless is a cloud computing model combining FaaS (Function as a Service) and BaaS (Backend as a Service): CSPs host computing, storage, databases, and other resources, dynamically manage and allocate them, and provide them to clients as services.

When a service request reaches the gateway or load balancer of an edge cloud, the request process module queries the instance management module to determine whether idle instances are available. When there is a sudden increase in service requests, the resource scheduling module applies for expansion and creates new instances.


                   +--------------------------------------------------+
                   |    +---------+      +----------+    +-----------+|
                   | +->| Request |----->|Instance  |--->|Resource   ||
                   |/   | Process |      |Management|    |Scheduling ||
                   /    +---------+      +----------+    +-----------+|
+---------------+ /|         \ trigger Initiate |      Create |       |
| Gateway /     |/ |          +--------------+  |             |       |
| Load Balancer |  |                          \ |             |       |
+---------------+  |                           vv             |       |
        ^          |                        +--------+        |       |
        |          |                        |Instance|<-------+       |
        |          |                        | (new)  |                |
        |          |                        +--------+                |
        |          |                                                  |
        |          |    +--------+ +--------+ +--------+ +--------+   |
        |          |    |Instance| |Instance| |Instance| |Instance|   |
     Request       |    +--------+ +--------+ +--------+ +--------+   |
                   |                                                  |
                   |    +--------+ +--------+ +--------+ +--------+   |
                   |    |Instance| |Instance| |Instance| |Instance|   |
                   |    +--------+ +--------+ +--------+ +--------+   |
                   +--------------------------------------------------+

Figure 4: Serverless mode serving service requests
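For concreteness, the following is a minimal sketch of how one serverless platform, Knative, expresses this scaling behavior; the annotations follow Knative's documented autoscaling API, while the service and image names are hypothetical.

   # Illustrative sketch only: a Knative-style Service that scales
   # from zero and adds instances when the concurrent requests per
   # instance exceed the configured target.
   apiVersion: serving.knative.dev/v1
   kind: Service
   metadata:
     name: image-recognizer          # hypothetical service
   spec:
     template:
       metadata:
         annotations:
           autoscaling.knative.dev/minScale: "0"   # idle instances may be reclaimed
           autoscaling.knative.dev/maxScale: "20"  # hard cap of the local pool
           autoscaling.knative.dev/target: "10"    # concurrent requests per instance
       spec:
         containers:
           - image: registry.example/recognizer:latest

The maxScale cap makes the limitation explicit: scaling stops at the boundary of the local resource pool, with no visibility into idle capacity elsewhere.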

However, the resources of a single resource pool or edge cloud are always limited. In a context of ubiquitous resources and services, service scheduling urgently requires identifying the expectations of a service request and perceiving the distribution and utilization of resources, in order to dynamically schedule requests and achieve an optimal workload distribution.

4.2.3. Observability

In the current service mesh, observability is generally conducted on a per-cloud basis using application performance management (APM) approaches such as distributed tracing. However, when the service mesh architecture moves from a single cloud domain to multiple cloud domains, e.g., edge clouds distributed across different physical locations or clouds owned by different CSPs, existing approaches have two major shortcomings. First, APM approaches rely on instrumentation to obtain traces, metrics, and logs, which requires modifying code and republishing applications. In the multi-domain scenario, such an intrusive process is either not allowed or leads to maintenance difficulties such as code conflicts, hindering the adoption of APM approaches. Second, APM approaches focus on collecting statistics associated with the business logic and the micro-service framework, but fail to obtain measurements regarding the infrastructure, such as system calls and network transmissions. This causes observation blind spots for the service mesh in the multi-cloud scenario, since inter-service communication generally traverses a complicated transmission path consisting of various gateways. For example, a transmission path between two micro-services A and B may go through pods, nodes, KVM virtual machines, and other infrastructure such as gateways.

To complement APM approaches, new technologies such as the extended Berkeley Packet Filter (eBPF) have been introduced to collect statistics associated with the cloud-native infrastructure, such as system calls, gateways, and sidecars. The statistics are then aggregated to enable the calculation of performance metrics, e.g., RED (Request/Error/Delay), for the entire stack and to facilitate distributed tracing through the correlation of call logs. However, since the byte streams collected by eBPF generally carry no business semantics, aggregation is difficult, especially in cross-thread communication and asynchronous invocation scenarios. Moreover, since the underlay network is invisible to the overlay tunnel, when a failure occurs in the underlay there is no way for the eBPF approach to correlate it with the overlay tunnel and take action in a timely manner. Consequently, achieving whole-stack observability for the east-west traffic of end-to-end services remains an open problem.

5. Requirements of Service Identification for Addressing and Networking

In this section, requirements on service identification for the routing network are derived from the preceding use cases.

6. Security Considerations

TBA.

7. Acknowledgements

TBA.

8. IANA Considerations

TBA.

9. Normative References

[I-D.ietf-cats-usecases-requirements]
Yao, K., Trossen, D., Boucadair, M., Contreras, L. M., Shi, H., Li, Y., Zhang, S., and Q. An, "Computing-Aware Traffic Steering (CATS) Problem Statement, Use Cases, and Requirements", Work in Progress, Internet-Draft, draft-ietf-cats-usecases-requirements-02, <https://datatracker.ietf.org/doc/html/draft-ietf-cats-usecases-requirements-02>.
[I-D.li-rtgwg-apn-app-side-framework]
Li, Z. and S. Peng, "Extension of Application-aware Networking (APN) Framework for Application Side", Work in Progress, Internet-Draft, draft-li-rtgwg-apn-app-side-framework-00, <https://datatracker.ietf.org/doc/html/draft-li-rtgwg-apn-app-side-framework-00>.
[RFC2119]
Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/info/rfc2119>.
[RFC8174]
Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, <https://www.rfc-editor.org/info/rfc8174>.

Authors' Addresses

Daniel Huang
ZTE Corporation
Nanjing
China
Ge Chen
China Telecom
Guangzhou
China
Jie Liang
China Telecom
Guangzhou
China
Yan Zhang
China Unicom
Beijing
China
Feng Yang
China Mobile
Beijing
China
Dong Yang
Beijing Jiaotong University
Beijing
China
Dongyu Yuan
ZTE Corporation
Nanjing
China
Huakai Fu
ZTE Corporation
Wuhan
China
Cheng Huang
ZTE Corporation
Shanghai
China
Yong Guo
ZTE Corporation
Shanghai
China