The future of cloud infrastructure means the traditional wide area network (WAN) needs to be architected and engineered differently, becoming a dynamic, automated interconnection fabric between globally distributed data centres in the core and at the edge.

With the arrival of 5G networks and Industry 4.0, distributed cloud architectures are in the spotlight. Time-sensitive applications, such as industrial automation and autonomous vehicles, will use artificial intelligence (AI) and machine learning (ML) to interact within IoT ecosystems that use multi-access edge computing (MEC) or ‘edge’ clouds to overcome the network latency and transport costs associated with physically distant ‘core’ clouds. In other words, the trend of pushing compute resources and applications to the edge is well underway.

Until now, the WAN has mostly needed to support video streaming and SaaS-type applications. The need to support the dynamism of the cloud was largely confined to networks within the data centre itself, where virtualization and software-defined networking (SDN) became the technologies of choice. Although the WAN has gone through its own evolutionary curve, adapting IP to use multi-protocol label switching (MPLS), the underlying connectivity model has remained relatively stable. Some cloud applications, such as video streaming, have required the WAN to support much greater bandwidths, although much of that pressure was offloaded to content delivery networks (CDNs).

With the adoption of more distributed cloud architectures, the WAN – and the MPLS protocol – needs to adopt the networking principles developed for the data centre. These changes are reflected in the cloud-native architecture of 5G networks, but they are also critical to the re-design of IP and optical networks. Specifically, we need a cloud data centre interconnect (DCI) fabric that is sufficiently robust, dynamic and scalable, and that uses both the optical and IP layers for the utmost efficiency.

For instance, traffic engineering for this cloud DCI fabric needs to be scalable and flexible enough to handle dynamic traffic patterns. It should be software programmable across network layers to support new services and applications with the appropriate QoS. There are different ways to achieve this, but the most promising is segment routing (SR) combined with SDN automation. In this way the fabric can scale significantly, because it requires neither MPLS control plane signaling nor any changes to the MPLS data plane.

This allows a network to maintain state information only in edge devices, simplifying the requirements for core devices. The result is a cost/performance-optimized architecture: an affordably scalable core built from routers based on merchant silicon, paired with a more distributed, intelligent and feature-rich edge built from routers based on custom silicon.
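
To make that edge/core split concrete, here is a minimal sketch (in Python, with made-up node names and SID values, not vendor code) of the idea: the ingress edge router encodes an explicit path as an ordered stack of segment identifiers, and a core router forwards on the top label alone, holding no per-path state.

```python
# Minimal sketch: an SR-MPLS path expressed as an ordered label stack.
# Node names and SID values below are illustrative assumptions.

# Node segment IDs (SIDs) advertised by each router via the IGP.
NODE_SIDS = {"edge-a": 16001, "core-1": 16002, "core-2": 16003, "edge-b": 16004}

def build_label_stack(explicit_path):
    """The ingress edge router encodes the whole path as a list of SIDs;
    nothing is signalled to the core routers along the way."""
    return [NODE_SIDS[node] for node in explicit_path]

def core_forward(label_stack):
    """A core router only looks at the top label and forwards; it keeps
    no knowledge of the end-to-end path (state stays at the edge)."""
    top, rest = label_stack[0], label_stack[1:]
    return top, rest  # next hop decided from the top SID alone

if __name__ == "__main__":
    stack = build_label_stack(["core-1", "core-2", "edge-b"])
    print("label stack pushed at ingress:", stack)      # [16002, 16003, 16004]
    hop, stack = core_forward(stack)
    print("core-1 forwards on SID", hop, "remaining:", stack)
```

The point of the sketch is simply that the path lives in the packet header pushed at the edge, which is why the core can be kept cheap and stateless.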

This kind of DCI fabric, based on SR and SDN, can support more granular and scalable traffic engineering than existing mechanisms such as RSVP-TE, which rely on control plane signaling to communicate network state and set up network paths. Instead, with source routing, path computation element (PCE) servers maintain the network-wide segment routing topology, the state of active paths and the reserved resources in a logically centralized traffic engineering database. This removes unnecessary state from the network elements.

The edge routers communicate with the PCE server to request path computation, and the SDN controller determines when and where to establish paths based on real-time traffic and application performance requirements. Thus, paths can be established, re-routed and brought down dynamically, providing a more agile, intelligent and automated network under SDN control and without the need for complexity in the core.
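
As a toy illustration of that division of labour, the sketch below (Python, with an assumed topology and invented link latencies) plays the role of a PCE: it keeps the traffic engineering state in one central structure and computes a lowest-latency path on request, the kind of answer an edge router would then encode as a segment list.

```python
# Minimal sketch of PCE-style path computation over a centralized
# traffic-engineering database. Node names and latencies (ms) are
# illustrative assumptions, not real data.
import heapq

TE_DATABASE = {
    "edge-a": {"core-1": 2, "core-2": 5},
    "core-1": {"core-2": 1, "edge-b": 4},
    "core-2": {"edge-b": 2},
    "edge-b": {},
}

def compute_path(src, dst):
    """Dijkstra over the TE database: returns the lowest-latency path,
    which the requesting edge router would encode as a segment list."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, latency in TE_DATABASE[node].items():
            heapq.heappush(queue, (cost + latency, nbr, path + [nbr]))
    return None

if __name__ == "__main__":
    latency, path = compute_path("edge-a", "edge-b")
    print(f"computed path {path} with end-to-end latency {latency} ms")
```

Because the computation and the state live in the PCE and SDN controller, re-optimizing or tearing down a path is a control decision rather than a network-wide signaling exercise.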

A DCI fabric built as a single integrated IP/optical network with multi-layer SDN automation and control reduces operational costs and improves operational efficiency for cloud operators. Its multi-layer discovery and visibility simplify operations by making troubleshooting across the IP and optical layers easier and more efficient.

Figure 1. Scalability, flexibility and programmability provided by the DCI fabric.

It can also enable more efficient use of IP and optical resources, for example by automating the setup of resilient, dynamic optical services and by increasing traffic flow protection and router port utilization through ECMP. It also allows multi-layer traffic engineering to improve resiliency and latency for new edge cloud applications, including:

  • Ensuring optical link diversity for IP routing (see the diversity-check sketch after this list)
  • Providing comprehensive correlation of network topology for latency or other performance criteria
  • Implementing a multi-layer routing and protection strategy to optimize IP and optical networking synergies, and forward and protect traffic at the most economical layer
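
To illustrate the first point, here is a minimal sketch (Python, with an assumed mapping from IP links to the optical fibres they ride over) of the kind of diversity check a multi-layer controller could run: two IP paths are only accepted as a working/protection pair if they share no underlying shared risk link group.

```python
# Minimal sketch of an optical-diversity (SRLG) check for IP paths.
# The IP-link-to-fibre mapping below is an illustrative assumption.

# Which optical fibres (shared risk link groups) each IP link rides over.
IP_LINK_SRLGS = {
    ("edge-a", "core-1"): {"fibre-1"},
    ("edge-a", "core-2"): {"fibre-2"},
    ("core-1", "edge-b"): {"fibre-3"},
    ("core-2", "edge-b"): {"fibre-3"},   # shares a duct with core-1 -> edge-b
}

def srlgs_of(path):
    """Collect every optical risk group the IP path depends on."""
    groups = set()
    for hop in zip(path, path[1:]):
        groups |= IP_LINK_SRLGS[hop]
    return groups

def optically_diverse(working, protect):
    """True only if the two IP paths share no underlying fibre."""
    return not (srlgs_of(working) & srlgs_of(protect))

if __name__ == "__main__":
    working = ["edge-a", "core-1", "edge-b"]
    protect = ["edge-a", "core-2", "edge-b"]
    print("diverse?", optically_diverse(working, protect))  # False: both use fibre-3
```

In this example the two IP paths look disjoint at the IP layer yet share a fibre, which is exactly the failure mode that multi-layer visibility is meant to expose before a single cut takes down both paths.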

The shift to distributed cloud architectures especially favors colocation data centre providers. Many enterprises, service providers and cloud providers choose to colocate in these facilities, which give them the required proximity to end users and applications. Armed with a DCI fabric that uses segment routing and SDN to interconnect their global data centres, colocation providers are ideally placed to implement very large-scale edge cloud infrastructure that meets the performance and latency requirements of new applications such as private 5G, IoT, AI and Industry 4.0. And by significantly reducing operational costs, this data centre interconnect fabric gives them a cost-effective and competitive platform for delivering next-generation cloud-based applications and services.
