Introduction

With the rapid adoption of multi-cloud strategies, organizations require efficient compute instances on every platform where they deploy their high-performance workloads in order to achieve a consistent and predictable level of service across cloud substrates. Connecting these high-performance workloads across clouds is an additional challenge that can be burdensome and costly due to the nuances of each platform’s direct connectivity implementations, SD-WAN fabric integrations, and disparate security solutions.

Intel solves the first piece of this puzzle by providing a variety of high-performance instance types on each Cloud Service Provider (CSP) to meet the latency and network throughput demands of modern, multi-cloud workloads. These instances include the network-optimized AWS EC2 C6in instances, which run on 3rd Gen Intel® Xeon® Scalable processors (Ice Lake), as well as the high-performance GCP C3 machine series and the AWS R7iz instances, which are built on 4th Gen Intel® Xeon® Scalable processors (Sapphire Rapids).

The challenge of connectivity is met by the Alkira Cloud Services Exchange. Running on Intel® processors, it is the first unified multi-cloud network delivered as a service. Alkira, an Intel® Network Builders partner, enables low-latency, high-bandwidth connectivity to workloads distributed among on-premises datacenters and multiple clouds through a unified interface, at a fraction of the complexity normally associated with homegrown multi-cloud networking.

The purpose of this blog post is to show that connecting cloud workloads running on Intel® Xeon® Scalable processors over Alkira’s multi-cloud network fabric can deliver the compute performance, low latency, and high bandwidth necessary for any multi-cloud deployment strategy.

High Performance in the Multi-Cloud

Intel Cloud Instances

Modern cloud strategies distribute workloads across CSPs and private datacenters. The network latency and throughput requirements of these applications often demand high-performance multi-cloud networking. Intel continues to be a leader in high-performance network compute instances across CSPs: for example, the C6in family of AWS EC2 instances delivers up to 200 Gbps of network throughput, and the GCP C3 machine series similarly offers up to 200 Gbps. These instances are ideal for compute-, networking-, and data-intensive tasks such as high-performance web serving, CPU-based AI/ML workloads, and software-based networking appliances.
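
As a quick sanity check, a benchmark such as iperf3 can confirm how much of an instance’s advertised throughput a given deployment actually achieves. The following is a minimal sketch, assuming iperf3 is installed on both instances and an iperf3 server is already running on the peer; the peer address is a placeholder.

```python
# Minimal sketch: measure instance-to-instance throughput with iperf3.
# Assumes iperf3 is installed and an iperf3 server ("iperf3 -s") is already
# running on the peer instance; the address below is a placeholder.
import json
import subprocess

PEER = "10.0.1.25"   # hypothetical private IP of the receiving instance
STREAMS = 8          # parallel streams help approach high instance limits
DURATION = 30        # seconds

result = subprocess.run(
    ["iperf3", "-c", PEER, "-P", str(STREAMS), "-t", str(DURATION), "-J"],
    capture_output=True, text=True, check=True,
)

report = json.loads(result.stdout)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"Aggregate throughput: {bps / 1e9:.1f} Gbps")
```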

Cloud instances based on Intel Xeon Scalable processors accelerate multi-cloud workloads with key technologies such as Intel® Advanced Matrix Extensions (Intel® AMX), designed to accelerate the AI capabilities of multi-cloud solutions, and Intel® Advanced Vector Extensions 512 (Intel® AVX-512), a SIMD instruction set that boosts the performance of deep learning, cryptography, and compression-dependent workloads.
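
Before relying on these accelerators, it can be worth confirming that a given instance actually exposes them to the guest. The sketch below, assuming a Linux instance, inspects the CPU feature flags reported in /proc/cpuinfo.

```python
# Minimal sketch: verify that a Linux cloud instance exposes the Intel AMX and
# AVX-512 instruction set extensions by inspecting /proc/cpuinfo flags.
from pathlib import Path

cpuinfo = Path("/proc/cpuinfo").read_text()
flags = set()
for line in cpuinfo.splitlines():
    if line.startswith("flags"):
        flags.update(line.split(":", 1)[1].split())
        break

checks = {
    "AVX-512 Foundation": "avx512f",
    "AVX-512 VNNI": "avx512_vnni",
    "AMX tile support": "amx_tile",
    "AMX BF16": "amx_bf16",
    "AMX INT8": "amx_int8",
}
for name, flag in checks.items():
    print(f"{name:20s} {'present' if flag in flags else 'absent'}")
```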

“Good-Better-Best” High Performance Multi-Cloud Service Connectivity

Yet, to take full advantage of these instance types in the distributed multi-cloud, it must be easy to achieve high-performance network connectivity between multiple CSPs and on-premises datacenters.

Alkira Multi-Cloud Networking

Alkira’s Multi-Cloud Network fabric is built upon a resilient, high-throughput backbone that seamlessly integrates and abstracts over all CSP networking options, including direct connectivity implementations, VPC details, and individual security policy management planes. This provides a standard way of networking across all CSPs and a unified view of multi-cloud networks. Furthermore, the distributed architecture of Alkira’s multi-cloud network service allows on-premises datacenters and public clouds to connect to Alkira’s nearest points of presence (PoPs), called Cloud Exchange Points (CXPs), for a low-latency connection to Alkira’s network. Applications running on Intel cloud instances that connect through CXPs, en route to Alkira’s fast wide-area backbone, benefit from the low latency and high bandwidth expected when connecting high-performance multi-cloud workloads.
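
To illustrate the idea of connecting through the nearest CXP, the sketch below times a TCP handshake to a set of candidate entry points and picks the fastest one. The hostnames are purely hypothetical placeholders, not real Alkira endpoints; in practice, CXP selection is handled by the Alkira service itself.

```python
# Minimal sketch: choose the lowest-latency entry point by timing a TCP
# handshake to each candidate. The hostnames below are hypothetical
# placeholders for regional CXP endpoints, not real Alkira addresses.
import socket
import time

CANDIDATE_CXPS = {
    "us-east": "cxp-us-east.example.net",
    "us-west": "cxp-us-west.example.net",
    "eu-west": "cxp-eu-west.example.net",
}

def tcp_rtt(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return the TCP connect time to host:port in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

latencies = {}
for region, host in CANDIDATE_CXPS.items():
    try:
        latencies[region] = tcp_rtt(host)
    except OSError:
        continue  # unreachable candidates are simply skipped

best = min(latencies, key=latencies.get)
print(f"Nearest CXP: {best} ({latencies[best]:.1f} ms)")
```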

In addition, Alkira Cloud Services Exchange provides advanced traffic management techniques such as symmetric traffic steering and intelligent routing. Running on Intel® Xeon® Scalable processors, Alkira Cloud Services Exchange allows easy, web UI-based network design and “one-click” provisioning of multi-cloud networks on top of a highly available hyperscale infrastructure. Intel-based cloud instances and the Alkira Cloud Services Exchange form a synergy that enables seamless, high-performance multi-cloud connectivity.

Workload-Centric Instance Characterization and Network Operational Visibility

Alkira Cloud Services Exchange Portal

To get the best performance out of cloud instances and the networks that connect them, it is important to understand the precise characteristics of individual workloads and the traffic that flows between them. The Alkira Cloud Services Exchange Portal provides end-to-end visibility of traffic flowing between sites and clouds, as well as connection statuses and more, from a single pane of glass.

Multi-Cloud Network Automation Toolkit

Intel complements this visibility with the Multi-Cloud Network Automation Toolkit (MCNAT). MCNAT allows multi-cloud developers and engineers to quickly understand the performance characteristics of their workloads across private and public cloud compute instances by automating best-practice configuration of cloud workloads along with the benchmark and telemetry infrastructure needed to measure and monitor performance. MCNAT helps engineers compare workload performance across instance types and sizes, facilitating capacity planning and instance selection in the multi-cloud environment.
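
As an illustration of that kind of comparison, the sketch below aggregates per-run benchmark results by instance type. The results/ directory layout and JSON fields are hypothetical stand-ins for whatever output a benchmarking toolkit such as MCNAT produces.

```python
# Minimal sketch: compare benchmark results across instance types.
# The results/ layout and JSON fields are hypothetical examples.
import json
from pathlib import Path
from statistics import mean

results = {}  # instance type -> list of requests-per-second samples
for path in Path("results").glob("*.json"):
    data = json.loads(path.read_text())       # e.g. results/c6in.4xlarge-run1.json
    results.setdefault(data["instance_type"], []).append(data["requests_per_sec"])

print(f"{'instance type':<18}{'runs':>6}{'mean req/s':>14}")
for instance_type, samples in sorted(results.items(),
                                     key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{instance_type:<18}{len(samples):>6}{mean(samples):>14.0f}")
```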

Using open-source, multi-cloud development tools like Packer, Ansible, and Terraform, MCNAT codifies and encapsulates Intel architecture best-practice configuration into repeatable deployment scenarios of reference workloads that users can customize to match their own implementations and deployment targets.
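
Conceptually, such a scenario can be driven from a thin wrapper that redeploys the same Terraform definition with different instance types. The scenario directory and variable name below are hypothetical examples, not MCNAT’s actual layout.

```python
# Minimal sketch: redeploy a parameterized Terraform scenario for each
# instance type under test. Directory and variable names are hypothetical.
import subprocess

SCENARIO_DIR = "scenarios/nginx-reference"   # hypothetical Terraform scenario
INSTANCE_TYPES = ["c6in.4xlarge", "c6in.8xlarge"]

subprocess.run(["terraform", f"-chdir={SCENARIO_DIR}", "init"], check=True)
for instance_type in INSTANCE_TYPES:
    # Deploy the reference workload on the chosen instance type.
    subprocess.run(
        ["terraform", f"-chdir={SCENARIO_DIR}", "apply", "-auto-approve",
         f"-var=instance_type={instance_type}"],
        check=True,
    )
    # ...run the benchmark against the deployed workload, then tear it down...
    subprocess.run(
        ["terraform", f"-chdir={SCENARIO_DIR}", "destroy", "-auto-approve",
         f"-var=instance_type={instance_type}"],
        check=True,
    )
```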

With MCNAT, users know that their benchmarks reflect the current best configuration for Intel architecture on any CSP and Intel instance that a reference workload scenario supports. This allows users to quickly and confidently scale supported workloads across CSPs, cloud instance types, and network configurations without having to first understand the complexities of each provider’s virtual network I/O subsystems and generational instance architectures.

Multi-Cloud Network Automation Toolkit (MCNAT) pipeline

Summary

Deploying high-performance applications across the private and public multi-cloud is complicated and time-consuming. The benefits of using Alkira’s Multi-Cloud Network together with Intel’s highly optimized cloud instances are as follows:

Alkira Cloud Services Exchange
  • Easy connectivity of on-premises networks as well as multiple cloud networks over a high-performance cloud backbone
  • Low latency by way of regional CXPs
  • Comprehensive networking capabilities for some of the most complex enterprise networking deployments
  • Standardized networking across CSPs and service insertion for firewalls and other network services
  • Scalable network bandwidth
  • Simple to use Cloud Service Exchange Portal for network traffic visibility and telemetry
Intel® Xeon® Scalable processor Cloud Instances
  • High maximum network throughput
  • High CPU and network performance for latency sensitive workloads
  • Industry-leading workload accelerators
  • MCNAT toolkit for quickly selecting the right Intel Architecture-based instance type in any cloud
  • Available on all major cloud providers

About the Authors: Ramakanth Gunuganti & Eric Jones

Ramakanth Gunuganti is Alkira’s Chief Software Architect with over 20 years of experience in networking, security, platforms, and cloud computing. He has a deep understanding of the internals of cloud networking and is passionate about leveraging that experience to solve enterprise cloud networking problems. Ramakanth joined Alkira from Microsoft, where he was a Principal Engineer involved in building Azure ExpressRoute Direct and scaling the service to 100 Gbps. Prior to that, he was Director of Engineering at Cisco (Insieme) working on ACI and a Distinguished Engineer at Juniper Networks. He holds six patents and has a master’s degree in Computer Science from the University of Texas at San Antonio.

Eric Jones is a Cloud Software Architect with 15 years of experience planning, designing, and developing networked applications and end-to-end cloud computing solutions. His interests include the application of static analysis to network programming and the hardware acceleration of virtualized network functions.