It is no secret that cloud adoption is growing rapidly. Industry analysts such as Gartner estimate that the cloud infrastructure-as-a-service (IaaS) market will grow to $76.6B by 2022. While making the transition, customers often adopt multiple public clouds; in fact, more than 81% of respondents in a Gartner survey said they use two or more public cloud providers, confirming that multi-cloud is real.
In this journey to the public cloud, organizations connect to their cloud workloads and services from their on-premise infrastructure using either the public Internet or dedicated connectivity. Many enterprise customers choose dedicated connectivity to access their mission-critical applications. These are typically enterprises with one or more of the following requirements:
- Consume MPLS in their on-premise infrastructure and want to extend it to the cloud
- Need higher bandwidth throughput for applications working with large data sets
- Want lower latency for applications using real-time feeds
- Want tighter SLAs for a better user experience on public cloud workloads
Each of the public cloud providers has its own take on dedicated connectivity:
- AWS Direct Connect
- Microsoft Azure ExpressRoute
- Google Cloud Dedicated Interconnect
- Oracle Cloud Infrastructure FastConnect
Each public cloud provider offers a range of bandwidth options for dedicated connectivity, from 50Mbps to 10Gbps, and as high as 100Gbps for connecting with Google Cloud.
These dedicated connectivity options are provided by the cloud providers in partnership with colocation providers such as Equinix, CoreSite, and Digital Realty. The colocation facilities act as the new edge of the cloud network: enterprises extend their networks to a colocation facility, where they are interconnected into the cloud provider's network via one of the dedicated connectivity options above.
Today, we will take AWS Direct Connect (DX) as an example of a dedicated connectivity option. Since its launch in 2012, AWS DX has been a popular service that enterprises use to connect their on-premise infrastructure to AWS over one or more dedicated 1Gbps or 10Gbps connections. A DX connection provides enterprises with a Layer 3 virtual interface (VIF) that extends from the colocation facility into AWS. Enterprises terminate the DX connection onto their VPCs using a private virtual interface, or use a public virtual interface to reach AWS public services such as Amazon S3 or Amazon Kinesis. Enterprises frequently grow their cloud footprint quickly and across multiple VPCs, and consequently run into the challenge of extending AWS DX into each VPC: provisioning a private VIF per VPC does not scale well. To mitigate this, AWS introduced the Transit Gateway (TGW), which connects a new type of virtual interface called the Transit VIF on one end and attaches the VPCs on the other. The Transit VIF terminates on a Direct Connect Gateway (DCG), which in turn is associated with the Transit Gateway.
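To make the moving parts concrete, the sequence above — Direct Connect Gateway, Transit Gateway, Transit VIF, and the gateway association — can be sketched with boto3. This is a minimal, illustrative sketch, not a production runbook: the function name, gateway names, ASNs, and VLAN values are hypothetical placeholders, and a real deployment also needs VPC attachments, route tables, and BGP configuration on the customer router.

```python
def provision_transit_path(connection_id, vlan, bgp_asn, allowed_cidrs):
    """Sketch: attach an existing DX connection to a Transit Gateway.

    All names and ASNs below are illustrative placeholders.
    """
    import boto3  # imported lazily so the sketch can be read without the SDK

    dx = boto3.client("directconnect")
    ec2 = boto3.client("ec2")

    # 1. Direct Connect Gateway -- the anchor the Transit VIF terminates on.
    dcg = dx.create_direct_connect_gateway(
        directConnectGatewayName="example-dcg",
        amazonSideAsn=64512,
    )["directConnectGateway"]

    # 2. Transit Gateway -- the hub the VPCs attach to.
    tgw = ec2.create_transit_gateway(
        Description="example-tgw",
        Options={"AmazonSideAsn": 64513},
    )["TransitGateway"]

    # 3. Transit VIF on the DX connection, pointing at the DCG.
    dx.create_transit_virtual_interface(
        connectionId=connection_id,
        newTransitVirtualInterface={
            "virtualInterfaceName": "example-transit-vif",
            "vlan": vlan,
            "asn": bgp_asn,
            "directConnectGatewayId": dcg["directConnectGatewayId"],
        },
    )

    # 4. Associate the DCG with the TGW. The allowed prefixes are the
    #    manually specified CIDRs advertised toward the on-premise network.
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=dcg["directConnectGatewayId"],
        gatewayId=tgw["TransitGatewayId"],
        addAllowedPrefixesToDirectConnectGateway=[
            {"cidr": c} for c in allowed_cidrs
        ],
    )
    return dcg, tgw
```

Calling this function requires valid AWS credentials and an existing DX connection ID; it is shown here to illustrate how the pieces relate rather than as a turnkey script.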
The Transit Gateway design, however, has its own set of limitations and restrictions:
- Each Direct Connect connection supports a single Transit VIF
- Administrators must manually enter the CIDR ranges of the VPCs on the TGW for advertisement to the DCG, which then advertises them to the on-premise routers
- The number of routes advertised from the TGW to on-premise routers is limited to 20
- The number of routes advertised from on-premise routers to the cloud is limited to 100
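Because the TGW-to-on-premise advertisement is capped, administrators often want to validate and summarize their VPC CIDRs before entering them. A minimal sketch using Python's standard `ipaddress` module — the limit constants mirror the figures above and are illustrative, since actual AWS quotas can change:

```python
import ipaddress

# Limits quoted above; actual AWS quotas may change over time.
MAX_ROUTES_TO_ONPREM = 20
MAX_ROUTES_FROM_ONPREM = 100

def summarize_prefixes(vpc_cidrs):
    """Collapse adjacent or overlapping VPC CIDRs into the fewest supernets."""
    nets = [ipaddress.ip_network(c) for c in vpc_cidrs]  # raises on a bad CIDR
    return list(ipaddress.collapse_addresses(nets))

def check_advertisement(vpc_cidrs, limit=MAX_ROUTES_TO_ONPREM):
    """Return summarized prefixes, or raise if they still exceed the limit."""
    prefixes = summarize_prefixes(vpc_cidrs)
    if len(prefixes) > limit:
        raise ValueError(
            f"{len(prefixes)} summarized prefixes exceed the limit of {limit}"
        )
    return prefixes

# Two adjacent /17s collapse into a single /16, consuming one route
# slot instead of two:
print(check_advertisement(["10.0.0.0/17", "10.0.128.0/17"]))
# → [IPv4Network('10.0.0.0/16')]
```

Summarizing adjacent VPC CIDRs this way is a practical workaround for the 20-route cap, at the cost of coarser routes toward the on-premise network.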
Furthermore, as I indicated earlier, most enterprises won't restrict themselves to a single public cloud provider, but will instead distribute their workloads across AWS, Azure, and GCP. As they extend connectivity into this multi-cloud environment, enterprises are also concerned with securing the traffic to and from the on-premise network, as well as between cloud instances.