Where Network Meets the Cloud
Atif Khan and Dan Cruz
Oct 22 2020 | 48 mins
Networking is on the move. Compute, storage and applications are all now born in the cloud and consumed as a service. The network, however, has remained behind, mired in the complexity and cost of do-it-yourself approaches. Until now!
Join Atif Khan, CTO & Founder of Alkira, and Dan Cruz, Network Architect at Koch Industries. Koch is an American multinational corporation and the second largest privately held company in the United States, with over 130,000 employees and annual revenue in excess of $115B (according to Forbes). Koch subsidiaries are involved in energy, automotive components, forest and consumer products, ranching, glass, investing and other industries.
In this interactive webinar Dan will discuss Koch’s journey to a modern network that operates in full synergy with the cloud. You will learn:
- What challenges Koch Industries has faced on its cloud adoption journey
- Why it has chosen an Alkira solution over hardware-based or do-it-yourself (DIY) alternatives
- How it has realized its vision of as-a-service network delivery for a variety of cloud deployment use cases
Leveraging Alkira’s solution, Koch was able to achieve unprecedented agility by expanding its cloud infrastructure from a single cloud to multi-cloud in one day instead of months.
Speakers:
Atif Khan, CTO & Founder, Alkira
Dan Cruz, Network Architect, Koch Industries
Webinar Transcript
Hello and welcome to this webinar – this exciting webinar. My name is Syed Ali. I am very fortunate to be here with Atif Khan and Dan Cruz. We have a great topic and a lot to cover today. Atif Khan is the CTO and co-founder at Alkira, and Dan Cruz is a network architect at Koch Industries. Dan has been doing networking for over 20 years now. Five years ago we met at Koch Industries. Dan has had many accomplishments at Koch Industries. He’s deployed a global [unintelligible 00:00:44] across 650 sites. He designed and deployed Koch’s datacenter backbone, and designed and deployed Koch’s first [unintelligible], as well. We’ve had the luxury of working with him. Atif, did you want to add anything?
Thank you, Syed. Danny, it is a pleasure to have you on this webinar with us. I can start by giving our audience a brief history of how we met Dan. It was, I think – Dan, correct me anytime if I’m mistaken – early to mid-2015 when Koch started evaluating different SD-WAN vendors, and Viptela was one of them.
They ended up selecting Viptela for the global SD-WAN deployment at that time. And then when Amir and I started Alkira, back in May of 2018, we engaged with Dan and his team. Dan talked to us about Koch’s cloud deployment at that time, and the cloud infrastructure his team had built based on cloud on-ramps. And then Amir and I shared Alkira’s vision with him.
Koch’s team, and especially Dan’s team, had been building in-house cloud network infrastructure to enable and onboard AWS in different regions across the globe for the last few years. These guys have a wealth of cloud networking experience.
And during the last two years, Dan and his team have provided us valuable feedback on our solution. That feedback has greatly helped us build a comprehensive yet simple-to-use global unified cloud networking platform, which is delivered and consumed as a service. We call it our Network Cloud. And with this, I will hand it to Dan to take us through his presentation.
Thanks, Atif. I really want to start off, first, by thanking you and Syed and the Alkira team for inviting me here today to talk about Koch’s cloud journey. Some of the things that you were saying there early on really took me back to the early days when we first met, and that little Reno office, talking about SD-WAN and the future of Koch’s Network.
A little background on Koch Industries and who we are and what we do and our enterprise. Koch Industries is one of the largest privately held companies in America, and we’re based out of Wichita, Kansas. There are a number of Koch companies some people might be familiar with, like Georgia Pacific and Molex, Invista, Guardian. Guardian, who makes glass, and Georgia Pacific, who makes consumer products, like Dixie and Angel Soft. We also have companies that do ranching and fertilizer, software, data analytics. The whole gamut, really, within Koch Industries.
We’ve reinvested a significant portion of earnings into the business in the past six years and made nearly $30 billion in technology-related investments alone. I specifically work for KGS, or Koch Global Services. We’re the managed service provider for all the different Koch companies. There are seven global networks that we maintain for Koch, and over 19,000 network devices that we manage. We’re in 70 different countries, roughly 700 remote locations, and thousands of applications. So, as you can see and tell, we’re a pretty large and complex organization.
Going back to where this all started, five years ago, six years ago, as Atif was saying, when we first met. The enterprise was really a traditional enterprise. We had MPLS circuits at all of our remote locations. All of our traffic backhauled to our on-prem datacenters in Wichita and Green Bay. Remote sites had small T1 circuits and DSL for backup. Very small, traditional network capability. And we were really a limiting factor for the businesses. The businesses were asking for SaaS-based applications to transform the way that we were doing things. And the timing was ideal, really.
SD-WAN was just coming into the market. Network technologies were changing. So we had to sit back and think: how do we go from being a limiting factor or a constraint to being an enabler? And what does that mean? What does the future of Koch’s network look like? We thought about things like the internet is the network, identity is the perimeter, any device is a work device, and really cloud is the datacenter. What does that mean, cloud is the datacenter?
Initial thought was, “Hey, let’s move our applications to SaaS-based applications.” We were grounded in reality by the application teams fairly quickly. Not all applications can become SaaS-based applications. Some of them have been around for quite some time and it would take a while to refactor them. So what does that mean for the network? Can we at least become more agile and move things into a private cloud, or move things to AWS, which is our preferred cloud provider? And how do we provide connectivity to the different cloud providers that are out there, like Google and Azure and AWS?
We started thinking about datacenter transfers, application requirements, latency, bandwidth, all of those things. And how do we connect to the cloud? We came up with three different versions, and we knew we wouldn’t be able to get to the last one for a while, because we didn’t really see anything in the market that was capable of doing it. The first version – and really our thoughts around the first version were: We have all of this data in our on-prem datacenters and we have to get it into the cloud. So how do you do large data transfers to the cloud? And there are a number of ways to do that. We chose to bring in large direct connects and build what we refer to as our transport hub: physical hub locations in four different locations within the U.S., with large datacenter interconnects between them, and a large direct connect to AWS.
We also looked at Express Route with Azure. We looked at co-location facilities and did some experiments with them, as well. It was just extremely complex, not just from a technology perspective, but from a process and people perspective, to manage and maintain contracts and access and all the things that go into that.
And then we thought about, well, what comes next after the physical? Looking across other technologies and how things have evolved, our vision was, hey, really, physical, then virtual, then as-a-service is how things should transform within our transport hubs. So our physical appliances move to virtual appliances in our cloud providers. And then at some point, hopefully, we’ll be able to do as-a-service, if it’s available in the market.
So, what did we end up doing for our first iteration? Like I said, we brought in large datacenter interconnects and direct connects. We spent 18 months, from idea creation to actual production and implementation. We spent $2 million and over 2,000 labor hours actually implementing all this across just four locations within the US. We had to think through who are the right circuit providers to bring in? Who are the right cloud providers? How big of a circuit do we start with?
We actually started with a one-gig circuit at three different locations, and very quickly realized that that wasn’t big enough, and moved to 10-gig circuits at three different locations. What was the right type of hardware to bring in? At the end of the day, we ended with four hub locations and three 10-gig direct connects. There are five 10-gig datacenter interconnects so that we can do large data transfers between datacenters as well as the cloud. We ended with geographic internet failover. We reduced our MPLS circuits and internet circuits at our hub locations down to a single MPLS and single internet circuit at each hub location, and we used geographic failover in that space.
And what does that look like? So really, here’s kind of our high-level backbone diagram, with the circuits that we brought in. SD-WAN in there, AWS, the different regions that we’re in with AWS, and the latency between them. As it stands today, between our primary facility and headquarters location in Wichita to AWS, we’re looking at roughly 45 milliseconds. And that works for the time being to do the large data transfers. Some of the challenges that we faced with the application teams was getting them to think differently about how they do their data transfers. A single TCP stream versus multiple UDP streams. Those types of things.
When you go from an on-prem datacenter, where you have less than a millisecond in latency between your backend and your frontend, to the cloud, and you increase the latency by 45 milliseconds, it really impacts the amount of throughput that’s available to the applications accessing the backend systems.
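To make that latency-versus-throughput relationship concrete, here is a back-of-the-envelope bandwidth-delay-product calculation in Python. The window size and RTT values are illustrative assumptions, not Koch’s measured numbers, but they show why a single TCP stream that flies inside a datacenter crawls once you add roughly 45 milliseconds of round-trip time.

```python
# Rough single-TCP-stream ceiling: throughput <= receive window / round-trip time.
# The 512 KB window and the RTT values below are illustrative assumptions.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP stream given the receive window and RTT."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

for rtt_ms in (0.5, 45.0):  # sub-millisecond in-datacenter vs ~45 ms to the cloud region
    cap = max_tcp_throughput_mbps(512 * 1024, rtt_ms)
    print(f"RTT {rtt_ms:>4} ms -> ~{cap:,.0f} Mbps per TCP stream")
```

With a 512 KB window, 45 ms of RTT caps a single stream at roughly 90 Mbps, in the same ballpark as the ~80 Mbps single-stream figure Dan quotes later; running many streams in parallel is what recovers the full circuit capacity.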
Once we completed our Transport Hub 1.0 initiative, as well as the SD-WAN initiatives, we started looking at how we could virtualize our cloud on-ramps. How do we get more capability into the cloud? And our businesses were asking us for capabilities. They were asking us for local egress for AWS resources out to the internet directly, versus having to backhaul to Wichita or one of our other locations. They were asking for third-party ingress and remote site access. Like I said, our first iteration only had an AWS direct connect. In order for a remote site to access AWS resources, it had to backhaul through one of our datacenters. So even a remote site sitting in Georgia or Virginia had to backhaul to Wichita in order to go up to AWS.
So how do we resolve those problems? How do we reduce the latency and increase the capabilities to the cloud? We looked at virtualizing everything and doing a DIY-type capability with cloud on-ramps. It took us about six to nine months to design and test and validate everything before actually implementing for our first customer.
The things that we had to consider were really what native pieces within AWS do we leverage, versus what do we add on? So what vendors do we bring in? What technologies do we bring in? Do we do load balancing? Do we do WAN acceleration? Those types of things. How do we facilitate remote site access for sites that don’t have SD-WAN deployed? Or do we make SD-WAN a requirement for remote site access? Do we leverage native AWS load balancers, or do we bring in something like F5s? And then, what are the limiting factors?
So, there are some limiting factors that we found out about as we looked at AWS and its native capabilities. Even within the transit gateway, the route limits and the complex routing capabilities just weren’t there to meet our needs. And so we really had to build some custom things.
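To give a sense of what those route limits look like in practice, here is a small, hypothetical boto3 check of how many routes a transit gateway route table is carrying. The route table ID and region are placeholders, and the figure of roughly 10,000 routes per route table is AWS’s published default quota rather than a number from Koch’s environment; an enterprise advertising tens of thousands of prefixes outgrows it quickly.

```python
import boto3

# Placeholder region and route table ID; not Koch's actual environment.
ec2 = boto3.client("ec2", region_name="us-east-1")
TGW_ROUTE_TABLE_ID = "tgw-rtb-0123456789abcdef0"

# search_transit_gateway_routes returns at most 1,000 routes per call and
# flags whether more exist beyond that window.
resp = ec2.search_transit_gateway_routes(
    TransitGatewayRouteTableId=TGW_ROUTE_TABLE_ID,
    Filters=[{"Name": "state", "Values": ["active", "blackhole"]}],
    MaxResults=1000,
)

print(f"{len(resp['Routes'])} routes returned "
      f"(additional routes available: {resp['AdditionalRoutesAvailable']})")
# A default quota on the order of 10,000 routes per route table leaves little
# headroom when the enterprise routing table runs to ~35,000 prefixes.
```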
At any rate, we ended up with 10 cloud on-ramps deployed, specifically to two regions in the US. We have about 85 VPCs today that leverage the cloud on-ramps. We spent about 1,000 labor hours building that. We have local egress for AWS resources, we have remote site connectivity, and we have ingress capabilities.
So what does this look like? Pretty complex. We have what we call our transit VPC, and it has two Cisco CSRs in there. We land the direct connect on those. It has underlay connections to what we refer to as our security VPC, where our firewalls are hosted and where we have third-party ingress occurring. And then it also has underlay connections to what we call the SD-WAN VPC, and that’s how remote sites come in.
And then we do multiple tunnels to the transit gateway with ECMP, so that we can really take advantage of the throughput capability there. Otherwise, we’d be limited to really less than a gig of throughput. I think the max that we’d get across a single tunnel was about 900 megs when we did our testing.
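For readers who want to picture what “multiple tunnels with ECMP” means in AWS terms, here is a minimal boto3 sketch. It is not Koch’s configuration – the ASNs, IP addresses, and the choice of two customer gateways are illustrative assumptions – but it shows the two key pieces: ECMP has to be enabled on the transit gateway, and each additional BGP-based VPN connection adds aggregate capacity beyond the single-tunnel ceiling (roughly 1.25 Gbps per tunnel per AWS’s published limits, close to the ~900 Mbps Dan observed).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Transit gateway with ECMP enabled so multiple VPN tunnels can share the load.
tgw = ec2.create_transit_gateway(
    Description="illustrative transit gateway (placeholder values)",
    Options={"AmazonSideAsn": 64512, "VpnEcmpSupport": "enable"},
)["TransitGateway"]

# Two customer gateways pointing at the on-prem routers (the CSRs in the
# diagram); the public IPs and BGP ASN are placeholders.
cgw_ids = []
for public_ip in ("198.51.100.10", "198.51.100.11"):
    cgw = ec2.create_customer_gateway(BgpAsn=65010, PublicIp=public_ip, Type="ipsec.1")
    cgw_ids.append(cgw["CustomerGateway"]["CustomerGatewayId"])

# One BGP-based VPN connection per customer gateway. With ECMP enabled on the
# transit gateway, equal-cost BGP paths across the tunnels are load-balanced,
# so aggregate throughput grows beyond what a single tunnel can carry.
for cgw_id in cgw_ids:
    ec2.create_vpn_connection(
        CustomerGatewayId=cgw_id,
        Type="ipsec.1",
        TransitGatewayId=tgw["TransitGatewayId"],
        Options={"StaticRoutesOnly": False},  # dynamic (BGP) routing, needed for ECMP
    )
```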
And so, at any rate, there are limited resources that can support this model within our environment, just because of the amount of complexity. There are really three different locations in which a default route can come in, and multiple route maps and policies in place to block and deny traffic, route traffic appropriately, and treat things the way they should be. It’s super complex to manage, and there are really limited resources that have access to this environment, that can manage it and make changes in it, for what we consider some of our most critical applications.
Dan, if I may jump in on the previous one. When you were going out and connecting your [unintelligible 00:16:51] service into AWS, did you have to do any cleanup on the host [VPC] side, where [unintelligible] IGW, that you didn’t want going direct out and you wanted to go through the transport hub?
I would say no. When we started thinking about cloud, we thought about SaaS-based applications, and then we thought about applications born natively in the cloud, and applications that required connectivity back to Koch. What we refer to as the traditional or legacy VPCs – those have requirements to connect back to Koch’s network. And those were built with no IGWs or anything like that. They were directly connected originally to our direct connects. And so we did have to migrate them over, but there weren’t any IGWs or VGWs involved in the initial deployment of those VPCs. So when we transitioned them over, it was fairly straightforward. I think flipping them over took less than five seconds of downtime, if that.
OK, cool. Thanks.
And like I was saying, it’s fairly complex, what we’ve built. It took six to nine months to design and build, and that was only for AWS. If we were to try and do something similar with Google or Azure, we might be able to do it in less than six months, but there was certainly going to be some design, testing, and validation lead time to add in another cloud provider.
As the network architect for Koch, I was sitting back and really looking at these designs and what we had implemented, and it was a lot. It was a little bit overwhelming how complex things were. And I think, Syed, that’s when you and Atif started talking about the challenges of cloud and the lack of native capabilities within the different cloud providers for the complex networking requirements that large enterprises have.
Yeah, I mean, even today, right? There are only maybe five people on the network team that really know how these cloud on-ramps are built and how complex they are for Koch, and what all the different things are that go into that, and how to make changes without causing an impact. I recall, Syed, you came down to Reno and I was telling you how complex things were and how long it took us. You were telling me that a lot of the other customers that you had talked to had done exactly the same thing, and it had taken them just as long to do it.
I had reached out to one of my counterparts at Enfore who had presented at AWS something very similar to what we had built. We definitely saw a need in the marketplace for an as-a-service capability that could abstract the complexity of networking and [unintelligible 00:20:46].
Yeah, exactly. And most customers we spoke with were actually at the Transport Hub 1.0 phase, where they hadn’t even journeyed off to building these overlays themselves. It’s too difficult to do on your own. So most customers had either [unintelligible 00:21:05] or datacenters, or they had the physical devices, and all the traffic would be hairpinning back to get any kind of firewall services or VPC-to-VPC communication. And if it was intercloud, same thing. They would have to bring that traffic back.
Yeah, the moment you bring in [unintelligible 00:21:29] services, like firewalls and stuff, and start integrating those – even with [unintelligible] – it just becomes so complicated so quickly. And that complex slide you put up there, Dan, becomes even more complicated when you have multiple clouds and multiple services and traffic steering policies and different use cases that you’re trying to solve with that.
Yeah. You guys really grounded me in reality in some of those early conversations we were having. I was so proud of what we had done with our DIY cloud on-ramps, right? It was complex. It was really cool from a networking perspective being able to do something like that and facilitate it for the business. Something to be proud of, right? But then you started asking me questions, right, about, “Hey, how soon until you go to Azure? How soon until you go to Google? What happens when you want to expand regions to APAC or EU? What other capabilities do you need?”
And thinking through all that, you really start to realize that the complexity becomes exponential. The administrative overhead and the management and automation needed to facilitate all of that grow quite a bit, especially when you’re talking about multi-cloud, multi-region.
And then building this infrastructure is one thing; managing it, running it, operating it, all the day-2 operations if something goes wrong, troubleshooting, all of this stuff, is another. Having end-to-end visibility, the details, on [transit] cloud, all of that. It just becomes a monumental effort.
And there’s another way to think about this, as well. What I’ve noticed is that most customers that spend a lot of time doing this and building this connectivity into probably a single cloud, after that, they’re very reluctant to even entertain the idea of going to multiple clouds, right? So their business may want to go to Google, but they know the effort it took to build the plumbing into a single cloud, and they really don’t want to take that on again. So then the businesses and enterprises force themselves to stick to a single cloud.
Yeah. Some of the businesses would come and ask and have thoughts about leveraging Azure or Google. Every time they asked, it was like, “Hey, it’s going to take six to nine months to figure out how we’re going to do that and how we’re going to build that connectivity for you to facilitate what you need.” And then we started thinking about, well, you know, data gravity and how does that impact us? If we can’t leverage shared infrastructure between the clouds, or agnostic infrastructure between the clouds, then we’ll have to backhaul through one of our datacenters, right?
So now I’m incurring 45 milliseconds of latency from Wichita to AWS, and then another 45 milliseconds of latency to Azure or Google in the same region.
OK, so, at that point, the Alkira team and Atif and Syed had shared with us what they were working on with Alkira. And so we really started thinking about Transport Hub 3.x as a service. What are the things that Koch needs to go and test to ensure that we can do that? We put a project plan together, and these are some of the things that we thought about: datacenter connectivity, AWS Direct Connects, AWS, Azure, SD-WAN. How do we provide that connectivity? What are the throughput requirements? All of these are the things that we wanted to go and test. And that was about, what, a year ago? Almost a year ago, today, that we went out and tested these things within the Alkira platform.
And then as we were actually testing and deploying these things into the production environment, one of the challenges that we ran into, or one of the more complex things with the Alkira stuff – and I wouldn’t even say it was Alkira. It was more that Koch is just complex, right? We had to build an underlay from our datacenters to the Alkira platform. And the complexity really is that we have, like I said, seven different global networks and seven different network containers with seven different routing tables all on the same device. And so that’s really where the complexity came in. But that had everything to do with Koch’s environment and really nothing to do with the Alkira platform.
In fact, Alkira – I think you guys provided a fairly concise configuration and design for how we would connect our datacenters to the Alkira platform.
And just to be clear, this was to extend your existing direct connect into Alkira?
That’s correct. That was to extend our direct connect into Alkira, and it was around that same time that we realized – I think it wasn’t clear, at least to me, when we talked about all the things that Alkira could do for Koch when it came to cloud on-ramps – I didn’t realize that our physical infrastructure at that point could be leveraged. Our AWS direct connects become agnostic to the cloud. They’re connected to the Alkira platform, and we can leverage them to access Azure or Google or AWS across the Alkira platform, which was huge for us. Because when we initially worked on our Transport Hub 1.0 initiative, bringing in direct connects, we were going to have to bring in Azure Express Route as well as AWS Direct Connect to connect to both clouds.
Well, now I can share that AWS direct connect across the cloud providers, which is a pretty big deal for us. This is just a screenshot of one of our lines of business, where we confirmed that Alkira was receiving all the routes from that line of business – roughly 8,000 routes. One of the things that’s always a concern for Koch, because of how large and complex we are, is scalability: can you handle everything we’re going to throw at you? I think we learned a lot of those lessons during our [unintelligible 00:28:44] deployment.
At any rate, you can see here that we’re sending roughly 8,000 routes from one line of business to the Alkira platform. Across the seven lines of business, if I were to add them all up, it’s roughly in the realm of 35,000 routes, I would imagine.
At the end of our pilot, this is what we had deployed and for the most part this is still what we have deployed, today, with a few exceptions. But we had 2 CXPs deployed in two different regions. We had Azure connectivity. We had AWS connectivity. We had direct connects. We have SD-WAN appliances deployed. We have PAN firewalls deployed for local egress and ingress capabilities.
And the interesting thing is, like I said before, it took six to nine months to deploy to a cloud provider in Koch’s environment. We stood this up in less than a few weeks. And, in fact, when we wanted to connect Azure for testing with Windows Virtual Desktop, the endpoint team came to us with that request on a Friday. The cloud team provisioned the account on a Monday, and the network team provisioned the network on a Tuesday. And they were up and running. That’s how quickly we were able to move. And it only took us those three days because of planning. It had really nothing to do with the [unintelligible 00:30:26]. Change controls and planning collaboration on the Koch side and stuff.
Well, there was the weekend in there.
So, here, one of the biggest things on the [unintelligible 00:30:44] slide that you have is extending segmentation across regions. It takes months, and much longer if you try to do that with a physical-based underlay or across global facilities, to extend multiple segments. That was one big thing for Koch: to be able to maintain that segmentation.
Absolutely. Even in our DIY cloud on-ramps, where we have lines of business in multiple regions, we have to do BGP peering and VPC peering between the different AWS regions. That can be painful to manage and troubleshoot when there are problems. And so Alkira doing that for us, abstracting that complexity and simplifying the management, is very helpful. It allows us to leverage other resources that may not necessarily have the in-depth technical skill that it would take if you had to do it yourself, versus subscribing to a service.
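As context for what that DIY cross-region plumbing involves, here is a hedged boto3 sketch of a single inter-region VPC peering plus the return routes. The regions, VPC IDs, route table IDs, and CIDRs are placeholders, not Koch’s environment, and a real deployment repeats this pattern (plus the BGP side) for every pair of lines of business and regions, which is where the management and troubleshooting pain comes from.

```python
import boto3

# Requester side (e.g. us-east-1) and accepter side (e.g. us-west-2).
# All IDs and CIDRs below are illustrative placeholders.
east = boto3.client("ec2", region_name="us-east-1")
west = boto3.client("ec2", region_name="us-west-2")

peering = east.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",        # requester VPC in us-east-1
    PeerVpcId="vpc-bbbb2222",    # accepter VPC in us-west-2
    PeerRegion="us-west-2",
)["VpcPeeringConnection"]
pcx_id = peering["VpcPeeringConnectionId"]

# The peering only carries traffic once it is accepted from the remote region.
west.get_waiter("vpc_peering_connection_exists").wait(VpcPeeringConnectionIds=[pcx_id])
west.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side also needs a route toward the other VPC's CIDR via the peering.
east.create_route(RouteTableId="rtb-aaaa1111",
                  DestinationCidrBlock="10.20.0.0/16",
                  VpcPeeringConnectionId=pcx_id)
west.create_route(RouteTableId="rtb-bbbb2222",
                  DestinationCidrBlock="10.10.0.0/16",
                  VpcPeeringConnectionId=pcx_id)
```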
We saw the same thing with our security team, right? So we deployed the PAN firewalls. Alkira solved a major problem for us around autoscaling the firewalls. In our own DIY solution, we weren’t able to autoscale our firewalls, and Alkira can do that for us. We have flexible firewall deployment and policy creation through the intent-based policy within the Alkira platform. We can redirect traffic to the PAN firewalls as needed, or always have it there, to address issues, and really reduce the complexity of managing that platform. In fact, one of our security engineers who has been working on the Alkira platform – he’s a – what’s the word I’m looking for, here? He was an intern last year, so he’s still new to the networking and security industry. But he was able to jump right into the Alkira platform and start managing it. Whereas, historically, with our DIY capability, we had our most senior engineering resources on that to do the configuration and deployment.
It’s awesome working with him. My favorite part about this is customers can really focus on the security policies – just pushing the policies and selecting which specific type of traffic needs to go where – and not have to worry about bootstrapping the VMs, maintaining traffic symmetry, and how to stitch traffic to the actual instances. Not only stitching it, but stitching it on a per-business-unit or per-segment basis. That becomes very challenging and difficult. I know your team, the security team especially, really liked the fact that they could focus on pushing the security policies and everything else is taken care of.
We did some performance testing from our datacenters to the cloud through the Alkira platform. I think it’s important to keep in mind that during this performance testing we were testing to confirm expectations, not max out the platform. We wanted to ensure that the throughput and latency we were seeing were expected and really what we were paying for as part of the service. So, from our on-prem datacenter in Wichita to AWS, we were getting 42 milliseconds. As you can see in the previous slides, that’s what we kind of expected. We were getting 45 milliseconds before, and really that’s just a factor of the geographic distance between Wichita and US East.
And then we were getting about 80 megs of throughput on a single TCP stream, which is expected due to the latency. And then when we ran multiple UDP streams with iperf, we were seeing that we were getting the full capability, or capacity, of the available bandwidth. We were able to achieve 1 gig of throughput for that testing. And so that really met our expectations. Like I said, we weren’t trying to max out the platform, we were just trying to confirm expectations of the service we were receiving from the Alkira platform.
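As a sketch of how that kind of confirmation test can be scripted, here is a small Python wrapper around iperf3 that compares a single TCP stream with parallel UDP streams. The server address, test duration, stream count, and per-stream rate are assumptions for illustration, not the commands Koch actually ran.

```python
import json
import subprocess

IPERF_SERVER = "203.0.113.50"  # placeholder: an iperf3 server in the cloud VPC

def run_iperf(extra_args):
    """Run iperf3 against the test server and return its parsed JSON report."""
    cmd = ["iperf3", "-c", IPERF_SERVER, "-t", "30", "-J"] + extra_args
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

# Single TCP stream: bounded by window size / RTT, so ~tens of Mbps at ~40+ ms.
tcp = run_iperf([])
print("single TCP stream:",
      round(tcp["end"]["sum_received"]["bits_per_second"] / 1e6), "Mbps")

# Ten parallel UDP streams at 100 Mbps each: enough to fill a 1 Gbps path.
udp = run_iperf(["-u", "-b", "100M", "-P", "10"])
print("parallel UDP streams:",
      round(udp["end"]["sum"]["bits_per_second"] / 1e6), "Mbps")
```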
We also did cloud-to-cloud testing. Between clouds, we were seeing 6 milliseconds of latency and over 180 megs of throughput. That’s on a single TCP stream between the clouds. Again, we were really just looking at what we expected to be getting, not necessarily trying to max out or break the platform and push it to its limits.
But this, in itself, was kind of a big deal for us. I mentioned earlier about data gravity and having to backhaul to our datacenters if we wanted to connect two clouds together or leverage two clouds. So kind of a big deal when you think about, OK, if I’m getting 180 megs on a single TCP stream, if I change that to UDP and leverage multiple streams, what can I achieve throughput-wise?
And then we talk about Day-2 operational visibility. These are the things that are available today in the platform. You can see bandwidth utilization, firewall utilization, and scale. There’s detailed logging, application and policy troubleshooting. Alkira continues to add functions and capabilities in this space.
Moving on. As we look to the future, what does that look like? As I think about Koch, and what Koch needs to be successful, and all the initiatives that we want to drive forward, like I said, we’re a fairly large and complex organization. Some of the things on our roadmap are moving away from our on-prem datacenters and going all in on the cloud. In order to do that, we have some complex routing capabilities that exist in our on-prem datacenters for business-to-business or cross-container communication.
A lot of these things, Alkira can help us with, or will help us with. We’re working very closely with them. Some of these things we haven’t implemented today, even though they’re available within the Alkira platform. And that’s simply just a matter of Koch’s complexity and us having to plan and prioritize when we’re going to do that work.
Major takeaways, and really what I’d like the audience to remember from this talk, are that end-to-end deployment for Koch went from months of planning to just weeks and days of planning. Configuration went from days, or many hours, of work down to hours, and even, in some cases, minutes. And then provisioning time really changed. It’s now minutes of provisioning time, whereas before we spent days and hours configuring and provisioning things. Now, things are very quick within the Alkira platform.
We enabled multi-cloud opportunities within Koch that didn’t exist before, really driving demand and business transformation within multi-clouds. High throughput/low latency between clouds. Network and security services in the marketplace within the Alkira platform. And then we continue to reduce and optimize our operations through the use of the Alkira platform.
You know, Dan, I’ve got to add something here. I loved the call we had one time with you and Mike Worthington, your co-architect, where you two were debating whether the network team was faster or the [unintelligible 00:39:40] was faster. That’s usually unheard of.
Yeah. You know, I said early on that the network was a limiting factor or constraint when it came to business transformation. We were just not able to move as fast. And then when you have the endpoint team wanting to do Windows Virtual Desktop, or you have an analytics team that wants to use Google for their analytics capabilities, and we can’t facilitate [unintelligible 00:40:10] to a different cloud provider for six to nine months? Being able to do that within just a few days is huge. The opportunity cost there is pretty amazing.
Cool. Thank you. Was this your last slide, then?
I believe so.
Looks like we have a couple of questions that came in that we can take. We have a few minutes. The first question is: Is Alkira a managed service provider, and do customers lose the visibility and control they’ve had? Atif, you want to take that one?
Sure. No, it is not a managed service. It’s a good question, and we need to clarify that. It’s not a managed service, though we have partners who are working on providing managed services to their customers using Alkira’s service. So our service – I’ll go into a little bit more detail. Our service is running on a [unintelligible 00:41:16] software platform that has been designed from the ground up, and it allows customers to build their network using our platform. So you can think of it, in a way, as network infrastructure as a service.
So, you know, building a network is one thing, but then operating the network is another, as I said earlier. You need full day-2 operations functionality and features. Visibility, monitoring, and [unintelligible 00:41:42] capabilities are key requirements for our customers – Dan was also showing this in his slides – and our platform provides all of it. So whether it’s application visibility, flow visibility, routing visibility, or controls such as network access controls or application-level controls, or troubleshooting and monitoring tools. So, again, it’s not a managed service; it’s a platform that allows customers to build and operate their network.
I would just add on to that, if you don’t mind, Syed, as someone who has leveraged the Alkira platform: I don’t think of it as a managed service provider. I look at Alkira similar to how compute looks in AWS, right? Alkira helps facilitate things for us and simplifies the deployment and management pieces, but I still have all the control over how I want to route traffic and where I want traffic to go. All of those things are still controlled by the network team within Koch.
Awesome. That’s the goal, right? To be able to give customers the visibility and the control that, one could argue, was actually hard to get without a solution like this. Thanks for that.
Dan, maybe the next question you can take. When did Koch start thinking about multi-cloud or taking advantage of other cloud services? Was it during, would you say, the Transport Hub 2.0 timeframe – or 1.0?
I’d actually go all the way back to the beginning, with our Transport Hub 1.0 initiative, where we built the large hub locations in the US. Like I said, we experimented with Azure Express Route because we did see a demand. Our businesses wanted to leverage Office 365. They wanted to leverage Windows Virtual Desktop. They wanted to leverage the Windows capabilities that exist within Azure. It was just so complex and costly to manage multiple Express Routes as well as multiple AWS direct connects. Very complex and very costly is what I would say. I’m sure there were other factors that played into it, but from a networking perspective and from a network architecture perspective, for me it came down to the cost and complexity of managing two different cloud providers. AWS is our preferred cloud provider, so we went that direction and held off on the multi-cloud capability, you know, four or five years ago when we did our first iteration.
Thanks. Thanks for that. It looks like we’re right at 45 minutes, so we’ll take one more question. Atif or Dan, either one of you can chime in on this one. How familiar do customers need to be with cloud-native constructs when using Alkira?
I’ll jump in there. I did a lot of the initial testing back in the early days of our cloud journey with AWS and Azure, and I had to learn two different sets of constructs. I had to understand how AWS did its network connectivity and provisioning versus how Azure did it. A VPC versus a VNet. What the different capabilities were. How to connect it all. There was a lot of learning that went on there for me. We had to train our operations team on the AWS stuff. With the Alkira platform, that’s all abstracted. You don’t have to know the different nuances between the platforms. Don’t get me wrong, you still have to be a network person. You still have to understand networking. There are some concepts that you need to learn. If Alkira had been around five years ago when we did our initial transport hub, we probably would be multi-cloud today. But we’re not.
Cool, thank you. Atif, would you like to add anything to that?
I think Dan summed it up very well. It’s one network that connects all clouds, public and private, datacenters, branches, even the [segregated] workforce. SaaS applications, internet, etc. And it’s all done in one way where all of these are just endpoints. This is what Alkira makes possible – one network with all these endpoints. One way to connect managed and