Viptela is now part of Cisco.

Optimizing the WAN for AWS & Azure

Viptela SD-WAN now offers the ability to instantiate the same vEdge CPE that is available in branch, campus and data center environments as a software router in public cloud environments, or on-premises as a Virtual Network Function (VNF) for virtual CPE use cases. Enterprises can utilize this software SD-WAN solution to stitch IaaS workloads in public cloud environments into their SD-WAN overlay, allowing seamless and secure access to these compute environments from any branch endpoint. Enterprises can also use this software router to build meshed connectivity across IaaS resources residing in a single cloud region or across cloud regions.

The Viptela Software-Defined WAN (SD-WAN) platform delivers an agile, cloud-ready network infrastructure. The major benefits of SD-WAN technology are a single overlay architecture with centralized policy and management. Viptela SD-WAN has been deployed by some of the largest banks, retailers, conglomerates, healthcare providers and insurance companies.

Webinar Highlights

  • Today’s WAN challenges in accessing AWS, Azure and other IaaS applications
  • Process of instantiating and configuring a vCPE SD-WAN instance in the cloud
  • Analytics and visibility across the overlay network
  • Resiliency considerations while optimizing SD-WAN for Azure & AWS

Download Slides Here

Presenters

Ariful Huq
Product Manager, Viptela

Ariful has 11+ years of experience spanning product management, sales, network architecture, service provider networks and operations. He has managed a number of product offerings in the MX routing platform and helped secure opportunities with large Content Providers, Public Cloud Providers, Cable MSOs and Service Providers across North America, Asia and Europe.

Ramesh Prabagaran
VP of Product Management and Marketing, Viptela

Ramesh Prabagaran has a track record of bringing disruptive and innovative networking products to market focused on carriers and enterprises. Most recently at Juniper Networks, he was a senior product line manager establishing the product vision for enterprise and datacenter routing products, and WAN-focused solutions for Fortune 100 companies.

Transcript

Ariful: Hey, thank you, Courtney. As was mentioned, we are going to be talking about optimizing SD-WAN for AWS and Azure connectivity. I’m going to do a quick introduction to hybrid cloud: what are some of the triggers for its adoption? We’re going to talk about how customers are actually connecting into public cloud providers today for hybrid cloud and what are some of the challenges that they’re going through. Then, we’re going to talk about Viptela, our solution and how it actually solves this problem of connectivity in the hybrid cloud.

With that said, let me start by talking about hybrid cloud adoption and triggers. What I’m showing you here is research done by 451 Research. They looked at adoption of hybrid cloud among multiple companies. Overall, the research states that three-quarters of the enterprises that they interviewed have adopted or are moving toward adopting hybrid cloud. As you can see, 16% are in preliminary investigation and 42% are under active evaluation, which typically means that they have some percentage of their workload that they are testing in a public cloud environment.

There is a small percentage that are still kind of looking at the architecture, but having said that, really the point here is that a lot of enterprises have already started adopting hybrid cloud and are actually taking steps towards it. What are some of the triggers for doing this, right? A lot of the use cases have been around workload elasticity. If you’re building a private data center in-house, you’re going to have to build for peak capacity. If peak capacity is what I’m building for, then there may be a lot of wasted capacity during off-peak times.

How do I do things like cloud bursting? Essentially, some percentage of my workload resides on premises at steady state, but if I actually need more capacity, I can easily burst into a public cloud environment. That is one of the use cases. Other use cases are around operational agility. If I want to quickly test out the deployment of an application in my environment, or I’m doing some new application development, why invest in buying compute resources for my on-premises data center as opposed to just spinning something up in a cloud environment, testing it out, making sure it works and then deploying it wherever I need to deploy it? Right?

Proximity to consumers. This is becoming more and more important as customers across the world, multinationals, are trying to serve their users much more efficiently. For users that are accessing a cloud resource, instead of having it deployed in a centralized location, perhaps in a private data center, why not distribute that workload across multiple geographies? Most of the public cloud providers out there have a very, very good presence on pretty much every continent, right?

There are public cloud providers in North America, South America, Europe, the Middle East and in APAC as well. They have multiple regions within those continents, so proximity of applications to users to improve user experience is certainly one use case. Resilience and availability is extremely important, and a good example is what happened with Delta a couple of weeks back, where some of their systems were down because of an electrical failure in one of their data centers.

Think about it: if Delta had actually had their workloads deployed in a cloud across multiple regions, a failure within a specific region would not have caused that type of issue. Right? If you look at a public cloud provider like Amazon or others in this space, even within a specific region they actually have multiple availability zones. An availability zone, in that case, can actually be a specific data center within that region. Even if there is a failure of an entire data center, you can still have access to your application in that region, let alone doing regional-level resiliency.

In talking to our customers, we’ve found these are definitely the triggers causing them to adopt hybrid cloud. Now that we’ve talked about why hybrid cloud is important, let’s talk about how hybrid cloud is actually being deployed today. I want to start this off with the conversations that we are having with customers around our solution: the fact that they’re looking at our solution for the branch, and what’s actually triggered them to look at us for a hybrid cloud deployment.

Let’s talk about how it’s being deployed today. Predominantly, the way it’s being deployed today is point-to-point IPSec tunnels going from a datacenter, typically, into a public cloud provider. Most public cloud providers out there offer a VPN gateway, and these VPN gateways terminate IPSec tunnels coming in from the customer environment. You’re building multiple IPSec tunnels going in from multiple datacenters across, perhaps, multiple regions, because you may not be going into a single public cloud provider in one region. You might be going into multiple regions, right?
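
As a rough illustration of what this per-tunnel plumbing looks like today, here is a minimal sketch using the AWS boto3 API; the region, ASN, IP address and VPC ID are placeholder assumptions, not values from the webinar, and every datacenter/VPC pair needs its own set of these objects.

    import boto3  # AWS SDK for Python

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # One customer gateway per on-premises edge device (placeholder ASN and public IP).
    cgw = ec2.create_customer_gateway(
        BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1"
    )["CustomerGateway"]

    # One virtual private gateway per VPC, attached to that VPC.
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
    ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0abc12345")

    # One point-to-point IPSec VPN connection per datacenter/VPC pair.
    vpn = ec2.create_vpn_connection(
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGatewayId"],
        Type="ipsec.1",
    )
    print(vpn["VpnConnection"]["VpnConnectionId"])

Multiply this by every datacenter, region and provider, and the maintenance burden described next becomes clear.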

If you have multiple public cloud providers, perhaps one set of applications in AWS and another set of applications in Azure, then you have to maintain different IPSec tunnels for different public cloud providers. This is the typical way of connecting into a public cloud environment today. We are having conversations with a lot of customers about this. I’ll give you an example: one customer has multiple private data centers today and is hosting a lot of their applications in AWS. It’s geographically distributed, so they’ve got workloads in multiple regions, and so this is a customer that is hybrid cloud today.

For them, the way to connect into the public cloud provider is to have multiple connections. It’s to have typically an Internet-based connection, which is a public connection, and to have an MPLS-based connection, which is the private way of connecting. Then, they’re asking themselves this question: how can I scale this deployment as I have more public cloud providers or more regions or more instances deployed? A scalable solution needs to come about. Then, the question they’re asking themselves is, “If I have multiple connections into a public cloud provider, how can I use these connections more efficiently?”

If I have a public connection and a private connection, perhaps a set of my applications goes over my private connection. If there’s a brownout situation on my private connection, say some sort of high latency that’s a result of the specific carrier I’m using for my private connection, how can I switch those applications over to a backup connection, which could be my public connection? These are some of the resiliency questions they’re asking themselves as they’re adopting hybrid cloud.

Another example is a large financial institution that has adopted public cloud for all their applications, and they have hundreds of VPCs across multiple regions. What they want to do is also connect between these VPCs. They want to be able to say, “Okay, VPC A, B and C in a specific region must talk to VPC X, Y and Z in another region. This must be done in a meshed manner.” All those VPCs eventually have to talk to a single datacenter. They need a very scalable solution for their connectivity between on-premises and cloud environments. A lot of these customers that are actually looking at our solution immediately see the benefit that we can offer them in the branch, and they’re thinking to themselves, “How can we take these benefits from the branch into a public cloud environment?”

Let’s talk about some of the challenges that these customers face. Scale is one of the most predominant concerns. As you scale the number of regions, as you scale the number of VPCs, maintaining these IPSec tunnels can actually be very hard to do. That’s exactly the problem that we see in the branch: when you have multiple branches connecting over a public environment, you’re building IPSec tunnels for those environments, right? There’s a scale challenge there, right? How do you maintain all these IPSec tunnels?
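
To put the scale concern in numbers, a full mesh of n endpoints needs n(n-1)/2 point-to-point tunnels. A quick back-of-the-envelope sketch (the endpoint counts are illustrative, not from the webinar):

    def full_mesh_tunnels(endpoints: int) -> int:
        """Number of point-to-point IPSec tunnels in a full mesh of endpoints."""
        return endpoints * (endpoints - 1) // 2

    # e.g. 4 datacenters plus 200 VPCs spread across regions:
    print(full_mesh_tunnels(4 + 200))  # 20706 tunnels to configure, key and monitor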

Isolation and security. This relates to the fact that in an on-premises environment, I can isolate my workloads across lines of business. Specific lines of business could only have access to specific compute resources. Right? How can I extend that level of isolation and security into a public cloud environment? That is the question that they are asking themselves, so that’s very important to solve as well. Resilient access, we’ve kind of talked about this. If I have multiple access connections into public cloud environments, how can I make sure I’m using all these connections, and how can I steer my traffic across multiple connections so that if there’s a brownout situation, I can fail over, right?

That’s another set of problems. Inter-region peering. If you have multiple VPCs across multiple regions, how do I peer across those regions, and in a very scalable fashion? Centralized monitoring and management, right? If I have a vendor that’s deployed for my branch and I’m utilizing a completely different solution for my public cloud connectivity, how can I do centralized monitoring and management? I want my branch vendor to also have the ability to extend their solution into a public cloud environment so I can have a single pane of glass to manage and control my endpoints, whether they are in a virtualized environment sitting in a data center or an actual physical branch CPE sitting in my branch locations.

Application visibility and steering. Again, this relates to how I can get more visibility into the applications that are going to the public cloud region and how I can use that information to actually steer traffic over the connections that I have going into those regions. These are some of the challenges faced by a lot of the customers that we’ve been talking to. They’re asking themselves, and vendors like ourselves, how we can solve these problems. Right?

Just a quick introduction to the Viptela approach, so we can kind of set the stage for how we are helping our customers solve this problem. The Viptela approach to the WAN is very, very unique. What we’ve come up with is a horizontally scalable solution. When I say it is horizontally scalable, we’ve essentially come up with a solution that decouples the control plane for routing and security from the forwarding plane. What we have is a set of controller elements that can be deployed in a public cloud environment or in your private data center.

We have distributed forwarding elements which can essentially go into small office or branch locations, campus, data center or into a public cloud environment as well. Right? We have distributed forwarding elements and a centralized controller. That is fundamental to our solution. We have decoupled the control plane for routing and security from the forwarding plane. What I mean by that is, think of our controllers as route reflectors, right? They’re actually reflecting routes between all these endpoints, but they also play a very, very important role in setting up the secure connectivity between all these endpoints.

That is fundamental to our solution, and the way our customers deploy the solution is, no matter what, if you have 100 sites, you start with a certain number of controllers, and you can go to tens of thousands of sites, because as you increase the number of endpoints, all you’re doing is increasing the number of controller instances. That is what it means to be horizontally scalable. You start small, and as you grow your deployment, you just add more controllers into your environment and that scales the solution. Right?

Zero-touch bring-up. As was mentioned earlier, we have the ability to build secure IPSec connectivity between all our endpoints, and we do this in multiple fashions. You can do hub and spoke, or you can do meshed connectivity, which is becoming very, very popular now as well. You get to determine the topology. When the device comes up, it gets the configuration, it actually determines its topology and starts connecting into all the elements. We do it in a zero-touch manner, so there’s no IPSec configuration, there are no pre-shared keys that you have to configure, and there are no certificates that you have to maintain.

There is no public key infrastructure that you actually have to maintain to do any of this. Right? This creates a very, very easy IP fabric for you to utilize across any of your endpoints, right? We encrypt the traffic across those endpoints, so building a large-scale IPSec mesh is no longer an issue with our solution. Zero-touch secure overlays, so we talked about that. Essentially, the overlay can be over any type of transport. We use Internet and MPLS, and 4G LTE is becoming more and more popular as well. We treat the transport as an IP fabric, so we just use any type of transport that we have visibility into.

Then, there is centralized monitoring, policy management and configuration. What we show here is called vManage. vManage is our centralized management plane. It’s the single pane of glass for the entire solution. From there, you actually do your configuration management. You do policy enforcement, all of that through vManage. This is a very interesting point about the solution: when you define a policy, you define it once and there are multiple endpoints where it’s enforced. Those endpoints can be on premises or in the cloud.
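
As a purely hypothetical illustration of the define-once, enforce-everywhere idea (this is not the actual vManage policy schema or API), a policy can be thought of as a single definition fanned out to many enforcement points:

    # Hypothetical, simplified model of "define once, enforce at many endpoints".
    # Field names and endpoint names are illustrative only.
    app_route_policy = {
        "name": "critical-apps-prefer-private",
        "match": {"applications": ["erp", "voice"]},
        "sla": {"loss_pct": 1, "latency_ms": 150, "jitter_ms": 30},
        "preferred_transport": "mpls",
        "fallback_transport": "internet",
    }

    endpoints = ["branch-nyc", "branch-lon", "dc-east", "aws-region-a-vedge"]

    def push_policy(policy, targets):
        # A real controller distributes this through its management plane;
        # here we only show the one-to-many fan-out.
        return {target: policy for target in targets}

    enforced = push_policy(app_route_policy, endpoints)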

Going back to the discussion we were having earlier, having the ability to maintain policies, whether on premises or in the cloud environment, is very, very simple with our solution. Let’s talk about how Viptela is actually looking at this solution. Really, the takeaway from this presentation is to manage the public cloud WAN like a branch. All the problems that we’ve highlighted are relevant problems in the branch today for a lot of the customers we’re talking to.

We think of the public cloud WAN just like a branch. In a public cloud environment, what you have on the LAN side is just compute infrastructure that’s being hosted by the public cloud provider. It’s just compute resources that you’re servicing. In a branch location, what you have is end-users, right? Typically, end-users in a branch location. The requirements don’t really change from the WAN perspective, right? You’re just servicing different end-users. You have compute infrastructure in a public cloud environment, and in a branch environment, you have end-users that are typically sitting in the branch location.

In a public cloud environment, just as in a branch location, you might have multiple connections, right? You have the ability to use a private MPLS connection or the internet. Branch customers would like to be able to prioritize traffic between those types of connections. They would like to be able to get visibility into which applications they would like to send over a private or internet connection. The same concept applies in a public cloud environment. Most public cloud providers today have the ability to terminate an internet connection and a private connection. For instance, Amazon calls this Direct Connect, right? Microsoft Azure calls it ExpressRoute.

Typically, the way the private connection happens is you go to your service provider, or whoever your last-mile provider is, and you basically ask them, “Can I build a private connection all the way into the public cloud provider?” In most cases, they have the ability to do that. When you have both the public and private connection into your public cloud region, now you can actually steer traffic in the right direction. You can get visibility. If there is a brownout situation on one of those connections, you can fail traffic over in a very easy manner.

You can maintain connectivity and determine performance metrics all the way into your public cloud region, right? Then come the points around security and segmentation. Security and segmentation are very, very important. In the branch, we see a lot of our customers separate lines of business, or separate end-users from IoT devices, or just prevent things like lateral movement. If one endpoint is breached, you don’t want that endpoint to be used as a host to hop into other environments that you don’t want to give it access to. Right?

VPN segmentation or isolation is very, very heavily utilized in the branch. Now, the same concept applies in a public cloud environment. You’d like to separate your compute infrastructure by lines of business. Having the ability to extend VPN-based segmentation all the way into a public cloud environment, and then separating out your compute infrastructure by line of business, becomes very, very interesting and very important. That’s another problem that you can solve with our solution.
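
As a toy sketch of that idea, segmentation by line of business can be modeled as VPN IDs that each carry their own set of reachable subnets; the IDs, names and prefixes below are made up for illustration:

    # Illustrative mapping of VPN segments to lines of business.
    segments = {
        10: {"line_of_business": "payments",    "cloud_subnets": ["10.10.1.0/24"]},
        20: {"line_of_business": "iot-devices", "cloud_subnets": ["10.20.1.0/24"]},
    }

    def reachable(segment_id: int, destination_subnet: str) -> bool:
        """Traffic stays inside its own segment unless an explicit policy says otherwise."""
        return destination_subnet in segments[segment_id]["cloud_subnets"]

    print(reachable(20, "10.10.1.0/24"))  # False: IoT cannot reach the payments subnet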

Finally, there’s the centralized configuration and policy management, which we alluded to earlier. The configuration and policies that you apply on your branch locations all get applied exactly the same way in your public cloud environment as well. Really, the summary is that you can extend a lot of the things that you are doing in the branch today into the public cloud environment, and you can do this in a very scalable fashion, right?

I’m just showing you two public cloud providers here, or rather one public cloud provider with two regions, but really you can have multiple regions and multiple VPCs within each region. You can have multiple public cloud providers as well. All the endpoints just get treated as part of your overlay. It doesn’t really matter where the resource resides. We just treat these as endpoints that become part of the Viptela overlay.

Let me go over an actual deployment scenario with Amazon Web Services. This actually comes from experience we’ve had deploying this exact scenario with one of our customers. In this scenario, we have a generically named AWS Region A, but AWS has multiple regions; it could be any region. Within the region, there’s an availability zone. There are multiple availability zones, I’m just showing you an example of a single availability zone here. Right?

Within that availability zone, you basically have a private subnet and a public subnet. Just to explain that a little bit further, within a VPC, a virtual private cloud instance, which is the name of a private network within AWS, you have the ability to define private and public subnets. The private subnet is where all your workloads, or your applications, reside, and the public subnet is where the Viptela vEdge instance would reside. We have an AMI, an Amazon Machine Image, that you can instantiate as a VM. Then, you essentially put it in the public subnet.
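
A hedged sketch of what instantiating that AMI into the public subnet could look like with boto3; the AMI ID, instance type and subnet ID are placeholders, and the actual image and sizing come from your AWS Marketplace subscription:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch the vEdge Cloud AMI (placeholder ImageId) into the public subnet.
    instance = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder vEdge Cloud image
        InstanceType="c4.large",           # sizing is deployment-specific
        MinCount=1, MaxCount=1,
        SubnetId="subnet-0aaa111",         # the public subnet
    )["Instances"][0]

    # A router forwards traffic not addressed to itself, so disable the
    # EC2 source/destination check on the instance.
    ec2.modify_instance_attribute(
        InstanceId=instance["InstanceId"],
        SourceDestCheck={"Value": False},
    )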

As far as connectivity into AWS, you could connect over Direct Connect, which comes in through what AWS calls a VGW, or virtual private gateway, or you can come in through the Internet, which comes in through what’s called an IGW, or internet gateway. The public subnet has access to both of these transports, right? You have a route towards the IGW and you have a route towards the VGW. The private subnet would have a route towards the public subnet, or what you could do is configure the VPC route table to ensure that all your private subnet traffic goes to your vEdge Cloud instance. Right?
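
Continuing that sketch, the route tables can be wired up roughly as described above; all resource IDs are placeholders:

    # Public subnet: default route out via the internet gateway (IGW);
    # Direct Connect / VPN routes arrive via the virtual private gateway (VGW).
    ec2.create_route(
        RouteTableId="rtb-public-0abc123",
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId="igw-0abc12345",
    )
    ec2.enable_vgw_route_propagation(
        RouteTableId="rtb-public-0abc123", GatewayId="vgw-0abc12345"
    )

    # Private subnet: point workload traffic at the vEdge instance's interface
    # so it rides the SD-WAN overlay back to the branches and datacenters.
    ec2.create_route(
        RouteTableId="rtb-private-0abc123",
        DestinationCidrBlock="0.0.0.0/0",
        NetworkInterfaceId="eni-0abc12345",  # the vEdge Cloud instance's ENI
    )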

In this case, what we are showing is connectivity between a vEdge deployed in a private data center and the public cloud environment over multiple connections. What this specific customer did was actually utilize both of these connections in active/active fashion, and they were doing steering of traffic based on application visibility across those different connections. They had defined performance SLAs, so a specific application should have a specific SLA, and if my primary connection, which could be the private connection, does not meet that SLA, switch that traffic over to the internet connection that, potentially, has the ability to meet that SLA in that timeframe. Right?
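
To illustrate the per-application SLA logic being described (a simplified stand-in, not Viptela’s actual application-aware routing implementation), path selection over the two connections could look like this:

    # Simplified per-application, SLA-based path selection.
    # Metrics would come from probes on each tunnel; values here are illustrative.
    SLA = {"voice": {"loss_pct": 1.0, "latency_ms": 150, "jitter_ms": 30}}

    measured = {
        "private":  {"loss_pct": 0.1, "latency_ms": 210, "jitter_ms": 5},   # brownout: latency too high
        "internet": {"loss_pct": 0.5, "latency_ms": 80,  "jitter_ms": 12},
    }

    def meets_sla(metrics, sla):
        return all(metrics[key] <= sla[key] for key in sla)

    def pick_path(app, preferred="private", fallback="internet"):
        """Use the preferred transport while it meets the app's SLA, else fail over."""
        sla = SLA[app]
        if meets_sla(measured[preferred], sla):
            return preferred
        return fallback if meets_sla(measured[fallback], sla) else preferred

    print(pick_path("voice"))  # -> "internet" while the private path violates the latency SLA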

You can kind of prevent these brownout situations: if your SP network is going through some sort of outage, you can then steer the traffic onto the other connection. As far as routing is concerned, we have the ability to essentially automatically advertise the routes that exist within the enterprise all the way into the vEdge Cloud instance that’s residing in the public cloud environment, and then you have static routes that you define there. All those get advertised over as well, so routing is done in a very dynamic fashion as well.

Once all the transport is set up, automatically the IPSec tunnels get established, the routing gets advertised between those two endpoints, and you have your connections done in a very easy fashion. This is something that actually we’ve deployed and it works and we have experience doing this.

To summarize the conversation and highlight some best practices for hybrid cloud connectivity: first, any-to-any connectivity to remove network chokepoints and deliver a better user experience. We talked about how you can deploy the Viptela solution on-premises in branch locations, data centers and campus environments, and now in a public cloud environment. No longer do you have to backhaul your traffic into the data center and build an IPSec tunnel from that datacenter into a public cloud region. That, potentially, can be a choke point. Your datacenter could be a chokepoint in this case.

Now that you have secure, any-to-any connectivity between your branch and the public cloud region, you can build this full mesh, prevent these sorts of chokepoints and essentially deliver a better user experience, no matter where your users are located. Second, visibility for application steering and resiliency. We can’t tell you how important this is: if you’re utilizing multiple connections going into your public cloud region, having the ability to determine which application goes over which connection, and also to maintain performance SLAs, is critical. What we do is we actually utilize BFD between our endpoints. We’re measuring loss, latency and jitter across those connections.

You can, based on those characteristics, determine which application should go over which transport. More than anything, you can get visibility into those connections, right? How is my link to my public cloud environment performing? Then finally, segmentation and isolation of workloads for security and compliance. This comes up quite a bit, and now, as you can imagine, with the Internet of Things, in your branch location you have more and more IP endpoints, right?

What you want to do is really isolate your endpoints by functional group. My Internet of Things devices should only have access to specific applications that I’m hosting within my public cloud environment. My other applications that are related to, perhaps, my credit card transactions or my banking applications should only have access to specific applications that reside, perhaps, within my private datacenter, or to specific resources within my public cloud environment.

Doing VPN-based segmentation across multiple lines of business is becoming very important as a matter of best practice, right? These are some of the things that you should take away from this conversation and look into when you’re actually deploying a hybrid cloud solution. With that, I’d like to conclude this webinar. I’d like to thank everybody for their time and for listening to us. Certainly reach out to us if you have any questions. All of what we’ve presented to you today is available with our solution today, so we’d love to work with you in going through our solution, doing some deployments and demoing our capabilities to you as well.

Definitely reach out. Everything that we’ve shown you here today is deployable today. Thanks.
