Best Practices for Extending the WAN to AWS

Customers are increasing their production use of IaaS offerings from Amazon Web Services (AWS), Microsoft Azure, Google Cloud and others. Extending enterprise workloads into the Cloud presents a new set of challenges with regard to managing your WAN connectivity to ensure optimal service, performance, and security.

Providing end-to-end security and connectivity for your workloads is no easy feat, but it is even more complicated when trying to seamlessly extend the WAN into the Cloud.
To address these challenges, customers are deploying Software-Defined WAN (SD-WAN) as the underlying technology to seamlessly provide secure connectivity for applications, unified communications and branches.

During this webinar, learn best-practice guidelines for extending your WAN to the Cloud with SD-WAN. Using production customer deployment examples, the team will walk you through how to deploy Viptela vEdge within the Cloud or as a Virtual Network Function (VNF) on your CPE. With these approaches, customers have been able to effortlessly secure access from any endpoint to the Cloud, branch and data center.

Webinar Highlights

  • Extending your WAN to the cloud
  • Real Use Cases
  • Customer Deployment with Scalability

Download Slides Here

Presenters

Rob McBride Viptela Product Management and Marketing

15+ years of experience as a senior systems engineer and senior product manager working across software and business lines.

Ariful Huq Viptela Product Management and Marketing

11+ years of experience spanning product management, sales, network architecture, service provider networks and operations.

Transcript

Female Voice: Good morning, good afternoon and good evening, depending on where you're coming in from. We are certainly glad everyone is joining us. Today's webinar is on the best practices for extending the WAN into AWS, with SD-WAN, of course. Our presenters today are Rob McBride, who brings 15+ years of experience as a senior systems engineer and a senior product manager working across software and business lines, and Ariful Huq, with 11+ years of experience spanning product management, sales, network architecture, service provider networks and operations. We'll go ahead and let the guys take it from here. We encourage you to participate with the Q&A box as well as the chat window within the platform. Thanks so much.

Rob McBride: Thank you. I am Rob McBride and with me is Ariful. Today, we want to walk through some high-level best practices for how you can extend your WAN into Amazon Web Services, utilizing it as IaaS, specifically with SD-WAN. By all means, reach out to us. You see we have our Twitter handles here, and at the end of the session you will see some other resources that we have outlined for post-event information.

For those of you who may be new to joining us in our webinars: in our last webinar, I walked through a quick SD-WAN 101 and highlighted some of the basics associated with this trending technology. To tee that up, I talked about various industry trends that are potentially impacting you and how you are looking at network architectures across cloud, mobile and social, and I put together a story for each of those three.

Today's purpose, actually, is going to be focused specifically on the cloud, and even more specifically on Amazon Web Services and how you can extend your WAN into it. The reason this impacts you is that there are a lot of hybrid cloud strategies, or just overall hybrid WAN architectures, that you may be evaluating. Cloud is very, very important to how you are looking at the impact of bandwidth, cost, or just, in general, elastic scale for your overall enterprise.

Again, this helps those who may be new to this, and I will be very brief. Really quickly, the basics of what SD-WAN is. Very simply, SD-WAN is an approach to architecting the WAN that utilizes software-defined networking to optimize as well as control your traffic between your various locations. When you look at the actual mechanism, how it is actually done, at its simplest it is really about creating an encrypted overlay, with a controller infrastructure, that sits on top of an existing WAN transport infrastructure, whether that is MPLS, broadband, LTE, etc.

Then, we take a look at some of the bucketized values or benefits that are inherent to SD-WAN in general: monitoring and operations or analytics, the policy infrastructure, the actual forwarding infrastructure in and of itself, as well as the underlying transport-independent fabric. Before we move into the specifics of cloud, this is just to drive home a little of what SD-WAN gives you, for those new to our webinar series.

There are four buckets here from values and benefits. One is about operational simplicity, one is about attaching to a hybrid-based WAN kind of strategy, getting deep application and cloud awareness, which we are going to go into a little bit today, and then actually having a secure and routed infrastructure.

I want to go through each and every one of these buckets, but the first thing I want to do is kick off a poll. Just go ahead and answer these questions before we move on. It helps us understand where you are coming from, and it also helps you see what some of your peers in the industry are actually doing.

With that, enough of the high-level marketing spiel. Really, the value of these webinar sessions is having Ariful here: he brings a lot of deep product experience to talk about Viptela's vEdge platform and its integration and utilization with AWS as your cloud provider. Ariful, take it away.

Ariful Huq: Thanks, Rob. Thanks everybody for joining our webinar today. I am going to kick off the discussion by just kind of laying down how things are done today, how you do hybrid cloud today. All of this comes from us talking to customers. These are customers that are essentially going towards a WAN transformation, so they have approached us because they are thinking about an SD-WAN-like technology in their branch, and at the same time, these are the customers that are also deploying hybrid clouds.

We are working together to figure this out: there is a WAN transformation piece of it, and at the same time you're moving your workloads to a hybrid environment where some of your workloads reside on-premise and some reside in the cloud. How do you do this today? Can SD-WAN actually help you? What are your challenges in this space? I will start off by talking about the existing deployment scenario, how things are done today, and some of the challenges in this space.

With most public cloud providers, you can typically set up a point-to-point IPSec tunnel into their VPN gateway from your on-premise data center. You can do this over a direct connect as well. If you're going over a public transport, you're typically doing a point-to-point IPSec tunnel; if you're going over some sort of private transport, you're doing it over a direct connect. We've even seen customers tell us that even when they're using a direct connect, a private circuit, they still end up encrypting that traffic. This is very much the case from a compliance perspective.

We've heard a lot of our customers say that in order to maintain compliance, it is no longer sufficient to just send traffic over a private circuit and assume that it is okay; you still need to encrypt that traffic. Invariably, you end up setting up point-to-point IPSec tunnels, one per site pair, and when you start doing that, there are a lot of scale challenges.
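
To put rough numbers on that scale problem, here is a quick back-of-the-envelope sketch (ours, not from the webinar); the site counts are illustrative.

```python
# Tunnel-count arithmetic behind the scale challenge: hub-and-spoke needs
# one IPSec tunnel per branch, while any-to-any (full mesh) connectivity
# needs one tunnel per pair of sites.

def hub_and_spoke_tunnels(sites: int) -> int:
    """One tunnel from each branch to the hub / VPN gateway."""
    return sites - 1

def full_mesh_tunnels(sites: int) -> int:
    """One tunnel per pair of sites: n * (n - 1) / 2."""
    return sites * (sites - 1) // 2

for n in (10, 50, 200):
    print(f"{n:4d} sites: hub-and-spoke={hub_and_spoke_tunnels(n):4d}, "
          f"full mesh={full_mesh_tunnels(n):6d}")
# 200 sites -> 199 tunnels hub-and-spoke, 19,900 tunnels full mesh
```

Every one of those tunnels is configuration, key management and monitoring you own, which is exactly the burden an SD-WAN overlay with centralized controllers is meant to absorb.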

We're going to close one of the polls and look at the results, and then we'll open up another poll; we've got two more coming. Okay, back to the challenges. From a scale perspective, there are certainly challenges there. Isolation and security: if you are doing segmentation in your data center because of different workloads, you want to isolate those workloads all the way into your private cloud environment. How do you end up doing that? That is another challenge that we see in this space.

Resilient access. It is possible to use the internet as well as a direct connect, but how do you ensure continuity in a brownout situation, when one of your connections is not performing well? And how do you define which application goes over which type of transport? Some of those things are not possible today, so resilient access becomes very important.

Inter-region peering. In most cases, our customers do not just set up a single public cloud region. They may be setting up multiple regions, so when you have multiple regions, how do you ensure region-to-region connectivity in a very seamless fashion? Finally, centralized monitoring and management, so the on-premises solution that you have, whether it’s in your data center or whether it’s in your branch, how is it the same as what you deploy in your public cloud environment?

You want to maintain the same policies. You want to maintain the same management and a single pane of glass for the devices you manage on-premise as well as in the cloud. Application visibility and steering: I alluded to that a little bit in the previous points.

Rob McBride: Ariful, I want to pause here for a little bit on the polls, just to let everybody know how you responded. It's really interesting: on the question regarding transport importance, an overwhelming number of attendees, about 74% of the poll, identified that as important from a transport perspective. From a policy aspect, as far as lack of visibility or control, about 40% of respondents identified that they found it challenging using native tools, and a couple of respondents were unaware whether it was challenging or not.

From a virtual router aspect, for some respondents it seemed to still be an open question whether virtual routers were being utilized in their infrastructure. I just wanted to pass that along before we move on.

Ariful Huq: No, that’s really good feedback. I think the resiliency point is really important, and we’re going to cover that in the deployment scenarios. We’ll actually go through a customer deployment scenario, what they had in their network before and what they did after. You’ll see it will help you answer some of the questions that just came up.

Just a brief introduction as far as Viptela and what we're really doing in this space. If you understand the Viptela solution, we have distributed forwarding elements, or CPEs, and we have centralized controllers; that's part of our SD-WAN solution. We are transport independent. What I'm showing you here is essentially the ability to deploy Viptela vEdge routers in the branch (they can be in data centers as well), and the ability to spin up a virtual instance of our software sitting in an AWS environment.

Essentially, what you're doing is taking the WAN overlay that you built between your branches and your data center over a transport-independent medium and extending that all the way into your public cloud environment; actually, all the way into a virtual private cloud instance that sits in your public cloud environment. In doing this, you're able to securely bring your traffic into that environment, do end-to-end segmentation, and make use of all the capabilities you get by utilizing an SD-WAN solution in the branch: things like application visibility and steering across multiple transports for resiliency, which, as was just highlighted, is very important. You're bringing those capabilities into the public cloud environment, and centralized configuration management is certainly part of that.

With that, I'll get into a little more detail on the public cloud environment and how you typically get into one; then we'll walk through some basics of AWS, and I'll highlight how an actual deployment scenario worked for us.

There are a few ways you can actually get to a public cloud provider. Option one, which is the direct connect through a partner. Typically, your partner means your service provider, so you might go to your service provider and ask them, “Hey, can I get a Layer 2 circuit all the way into AWS,” and you are typically doing this from a data center into the AWS environment. You are not typically building the direct connect from your remote branch locations. You’re typically doing it from a data center. You have a direct connect from your data center into AWS, and this can be over an MPLS VPN or just a Layer 2 kind of VPN service. This is one of the best ways to build a guaranteed sort of connection between your on-premise data center and your public cloud environment.

Option two is if you happen to have your devices in co-location facilities; an example of a co-lo facility here would be Equinix. Your network infrastructure, or even your data center, could be located in an Equinix facility. If you have that, then you can still connect into the AWS environment through a direct connect, and your remote locations might connect in over the internet. In this specific instance, you can attract traffic to the co-lo facility, and from there go into your infrastructure service provider. Think of it as more of a regionalized connectivity into your infrastructure service provider.

Lastly, there's purely going over the internet: essentially, wherever your traffic may be, you basically come into AWS through the internet. These are the three primary mechanisms that most enterprises use; in fact, they are essentially the only ways you get into a public cloud provider.

Having said that, let's walk into a little bit of Amazon's routing infrastructure, because it's important to understand this as part of the deployment scenario I will be explaining. In Amazon, think of a virtual private cloud instance, a VPC, as a mini data center. You are essentially going to end up spinning up your compute infrastructure within this virtual private cloud instance, and there's a network that is part of the VPC; you have a specific subnet block assigned to it. Within this VPC, you have multiple availability zones.

You can have multiple regions in AWS; in the US, you have US West and US East. Within US West or US East, you can spin up a VPC, and within each region you have multiple availability zones. Availability zones are essentially data centers that exist within a region. You can create redundancy such that if any one of the Amazon data centers has a failure or an issue, you still don't lose connectivity to your workload. That's an availability zone. This is just setting the stage for how this routing infrastructure works. Also part of AWS, you have a VPC router that allows you to connect between all your instances. This is an important concept for our explanation.
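
To make that layout concrete, here is a minimal sketch using the AWS boto3 SDK; the region, CIDR blocks and availability-zone names are illustrative assumptions, not details from the webinar.

```python
# Minimal boto3 sketch of the layout described above: one VPC (the "mini
# data center") with a subnet in each of two availability zones, so the
# failure of a single Amazon data center does not take down the workload.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# The VPC gets one overall subnet block assigned to it.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve per-AZ subnets out of the VPC block for redundancy.
for i, az in enumerate(("us-west-2a", "us-west-2b")):
    subnet = ec2.create_subnet(
        VpcId=vpc_id,
        CidrBlock=f"10.0.{i}.0/24",
        AvailabilityZone=az,
    )
    print(az, subnet["Subnet"]["SubnetId"])
```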

Next slide: direct connect. To explain a little more about how direct connect works: essentially, you build a Layer 2 or Layer 3 connection to your MPLS provider or your Layer 2 provider, and you build a private connection all the way into Amazon. Your connection terminates on a VGW, or VPN gateway (Amazon's virtual private gateway). In the case of direct connect, what you end up doing is building a BGP session between your on-prem router and your AWS router.

Then, over that BGP session, you're exchanging enough information, just the underlying information, to get into the AWS VPC instance. You can also come over this direct connect to access public services. For instance, Amazon differentiates between a private VIF, or private virtual interface, which is connected to your own VPC, and a public virtual interface, which connects into Amazon services like Amazon S3, Lambda, and all the other services Amazon offers. This is just setting the stage as far as direct connect; you need to understand this concept for what follows.
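
For the VGW side of that picture, a hedged boto3 sketch follows: it creates a virtual private gateway, attaches it to a VPC, and enables propagation of the BGP-learned routes into a route table. The IDs are placeholders; the BGP session itself is configured on the direct connect virtual interface and on your on-prem router, which this sketch does not cover.

```python
# Hedged sketch: terminate the direct connect on a VGW and let the routes
# learned over BGP propagate into the VPC route table. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Create the virtual private gateway (Amazon's VGW / VPN gateway).
vgw = ec2.create_vpn_gateway(Type="ipsec.1", AmazonSideAsn=64512)
vgw_id = vgw["VpnGateway"]["VpnGatewayId"]

# Attach it to the VPC that the direct connect should reach.
ec2.attach_vpn_gateway(VpcId="vpc-0123example", VpnGatewayId=vgw_id)

# Propagate BGP-learned on-prem routes into the VPC route table.
ec2.enable_vgw_route_propagation(
    RouteTableId="rtb-0123example", GatewayId=vgw_id
)
```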

Now, let's actually look at a customer we were talking to: what they went through, the transformation they were doing in their WAN, and how this all played out with their hybrid cloud strategy. In this specific instance, a customer of ours was primarily utilizing MPLS in their WAN for both branch and data center locations, in a mostly hub-and-spoke topology. The branch locations went into the data center, and from the data center they had a direct connect into Amazon. All their traffic from the branch would come in through the data center and go into AWS.

Here's what the customer wanted. They wanted to augment with public transport in all their locations: the branch locations were going to have a combination of MPLS and public internet, and at the same time the data centers would also have MPLS and internet connections. As part of their hybrid cloud strategy, they wanted to maintain compliance while adopting this type of transformation. They wanted to maintain control over routing, which is very, very important; I'll explain this in the next slide. They wanted to ensure that they have control over routing all the way into the VPC, so they can steer traffic the way it's required for their specific deployment scenario. They also wanted to extend segmentation from their on-prem data center all the way into the AWS instances. They have different departments, and this is pretty much the case with all of our enterprise customers: they have multiple departments, and each department has its own requirements and its own set of applications.

You want to isolate those, so that, for instance, an attack or compromise in one of those departments cannot move into the other parts of your network. Segmentation and isolation, and moving that all the way to your public cloud instance, is very, very important. They also wanted to remove the hub-and-spoke nature of the traffic, because the data center starts to become a bottleneck. If all your traffic moving toward your hybrid cloud is coming in through a data center, that essentially ends up being a bit of a bottleneck.

How can we build a more fluid network where your traffic can go directly from the branch into your public cloud instance? Those were some of the requirements, and this is what the customer ended up deploying. What they did was take our vEdge Cloud software, which is available as an AMI, an Amazon Machine Image. They spun up the AMI in their own VPC instance, in front of their workloads, where the applications are residing. That specific vEdge Cloud instance had two transports: one mapped to the Amazon IGW, which is their internet gateway, and another mapped to the VGW, the VPN gateway, which is where your direct connect actually comes in.
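
A hedged boto3 sketch of that launch follows: one instance from the router AMI with two transport interfaces, device 0 in the subnet that routes to the IGW (and given an elastic IP) and device 1 in the subnet that routes to the VGW. The AMI and subnet IDs are placeholders.

```python
# Hedged sketch of launching the SD-WAN router AMI with two transports.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

resp = ec2.run_instances(
    ImageId="ami-0123example",    # placeholder for the vEdge Cloud AMI
    InstanceType="c3.xlarge",     # sized for two transports (see guidance below)
    MinCount=1, MaxCount=1,
    NetworkInterfaces=[
        {"DeviceIndex": 0, "SubnetId": "subnet-igw-example"},  # internet transport
        {"DeviceIndex": 1, "SubnetId": "subnet-vgw-example"},  # direct connect transport
    ],
)
instance = resp["Instances"][0]

# Find the internet-facing interface and give it a stable elastic IP.
igw_nic = next(
    n for n in instance["NetworkInterfaces"]
    if n["Attachment"]["DeviceIndex"] == 0
)
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=igw_nic["NetworkInterfaceId"],
)
```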

On the branch and data center side, every branch and data center had connectivity to both internet and MPLS. With this type of deployment, from the data center towards the Amazon VPC, they were able to use multiple transports. The green lines indicate traffic originating from the data center into Amazon: traffic went over both the direct connect and the internet connection through the IGW. At the vEdge Cloud instance, they were able to make the decision as to which traffic should take which path.

With this type of topology, you don't actually depend on Amazon's own routing table to make those decisions. What you end up doing is treating your direct connect as an underlay connection; it's just transport. You treat your internet connection as an underlay, too. On top of that, you have an overlay, and all your routing actually happens over the overlay. This is extremely important, because now you can control how traffic goes from your data center into the public cloud, how it goes from the branch to the data center, or how it should actually flow; perhaps some of your traffic should go to your data center first and then eventually to the public cloud instance.

The red line indicates traffic going from the branch. In some instances, traffic can go directly into the AWS instance; you don't have to treat your data center as a bottleneck anymore. With this type of deployment, your branches may get a better user experience, because they are going directly from the branch, over your secure overlay, all the way into your private data center that exists in Amazon.

In some cases, they didn't want traffic to go from the branch, hit the data center first, and then make its way into AWS. That's entirely possible with this type of architecture: you have built an overlay, and you can define the routing topology whichever way you want. In fact, there are some advantages to this approach as well. In the case of direct connect, there are limits to how many routes you can advertise between your data center and your private cloud instance; Amazon limits it to 100 routes. In this type of topology, your LAN-side routing, the subnets that you want to expose between your data center, your branches and your public cloud instance, all happens over the overlay. Amazon does not see any of it.

The end result, really, is bandwidth augmentation: I'm getting much more bandwidth at my branch and at my data center, all going towards the public cloud instance. I have more control over the infrastructure I maintain: more control from a routing perspective, and more control from a compliance perspective. And finally, the Viptela solution is really all about operational simplicity: being able to deploy this in a very simple manner where I don't have to worry about IPSec configuration, about scale, or about all the other things that come with maintaining point-to-point IPSec tunnels. You don't have to worry about any of that in our solution, and you bring all that flexibility into this type of architecture.

We're going to move on to the actual layout of the VPC for this specific customer. You have the vEdge Cloud instance here within the VPC. Subnet 1 belongs to one of your transports; in this case, Subnet 1 belongs to the internet gateway, and that's why you have an elastic IP (EIP) on it. That's one of your transports, and then you have another transport that belongs to the VGW; that one belongs to your direct connect going back into your data center.

The third subnet belongs to your management IP. This is for management access to this VH cloud instance. You certainly want to have in him valve mechanism to get to this VH cloud instance that’s residing within your VPC. Again, you want to assign that an elastic IP. Then, finally you have a fourth interface which is what we call a service interface. This is the interface that faces your workload that sits within the data center. The equivalent of an on-premise deployment, this would be your LAN-side interface. Subnet 4 is your LAN-side interface. It resides within the private subnet, and then all of your workloads end up here. What I have shown here is Subnet 5 and 6 are within those subnets. As you increase your workloads, you just instantiate more subnets, and then all you do is you point these subnets, Subnets 5 and 6, toward that service interface. They just have a default route that points towards that service interface. As you can see, I have pointed out Subnet 5 and 6 route cable within AWS. All you do is you have a default route that points towards the ENI or the Ethernet network interface. In Amazon, they call that the network interface for your AMI. Then, there’s a default route going from your transport interface, your subnet one, towards your internet gateway. There’s a default route going from your transport interface, too, towards your direct connect, and that’s it. That’s what you use to set up your entire routing. You don’t have to maintain any type of routing within Amazon. The routing control is entirely within your control or the Viptela solution. It offers quite a bit of flexibility as far as what you can deploy and what you can do.

Just to give you some guidance on instances, and what types of instances you need for these kinds of deployments: I've highlighted a use case where we have two transports. In the case where you have just one transport, you need three network interfaces: one for transport, one for management, one for service. If your throughput requirement is less than 100 Mbps, a c3.large instance is more than sufficient. If you need resiliency, which the poll showed is very, very important, then you end up with two transport interfaces, which means you need four ENIs, or four network interfaces, for the AMI. In that case, you'd go with the c3.xlarge instance. If you need higher capacity, we would also recommend the c3.xlarge instance. Essentially, our rule of thumb is that with two vCPUs you can get about 100 Mbps of performance; if you need more than 100 Mbps, you need more than two vCPUs. It's a linear scale: as you add more vCPUs, you increase your performance. The c3.xlarge has four vCPUs, so it allows you to have higher capacity.
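
That rule of thumb fits in a small helper; the mapping below is our illustrative reading of the guidance, not official sizing.

```python
def pick_instance(throughput_mbps: float, transports: int) -> str:
    """Sketch of the sizing rule: roughly 100 Mbps per two vCPUs, scaling
    linearly, with one ENI per transport plus management and service ENIs."""
    enis = transports + 2  # one management ENI + one service ENI
    if throughput_mbps <= 100 and enis <= 3:
        return "c3.large"   # 2 vCPUs (~100 Mbps), supports up to 3 ENIs
    return "c3.xlarge"      # 4 vCPUs, 4 ENIs, tested to roughly 1 Gbps

print(pick_instance(80, 1))   # -> c3.large
print(pick_instance(80, 2))   # -> c3.xlarge (resiliency needs 4 ENIs)
print(pick_instance(500, 2))  # -> c3.xlarge
```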

That's the instance guidance. Having said all of that, to sum this up from a deployment and best-practices perspective: as you've seen, we have come up with a topology where you can build any-to-any connectivity between your branch, your data center and your public cloud workloads. Today, we spoke a lot about AWS, but this does not have to be limited to AWS; we have a solution that works with Microsoft Azure today, and we are working on a solution for Google Cloud as well. Multiple cloud providers can be part of this type of solution. Visibility for application steering and resiliency is very, very important. Again, highlighting the fact that you have complete control over your infrastructure, whether it's sitting on-premise in your data center or in the public cloud, you can really use SD-WAN application steering to determine the performance of links and steer applications the right way; all of that comes under your control.

Finally, and most importantly, segmentation and isolation of workloads. Being able to extend the segmentation that you have within your data center all the way into a public cloud provider to maintain security and compliance is absolutely important. We feel this is a best practice. With that, I'm going to start looking at the questions, and we can start to answer some of them.

Rob McBride: We have a couple of questions here. One we have already answered through chat, and you've already addressed it. Just to remind everybody: post-event, please visit our website. You'll see a lot of on-demand webinars that address things like where AWS, Azure and others fit into our strategy. To directly answer and reiterate your response here: Viptela's vEdge solution is applicable to both Amazon and Azure today, as we showed, and some others are coming in the future.

The next question that's been asked, and I'm personally curious about it too, is: what's the max throughput that the vEdge can achieve? For the user who put that question in, a point of clarity: I'm going to assume it means the max throughput we can achieve with the c3.xlarge instance.

Ariful Huq: What we've done so far with the c3.xlarge, with four vCPUs, is we can get to about 1 Gbps of performance. So to answer the question: can we get 800 Mbps to 1 Gbps? Yes, that's entirely possible.

Rob McBride: Thanks. There's another question about autoscaling. I can think of a couple of use cases around that, but just as a general path: are we planning, or do we already have, support for c4 instances? And do we support autoscaling, that is, increasing the vCPUs directly inside of the instance itself to increase capacity?

Ariful Huq: Absolutely. We can definitely support c4 instances. As far as autoscaling is concerned, I can really see the need for autoscaling from an application perspective. What we've seen in the virtual router realm is adding vCPUs. Typically, you can pre-allocate vCPUs: you can say, I'm going to pre-allocate four vCPUs but end up only using two of them, and if I need to use the other two, I can definitely use them. That's certainly within the realm of possibilities with our solution.

Rob McBride: I'd encourage that user to reach out to us directly. You can see we put our contact information up there, but if there's a specific use case or solution that you're trying to target and we couldn't answer, by all means reach out, and maybe there's something we can work out on that.

Here's a question which, in general, comes down to this: we always talk about the big three, AWS, Azure and GCP. Do we have support for any of the other providers? Dimension Data is specifically mentioned here. Can you answer that?

Ariful Huq: Absolutely. I am not quite sure what type of cloud platform Dimension Data uses, but we operate on top of any type of hypervisor technology out there. For instance, we operate on top of KVM, on top of Hyper-V, and on top of VMware. Certainly, if Dimension Data is utilizing any of those hypervisor technologies, there should be no problem for us to interoperate in that type of environment.

Rob McBride: Perfect. The last question is centered around cost. Normally, we don't discuss our MSRP values directly online. I would say just watch in the coming weeks or so; you'll hopefully see some things relevant from a marketplace perspective. And reach out to us directly if you would like a Bring Your Own License instance. To that questioner: contact us directly and we can help you on that specifically.

We learned quite a bit today. This has been pretty good, part of a nice little series. Actually, I'm going to pause; I missed a couple of other questions. Yes, vEdge Cloud is available worldwide from an Amazon perspective, depending on where AWS is available. We do offer it on top of Citrix Xen as well; in fact, Amazon uses a modified version of Xen, so we do have support for Xen as a hypervisor technology, too.

We heard today about utilizing vEdge inside of a cloud such as AWS, the benefits and value of centralized policy, and increased route scale in some respects: extending what you normally have in your data centers, elastically scaling that across the cloud itself, and showcasing how you actually should deploy. As a closer, I appreciate everybody's participation here and thank Ariful for presenting today.

Go to our website. Again, we make these webinars available on-demand, so you can utilize this post-event. We have a whole bunch of other assets. Very importantly, you can always hear our customers talking about us in various public forums and in some podcasts that we have spun up for you. If you want a live demo, or for any other specific questions around pricing or architectural discussions, contact us directly, either through our webpage, through the company Twitter handle, or by reaching out directly to either Ariful or myself; you can see our contact information there, both email and Twitter.

With that, I’d like to thank Ariful and everybody for being on the webinar today and I wish everybody a great day. Thank you.

Ariful Huq: Thank you.

Watch Now