Enabling AWS and Azure Migrations with SD-WAN

Gartner has highlighted that the differences in network services (such as routing, security and application delivery) between internal data centers and IaaS environments cause major issues during cloud migrations. Networking teams face many challenges migrating their workloads to AWS and Azure. These include:

  • Extending secure connectivity from the enterprise to the cloud is operationally complex
  • Cloud traffic flows sub-optimally through the centralized data center/DMZ, instead of directly from the branch or from a regional exit – this affects performance of applications
  • Implementing a unified WAN policy on disparate pieces of cloud infrastructure is difficult

This webinar will dig deeper into the networking problems of cloud migrations and explain how Software-Defined WAN (SD-WAN) technology makes them faster and simpler for networking teams. The following topics will be covered:

  • Extending your WAN seamlessly to AWS and Azure
  • Implementing a unified WAN policy across the enterprise and cloud
  • Typical deployment scenarios for AWS and Azure
  • Real customer examples of public cloud migration


Rob: Hi everyone. My name is Rob McBride. I’m with marketing here at Viptela, and with me today is Ariful, one of our senior product managers. Today we’re really excited to continue along Future WAN 2017, the WAN Virtual Summit, with another exciting webinar.

Today both Ariful and I are going to give you some best practices on how you can extend your WAN into an infrastructure-as-a-service provider like AWS, with some specific examples in today’s presentation and, of course, leveraging SD‑WAN to help you accomplish that.

A couple of housekeeping items before I kick off the presentation. We’ve uploaded a number of attachments as well as links for you to leverage, so take a look at that portion of your viewing screen and download the presentation. There’s also a survey and a few other things for you to leverage after the webinar is done.

Please ask questions. Don’t be afraid. We’ll take questions as they come in to kind of keep this as interactive a session as possible and so I’ll pause Ariful every once in a while when he’s going through some of his technical talk here. And the last bit of notes to everybody here – please be aware that this session is being recorded and will be available for on‑demand access after our webinar has been completed. And so with that I’m going to go ahead and kick this webinar off.

So the first thing I want to talk about here – and this likely comes as no surprise to anybody attending today – is a couple of figures from Gartner. Cloud has actually doubled, right? There’s been 2X growth in the last five years in cloud services of any sort, and that’s pretty remarkable and speaks to why we’re here on this webinar today: to talk about how [unintelligible 00:01:52] and how you can mitigate that impact, utilizing new technologies to ensure that your cloud growth plan is executed optimally.

And with that, the other interesting note we found from Gartner is that it’s not really just infrastructure teams who are making inquiries about cloud, right? As the figure states, 80 percent of the inquiries are now coming from teams other than infrastructure.

What this really speaks to is the different kinds of cultures and organizational alignments happening around new strategies within IT units, and overall what enterprise businesses are trying to execute on. Everybody is coming together, realizing there are benefits to cloud, and wanting to get educated on this.

So I’m going to take a quick question right now, a question that popped in for a point of clarification. The question is whether we’re only talking about extending to AWS, or whether we’ll also cover Azure. We’re going to give some very pointed examples about Amazon Web Services, and along the way we’ll pepper in content, language and other things you’ll see, and then I’ll give you some examples related to Azure.

It’s a little bit difficult to cover both providers in a single webinar, but Ariful, in his portion, will address both, okay? By all means, please ask those questions as they come through.

Another piece of this – this is another piece of research from 451 Research and this really talks about adoption as well as triggers. As you can kind of see here from the survey that was done it’s really about asking what the state of hybrid cloud architecture adoptions are in organizations, and I don’t want to read through every little bit of it.

The main highlights of this survey really talked about what the triggers were for the adoption, right? There was a need for elasticity of workloads. There was a need to simplify, as well as to drive speed and agility in how operational workloads were actually handled within IT shops.

Getting assets, resources and business components closer to the actual consumers of these applications, whether business productivity applications or even consumer‑level applications, and of course adding a level of resiliency and availability to what they’re actually doing, okay, so a couple of high-level things.

My last bit of this part of the presentation: we were at Amazon re:Invent 2016 recently in Las Vegas – a huge show, 35,000 attendees – and we took advantage of that attendance to drive a survey. We wanted to understand, one, the education level of people looking at cloud, and their awareness of various aspects as they relate to the wide area network, so I want to share some of the figures that came out of that survey.

The first one comes as no surprise, right? You guys are all using multiple providers, whether PaaS providers or IaaS providers. The overwhelming response – 65 percent of our respondents – identified that they had multiple solution providers, for lack of a better word.

And out of that, we asked some questions related to security, pointed more specifically at segmentation, and what we found is that 83 percent of our respondents identified segmentation and security as one of their top concerns. And this makes sense when you look at it: you’re using multiple clouds and driving multiple different types of workloads, and these workloads have different priorities as well as different focuses in what you’re trying to accomplish.

And so a level of segmentation – and of course securing it from the variety of digital threats out there – obviously makes sense. But it actually points to the last piece of information I want to share from the survey: you think segmentation and security are your top concerns, and you’re utilizing multiple providers and multiple types of assets to accomplish your mission, but one of the big things respondents actually struggled with was how to maintain a consistent policy across all of those assets – whether cloud assets or WAN edge elements. That was actually kind of surprising. The struggle was real, if you will, for the people that participated in that survey.

The part of this presentation that I’ll kick off to Ariful here in a second is really going to talk about how you can solve that struggle around policy management, ubiquitous access and management, and all the automation aspects that SD‑WAN can deliver and offer you – and actually extend that into your cloud workloads. That’s really the important part for today.

So with that now we’re going to actually kind of kick it off and walk you through how you can ensure you’ve got optimal productivity, right, by extending your actual WAN into the cloud, and with that I’d like to give it over to Ariful.

Ariful: Thanks, Rob. The way we’re going to do this presentation is we’re actually going to start off kind of introducing some of the concepts a little bit. I’m pretty sure a lot of you are familiar with this but since we’re going to use this terminology quite a bit throughout the presentation I figured I’d just start by introducing a little bit of the concepts, spend a couple of slides – two or three slides max – and then we’re going to head right into sort of what is the solution, what are some of the examples?

I’m actually going to walk you through an actual customer that came to us and was going through a WAN transformation journey and then they started looking at including public cloud into that transformation, and we’re going to walk you through the deployment scenario for that specific customer.

And again, as Rob mentioned, I will use AWS as the reference here, but at the same time I’ll show you how it’s relevant to Azure – our solution works on both AWS and Azure today.

Now, Amazon has a lot of services, right, but the core network service that you want to know about is the Amazon Virtual Private Cloud (VPC), which is essentially your private infrastructure-as-a-service instance running in Amazon.

There is the concept of direct connect, which is really around how do you establish private connectivity into Amazon, and then there is Amazon elastic load balancing. So if you’re an application developer or you’re doing some sort of application development on a public cloud environment you’re really likely to think about a load balancing scheme, so that comes into the picture a little bit.

And finally a little bit of Amazon Route 53, which is actually a DNS service, right? So again, if you’re an application developer you’re probably looking at DNS at some point, right, so those are the four core aspects of a public cloud service that we’ll focus on today.

All right, so to kind of peel the onion a little bit we need to show you how is it all playing out on an Amazon infrastructure? So in the case of any public cloud provider you’ve got multiple regions, right? So for instance, in the case of Amazon you’ve got US West, US East; they just recently announced a region in Ohio, and then they have other regions internationally as well, and the same case for Azure.

In fact Azure actually has a few more regions than Amazon Web Services. The regions are essentially where your services reside. Now within a region you’ve got the concept of an availability zone, and an availability zone is really about redundancy. Typically within a region you’ve got multiple data centers, and each availability zone maps to one or more of those data centers. What most customers typically do is write or deploy their applications in a redundant manner, so you develop and deploy your applications across multiple availability zones to get that high availability.

There’s the concept of a virtual private cloud instance, which I mentioned. Essentially that’s where your applications reside. It’s your contained data center – your data center in the cloud, right – so that’s the VPC. A VPC can span multiple availability zones, so you get redundancy when you deploy your applications.

And then you have the concept of a subnet. A subnet resides within a VPC, and a subnet is essentially what you use to put your EC2 instances in, so you carve out a number of subnets within a VPC and that’s where your Amazon instances actually reside – a very similar concept in the case of Azure as well.
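To make the region/availability-zone/subnet layering concrete, here is a minimal sketch using Python's standard-library ipaddress module. The VPC CIDR and availability-zone names are hypothetical examples, not values from the webinar:

```python
import ipaddress

# Hypothetical VPC CIDR; carve one subnet per availability zone.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]

# Take the first len(azs) /24 subnets out of the /16.
subnets = list(vpc_cidr.subnets(new_prefix=24))[:len(azs)]

for az, subnet in zip(azs, subnets):
    print(f"{az}: {subnet}")
# us-east-1a: 10.0.0.0/24
# us-east-1b: 10.0.1.0/24
# us-east-1c: 10.0.2.0/24
```

Spreading the subnets (and the instances inside them) across zones is what gives the redundancy Ariful describes.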

Now let’s double-click a little bit more. Within a subnet you’ve got a routing table, so essentially what you’re determining at that point is how your instance is going to talk to other instances within the VPC, or how it should exit the VPC. You determine that through a routing table: you define rules, whether a default route or other types of routes.
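The routing-table behavior just described (the most specific matching route wins, with a default route as fallback) can be sketched as a toy longest-prefix-match lookup. The prefixes and gateway identifiers below are hypothetical examples:

```python
import ipaddress

# Toy route table: most-specific (longest) prefix wins, as in a VPC route table.
routes = {
    "10.0.0.0/16": "local",        # intra-VPC traffic stays on the VPC router
    "172.16.0.0/12": "vgw-1a2b3c", # on-prem prefixes go to the VPN gateway
    "0.0.0.0/0": "igw-4d5e6f",     # default route exits via the internet gateway
}

def next_hop(dst_ip: str) -> str:
    dst = ipaddress.ip_address(dst_ip)
    matches = [ipaddress.ip_network(p) for p in routes
               if dst in ipaddress.ip_network(p)]
    best = max(matches, key=lambda n: n.prefixlen)  # longest-prefix match
    return routes[str(best)]

print(next_hop("10.0.5.9"))    # local
print(next_hop("172.16.1.1"))  # vgw-1a2b3c
print(next_hop("8.8.8.8"))     # igw-4d5e6f
```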

And then there is a security aspect to this. When you deploy an instance in Amazon you’re probably going to look at an ACL, which is more of a subnet construct, or a security group, which is essentially an instance construct. So you can say that only specific types of traffic, only specific port numbers, only specific source and destination prefixes can access this application – those are all things you would define within a security group. It’s a very layered structure, and this is very common across all public cloud providers, but something you should be aware of even when we’re talking about the networking aspects of the solution.
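A security group's allow-list semantics (protocol, port, source prefix) can be illustrated with a small sketch; the rules below are hypothetical, not from any real deployment:

```python
import ipaddress

# Toy security-group evaluation: traffic is allowed only if some rule matches.
ingress_rules = [
    {"proto": "tcp", "port": 443, "source": "0.0.0.0/0"},    # HTTPS from anywhere
    {"proto": "tcp", "port": 22,  "source": "10.0.0.0/16"},  # SSH only from the VPC
]

def is_allowed(proto: str, port: int, src_ip: str) -> bool:
    src = ipaddress.ip_address(src_ip)
    return any(
        r["proto"] == proto
        and r["port"] == port
        and src in ipaddress.ip_network(r["source"])
        for r in ingress_rules
    )

print(is_allowed("tcp", 443, "203.0.113.7"))  # True
print(is_allowed("tcp", 22, "203.0.113.7"))   # False (SSH blocked from outside)
```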

All right, so again just to highlight a little bit more. Within the virtual private cloud instance what I’m showing you here is the ability to instantiate multiple subnets. In the case of Amazon they have a virtual router, so every subnet is able to talk to every other subnet utilizing this virtual router, right? You don’t actually have to build these routing tables yourself; Amazon does that for you, so any subnet within a VPC can talk to every other subnet. Now when you want to leave the VPC, that’s when you start building some custom route tables, and we’ll walk you through that as well.

All right, so now that we’ve introduced some of the principal concepts of the public cloud infrastructure, let’s talk about the common mechanisms to actually get to a public cloud. There are two very common mechanisms. The first – and you will hear this terminology during the presentation – is called Direct Connect. Amazon calls it Direct Connect; Azure calls it ExpressRoute, but essentially it boils down to SLA‑driven connectivity from your data center into a public cloud instance, and in most instances this is done from a data center.

You’re not going to build a Direct Connect from a branch into a public cloud instance, because there is a cost associated with Direct Connect, right? Now, there are a couple of ways you can get to Direct Connect. You can go through an MPLS carrier – AT&T and Verizon, for instance, offer that service in North America. AT&T calls it NetBond; Verizon calls it SCI.

Now there are other ways to get to the public cloud provider as well. You can get there through the internet, which is probably one of the most common mechanisms, right? And really the reason for looking at internet is that it gives you any‑to‑any connectivity. Getting from a branch to a public cloud instance, you probably want to go over the internet because it gives you a much more flexible mechanism to onboard that traffic as fast as possible to a public cloud provider.

Now there’s a third mechanism, and this is really for customers that already have devices in co‑location facilities. For instance, if you’re in an Equinix or a Telx facility, those facilities have direct connectivity into the public cloud providers. So if you’ve already got infrastructure in an Equinix facility, you can just hop into one of those public cloud providers from there.

That is just a specific use case if you’ve already built your infrastructure on those types of platforms. But these are all the mechanisms and you’ll kind of see why we’re talking about this because this plays into the connectivity aspect of getting into the public cloud.

Okay, one last slide on Direct Connect. Direct Connect essentially builds a private connection through an MPLS carrier. It can be point to point, or it can even be through your MPLS connection. So if you buy a VPN service from an MPLS carrier, your carrier is going to have an MPLS PE co‑located with Amazon at that point, and within that VRF you’ll have a VLAN that extends all the way into your VPC instance.

It lands in the VPC on what they call a private virtual interface, right? Very similar concepts exist in ExpressRoute. And if you want to use that Direct Connect to access public services – for instance Amazon S3 or other services that Amazon offers – that’s done through a public virtual interface as well. So again, you can connect not just to your VPC but also to public services offered on Amazon through a Direct Connect.

So just to show you the terminology differences between Azure and AWS – you’ve got the AWS VPC; Azure calls it VNET. AWS calls it an availability zone for redundancy; Azure calls it an availability set. There’s Direct Connect in AWS; Azure calls it ExpressRoute. Internet gateway is pretty much the same. A VGW, or VPN gateway – that’s pretty much the same as well, and then both services have an elastic load balancer. This is just to make sure everybody understands that the terminology is pretty much equivalent, so everything we talk about in the presentation is relevant to both providers.
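As a quick reference, the terminology mapping from this slide can be captured in a small lookup table (names as used in the talk):

```python
# AWS-to-Azure terminology, as summarized in the presentation.
AWS_TO_AZURE = {
    "VPC": "VNET",
    "Availability Zone": "Availability Set",
    "Direct Connect": "ExpressRoute",
    "Internet Gateway": "Internet Gateway",
    "VPN Gateway (VGW)": "VPN Gateway",
    "Elastic Load Balancing": "Load Balancer",
}

print(AWS_TO_AZURE["Direct Connect"])  # ExpressRoute
```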

All right, so now let’s talk about hybrid cloud today and what are some of the challenges in actually deploying hybrid cloud today. So if you’re doing hybrid cloud today most likely you have a number of regions, so you could have a couple of regions within the US, maybe one region in Europe, and within each one of these regions you’ve got an infrastructure service instance.

And the way most customers typically connect into these instances is through point-to-point IPSec tunnels. One example: we were talking to a large bank that had 150 VPC instances across multiple regions, and for every single one of those infrastructure-as-a-service instances they would stand up a VPN gateway and build point-to-point IPSec tunnels back into their data center.

It can be a pretty complex network just to build that connectivity into the public cloud, and this is one of the pain points customers have mentioned: scale, right? How can I maintain all these point-to-point connections? How can I scale better? How can I do this in a more automated fashion? That’s one of the first challenges we hear from our customers.
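To put the scale pain in rough numbers: with the hub-and-spoke approach the bank described, the tunnel count grows linearly with VPCs per data center, and any-to-any connectivity grows quadratically. A back-of-the-envelope sketch (the 150-VPC figure is from the bank example; the formulas are generic):

```python
# Point-to-point IPsec tunnel counts for N cloud instances.
def hub_and_spoke_tunnels(n_vpcs: int, n_datacenters: int = 1) -> int:
    # One tunnel per (VPC, data center) pair.
    return n_vpcs * n_datacenters

def full_mesh_tunnels(n_sites: int) -> int:
    # Every site pairs with every other site: n * (n - 1) / 2.
    return n_sites * (n_sites - 1) // 2

print(hub_and_spoke_tunnels(150))  # 150 tunnels just to reach one data center
print(full_mesh_tunnels(150))      # 11175 tunnels for any-to-any connectivity
```

Manually configuring and maintaining tunnel counts at that scale is exactly the operational burden an automated fabric is meant to remove.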

Second – isolation and security. Rob mentioned this and it showed up in the survey results, right? So if you’ve got some sort of segmentation or you’re doing some sort of data center segmentation, today how do you extend that into your public cloud environment? How do you ensure that the workloads that are residing within a VPC can be segmented such that only certain business units within your organization can actually access those workloads so that you have no lateral movement of traffic between these workloads, right? Or if one of those workloads is compromised in one business unit it doesn’t render the other workloads actually compromised as well, so isolation and security become important and we’ll talk about how we can solve that problem.
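The segmentation requirement described here – no lateral movement between workloads in different business units – boils down to a membership check, as in the toy sketch below. The workload names and segment IDs are hypothetical:

```python
# Toy segmentation check: workloads can communicate only within their own
# VPN segment, so a compromise in one business unit cannot move laterally.
segment_of = {
    "hr-app":      "vpn-10",
    "hr-db":       "vpn-10",
    "finance-app": "vpn-20",
}

def can_communicate(a: str, b: str) -> bool:
    # Traffic is permitted only when both workloads share a segment.
    return segment_of[a] == segment_of[b]

print(can_communicate("hr-app", "hr-db"))        # True  (same segment)
print(can_communicate("hr-app", "finance-app"))  # False (no lateral movement)
```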

Resilient access, right? Just like in the branch, a lot of our customers are moving towards multiple carriers, multiple transports. How do you establish those same principles in a public cloud environment? If I have a Direct Connect going into my public cloud instance and there is a brownout situation on my MPLS carrier, how do I ensure I still have connectivity to that public cloud instance, whether through internet or some other mechanism? I need access to my workloads if there is a failure in one of my transport providers, so resilient access is important.

Inter‑region peering – a lot of the customers we talk to don’t just deploy an instance in one region; they’re typically doing geographic redundancy, right? This is a redundancy mechanism as well as proximity. You want to make sure that the workloads you spin up are closest to the consumers of those workloads, so you can onboard that traffic as fast as possible.

And when you start spinning up your instances in multiple regions, how do you connect those regions together? That becomes a problem. Most cloud providers solve the intra-region problem – they’ll give you ways to connect your instances within a specific region – but when you start spanning multiple regions, it’s not a problem they solve, so it’s a problem the customer has to solve.

Centralized monitoring and management – again, a problem statement we heard at the AWS re:Invent conference, right? If you’ve got devices sitting in your branch and in your data center, and now you’re bringing workloads in Amazon and Azure back into your enterprise environment, how do you ensure the policies you have in your branch and data center are applicable to the instances residing in your public cloud environment? You need a centralized management and monitoring solution to do that.

And lastly, application visibility and steering, right? There is really no good way of doing per-application steering from Amazon or Azure today. They don’t offer a DPI capability, so typically customers rely on an appliance sitting there that gives you more visibility and is able to take care of some of these capabilities.

All right, so having highlighted some of the challenges, let’s do a quick introduction of the Viptela solution. What we’re offering at Viptela is distributed forwarding elements – our CPEs – which can be virtual or physical instances. Physical instances typically go in branch, campus, data center and small home-office locations; virtual instances go in cloud environments, right?

So it’s a completely distributed forwarding plane with centralized control, and the control elements typically reside with a public cloud provider, or they can reside in your own environment. If you feel the controllers cannot reside in a public cloud instance and you need to host them yourself, you can make that happen as well.

So now you’ve got centralized control and distributed forwarding elements, and on top of this platform you can build your services, right? You can build your VPN segmentation. You can build your security policies. You can do QoS at the branch. So really what we’re giving you is the ability to build a secure IP fabric across all your connections – whether in the branch, data center or cloud – and centrally manage that.

So our solution and our recommendation, what we’ve been talking to customers, it’s very simple. What we’re saying is extend the Viptela secure fabric all the way into the cloud and the way we want to do this is essentially instantiating our instance in the public cloud provider. So in the case of Amazon we have an Amazon machine image; in the case of Azure we have a VHD image. You can instantiate these images in those public cloud provider locations, right, in the infrastructure that you instantiated there.

And then on top of that you have all the existing capabilities of an SD‑WAN router. You have hybrid transport, you have segmentation, application visibility and steering, right? Note that you can really drive the topology here, right? We don’t have a centralized gateway that you have to go through to get to a public cloud provider.

We’re saying you put in the devices wherever you need them. You need a device in the branch, you need a device in the data center, you need a device in the public cloud instance; you define the topology. You define the connectivity model. There is no centralized gateway for you to hop into first before getting into a public cloud provider. This is one of the things that our customers really like about our solution. They really have a lot of flexibility in building out any type of topology they want.

All right, so a lot of information so far on our solution and what the cloud providers do. Now I’m going to talk about some customer deployment scenarios. This is an example of a customer that actually came to us, and it was really an interesting conversation with this specific customer. For this customer, the application development team came to us and said, “Hey, we’re going to roll out new applications and we know our network, our WAN, is not ready for this. How do we make this change? How do we transform our WAN so that, as we develop the applications we want and transform our business, the network is not a hindrance?”

So a lot of times vendors like ourselves talk to the networking team. We hardly ever talk to the application team, but now it’s actually changing. The application team realizes that in order for them to actually deploy a new application they really need to go through a network transformation journey. So in this case what this specific customer was doing, they had all their branches connected through an MPLS network that would connect into a data center, so it was a hub and spoke model. From the data center they would have a direct connect going into Amazon. So in this specific instance the customer was using Amazon.

So the model was hub and spoke, and then from the data center they would go into Amazon. What they really wanted to do was, number one, augment with public transport, so all the other locations would have internet connectivity. They wanted to maintain compliance while adopting hybrid cloud, and they wanted to maintain control of routing into AWS VPCs. Just because I’m adopting an instance in a public cloud provider doesn’t mean I should give up control of my routing, right? I should have control over what traffic goes where and how it gets there.

Extend data center segmentation – so if I’m doing segmentation with VXLAN or whatever overlay technology I’m using in my data center, I should be able to extend that all the way into my public cloud provider. And finally, remove the hub-and-spoke nature of connectivity into AWS, because it causes problems – the data center becomes a choke point for you.

All right, so let’s take a look at the after scenario. What did they do and how it actually helped them, right? So what they did was in the branches they had a combination of internet and MPLS – they still wanted to keep MPLS, right? Really, our story is not to tell our customers to get rid of MPLS; MPLS serves a purpose. MPLS gives you SLA‑driven connectivity and if you require that by all means, there’s no problem in using an MPLS connection.

But the augmentation of MPLS with internet – that’s really where a lot of the benefits come in. So this customer ended up doing internet and MPLS across branch and data center locations, and they spun up our vEdge Cloud instances – a virtual network function, delivered as an AMI – in their Amazon regions. They had a couple of regions, and they instantiated our AMI in those regions.

They had multiple ways to get to those regions. They had the internet gateway and they had a VGW or VPN gateway, and typically if you’re going over direct connect you’re going to end up going over your VPN gateway, right? So now really the benefit here is the branch can connect directly into the cloud or you can have the option of the branch going through the data center and then getting to the cloud.

There are a couple of reasons why you might want to go through the data center. If you have a security posture where a specific application needs to go through a security service – maybe a firewall – then you might want to send the traffic through the data center. We have the ability to do that: you can steer the traffic through the data center, and then the traffic makes its way to the public cloud instance.
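The steering decision described here – send traffic that needs a firewall service via the data center, send everything else direct to cloud – can be sketched as a simple per-application policy lookup. The application names and path labels below are hypothetical:

```python
# Toy per-application steering policy, in the spirit of this deployment.
POLICY = {
    "payments-app": "via-datacenter",   # must pass the firewall service
    "telemetry":    "direct-to-cloud",  # branch data goes straight to the cloud
}

def steer(app: str) -> str:
    # Default path: directly to the public cloud instance over internet.
    return POLICY.get(app, "direct-to-cloud")

print(steer("payments-app"))  # via-datacenter
print(steer("unknown-app"))   # direct-to-cloud
```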

And the connectivity between branch directly to the public cloud really helped this specific customer because they could onboard that traffic as fast as possible. And one of the interesting things this customer was doing was they were actually sending a lot of the data from their branch locations. They were actually monitoring customers and seeing the experience of their customers. They were sending this information to a data lake in the public cloud and they were actually running analytics on that data.

Why would that data need to go through my data center and then go to the public cloud? I can send that data directly to my public cloud over an internet service. That was a huge benefit to this specific customer. So what you see here is the customer has driven the topology, they have been able to augment transport so they got a bandwidth boost and they’re able to connect very easily into the public cloud service utilizing this technology, right?

So really, the end result here was bandwidth augmentation, more control and operational simplicity. They didn’t have to build those point-to-point IPSec tunnels; our solution took care of the secure IP fabric across all types of transport, from the public cloud into the branch and data center. I’m going to pause for a moment here before I get into the next slide. I think, Rob, you’ve got a couple of questions you want to take up?

Rob: Yeah; you throw a lot of information out there in kind of a [concise] format, and sometimes it’s a little difficult for me to stop you so I can help some folks who are asking questions, right? Guys, keep asking questions; there are a number of them right now, and the reason I wanted to pause is because I think it’s good for us to address them now, as they’re relevant to the customer deployment you’re talking about.

Ariful: Absolutely.

Rob: The first one I want to go through here is very specific to your drawing – in this drawing, is the IGW connection for the public AWS subnets, and is the VGW access for the private AWS subnets?

Ariful: So in the drawing is the IGW connection in the public – okay. So there’s a couple of ways you can do this. In fact, if you just hold onto this question for a second I actually will go through – there’s a way for you to actually – you can deploy the vEdge cloud instance within a VPC where the application resides, or you can create something called a Hub VPC, and a Hub VPC allows you to share your IGW and VGW connections across multiple application VPCs or spoke VPCs. So that’s a deployment mechanism maybe that will address your question so give me a couple of slides and I will address that question.

Rob: Yeah; well, let’s hold on. I’ll remind you to make sure you’re addressing that point, but I do believe your content has got that covered. Part of this customer deployment – I believe some element of it was solving the problem of [unintelligible 00:30:17]: how do I connect multiple VPCs across AWS regions? So how can you actually solve that is really the person’s question here.

Ariful: Absolutely; so the fact that we actually have vEdge cloud instance residing within an AWS region, and you’ve got that across multiple regions, allows you to essentially build a secure fabric or secure connectivity across any one of your regions, right?

So now the fact that you’ve built that connectivity utilizing our solution you don’t have to worry about building point to point IPSec tunnels. We take care of that for you. And so now you have connectivity between all your regions utilizing that secure overlay, right, and that overlay can be over internet, it can be over MPLS.

Rob: Perfect; so I’m going to take one question and then – listen, keep going guys, right, and I’ll pause Ariful as we move and transition forward, but there’s one question here that actually points to a previous slide, so I’ll pop it up so you can actually see it in a second. The question is: slide 13 indicates that the [unintelligible 00:31:17] will provide a turnkey, fully managed network for the customer’s internal connectivity as well as connectivity to the public cloud provider.

I think really what the person is asking here is: can we give a point of clarity on that, and do we actually provide full management of the network, including configuration, deployment, change and incident management? I believe the answer is that we provide elements of that from slide 13. We do provide our own cloud services as it relates to hosting the management components of our SD‑WAN solution, but we also partner with a number of different providers. Verizon is one notable one that can provide managed services, and that’s where consolidated incident management, configuration and policy change are actually implemented; they provide that as a kind of single source, or single touch, to address your entire enterprise WAN needs. To address the question specifically – we as Viptela don’t provide that as a full‑blown managed service; it’s [overtalking].

Ariful: Exactly. To kind of reiterate that – we are not a managed service provider; we are a technology provider, right, so what we do is give you the technology at the branch, at the data center and it can reside in a public cloud instance, and we allow you to build connectivity across any of those endpoints and we have a centralized controller that takes care of policy management, configuration management, visibility aspects of it, so we are a technology provider.

Rob: Absolutely; and so I’m going to take one more question before I ask you to continue with your presentation. It helps address some potential ambiguity with other things that are coming up. So obviously Direct Connect has got its Azure counterpart, ExpressRoute, right; the question is specific to Direct Connect but it’s relevant to both. What happens to my AWS Direct Connect after I roll out Viptela?

Ariful: Good question. So really what we’ve seen customers do is they actually still maintain their Direct Connect. As I mentioned, there are a number of ways to get to AWS, right? You can get to AWS through the internet; that means through their internet gateway. There is a way to get to AWS through a private connection, which is your Direct Connect. A lot of our customers, what they end up doing is they still keep their Direct Connect, right? They have specific SLAs or bandwidth requirements – for instance, any sort of private connection going into AWS also gives you a bandwidth guarantee: I have 100 Megs of guaranteed bandwidth going into AWS, or I have 1 Gig of guaranteed bandwidth going into AWS.

So a lot of our customers, when they make this transformation, still keep the Direct Connect because it has those benefits, but really the internet augmentation gives you that any‑to‑any connectivity. So to answer your question: you can keep your Direct Connect to connect your data center all the way to the public cloud, and your branches can connect directly into the public cloud or they can come in through the data center. There are multiple ways to get there from the branch.

All right, so I’m going to peel the onion a little bit more. We’ve talked about this specific customer and how they ended up deploying their solution, but we really didn’t talk about redundancy. If you’re a customer, you’ve deployed your application in a redundant fashion in a data center and you’ve made sure that the gateways are redundant – so if you have a data center gateway, it’s redundant, right?

So how do you do this in a public cloud instance? What I’m showing you here is one mechanism to do this, and in fact this will address a couple of questions that came up. So we’ve got this idea of using a spoke VPC; spoke VPCs are where your applications reside. We don’t reside in that spoke VPC – you deploy your applications within that spoke VPC.

And then there’s the concept of a Hub VPC, and a Hub VPC is shared across multiple spokes. Again, it’s a layered architecture: spokes are where your applications reside; the hub is where your gateway resides, and the gateway VPC is where your internet connectivity comes in. It’s where your direct connect comes in as well.

So in talking to a lot of customers they’ve repeatedly asked us, “How do we share our direct connect across multiple spoke VPCs or rather multiple VPCs in a given region?” If you deploy a vEdge instance in every single one of your application VPCs it becomes hard to share that direct connect, so with this model you can actually share that direct connect across multiple VPCs. Again, your internet connectivity can also be shared across multiple VPCs.

And with this architecture redundancy is very important, right? So when you actually deploy an instance, or your own appliance, in AWS, you need to take care of redundancy for that appliance. And what I mean by this is AWS has many components: the IGW component, the VGW component, and this concept of a virtual router that resides within the VPC; it’s very similar in Azure as well.

They have built-in redundancy for all those components, so when you introduce a new component into AWS you have to manage redundancy for it. As you can imagine, with this type of architecture you’ve taken care of redundancy as well: what happens is the spoke VPCs actually build an overlay to the Hub VPC where the vEdge gateways reside, and the only way to build this overlay today in Amazon, because it’s a completely [unintelligible 00:36:55] environment, is to use a standard IPSec tunnel.

So you build an IPSec tunnel from the VGW that resides within the spoke VPC into the Hub VPC vEdge gateway, so you have that overlay, but you run BGP over that overlay so you can automatically learn routes from your spoke VPC into your vEdge gateway. And now, if there’s a failure of a vEdge gateway, BGP will automatically trigger the spoke VPC’s VGW to say: okay, that specific gateway router has gone down, I should now revert the traffic to the redundant vEdge gateway, right?

So this type of architecture takes care of redundancy in a very slick fashion. We’re using dynamic routing to actually learn routes, and we’re using dynamic routing to figure out if the gateway even exists, right, so this is one of the big advantages. You can share your transport connections and you can leverage all the AWS components for redundancy today. The failover time is actually very fast utilizing this solution; it really depends on your BGP timers.
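To make the failover behavior concrete, here is a small illustrative sketch (not Viptela code) of the selection logic described above: the spoke VPC’s VGW keeps traffic on the preferred hub gateway while its BGP session is up, and reverts to the redundant gateway when the session drops. Gateway names and preference values are invented for the example.

```python
# Hypothetical sketch of BGP-driven gateway failover in the hub-and-spoke model.
# Each peering records the hub vEdge gateway's name, a routing preference,
# and the current BGP session state as seen from the spoke VPC's VGW.

def select_active_gateway(peerings):
    """Return the highest-preference gateway whose BGP session is Established."""
    up = [p for p in peerings if p["bgp_state"] == "Established"]
    if not up:
        return None  # no reachable gateway until a session re-establishes
    return max(up, key=lambda p: p["preference"])["name"]

peerings = [
    {"name": "vedge-gw-1", "preference": 200, "bgp_state": "Established"},
    {"name": "vedge-gw-2", "preference": 100, "bgp_state": "Established"},
]
print(select_active_gateway(peerings))   # primary while both sessions are up
peerings[0]["bgp_state"] = "Idle"        # primary vEdge gateway fails
print(select_active_gateway(peerings))   # traffic reverts to the redundant gateway
```

The point of the sketch is that no API call is needed in this model: the routing protocol itself detects the dead gateway, within its timer intervals, and withdraws its routes.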

One thing you cannot do with this approach is avoid segmenting at the VPC boundary: the segmentation you end up deploying has to be at the VPC level, and that might be okay. You deploy applications within a specific VPC and you segment such that if you have other applications that need access by other users, you just deploy a different VPC for them. The segmentation is done at the VPC level.

Now the second approach to solving this problem is putting a vEdge gateway within an application VPC or co‑hosting it with your application instances. Now, if you go down this path what you end up doing is putting a vEdge gateway in every single one of your availability zones, right, so you have two vEdge gateways or our VPN gateway in every single one of your availability zones.

Now, one of the pros of this approach is you no longer have that point-to-point IPSec tunnel that you have to establish between the spoke and the Hub VPC. You can just make use of our overlay technology to build the full-mesh connectivity. But there are a number of things customers have told us about this type of deployment: “I can’t share my IGW connectivity. I can’t share my VGW connectivity – more importantly, my VGW or my Direct Connect connectivity – across multiple VPCs.”

Also, from a redundancy perspective there are a couple of things you have to take care of. If your vEdge gateway goes away in a specific availability zone, you need to be able to program the VPC route table to say that that vEdge gateway has gone away and I now need to set my next hop to my redundant vEdge gateway. So there’s an API call you have to make from that vEdge gateway into AWS to make sure that route change is made, right? This is an important concept; again, it’s just one of the approaches to solving this problem.
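As a rough illustration of that API call: in AWS this maps to the EC2 `ReplaceRoute` action (e.g. `boto3.client("ec2").replace_route(**params)`), which repoints a route table entry at the standby vEdge’s network interface. The sketch below only builds the request parameters so it can be shown offline; the route table ID, CIDR and ENI ID are made-up placeholders, not values from the talk.

```python
# Hypothetical sketch of the failover API call described above.
# A watchdog detecting a dead vEdge in one availability zone would issue
# EC2 ReplaceRoute with parameters like these to repoint the default route
# at the redundant vEdge's elastic network interface (ENI).

def build_failover_route(route_table_id, dest_cidr, standby_eni):
    """Parameters for EC2 ReplaceRoute: point dest_cidr at the standby vEdge's ENI."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": dest_cidr,
        "NetworkInterfaceId": standby_eni,  # next hop becomes the redundant vEdge
    }

# Placeholder identifiers for illustration only.
params = build_failover_route("rtb-0abc", "0.0.0.0/0", "eni-standby-vedge")
print(params["NetworkInterfaceId"])  # eni-standby-vedge
```

This is the key operational difference from the hub-and-spoke model: here the route change is imperative (an API call you must make and monitor) rather than a side effect of dynamic routing.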

How about a load balancer? Just to quickly highlight: if you’ve got a load balancer – let’s take the example of a hub and spoke VPC – you would deploy a load balancer just like you would today. You put in your application load balancer – the way Amazon recommends you do this is to deploy the application load balancer per availability zone and do cross-zone load balancing. No problems there, right?

So the concepts and deployment models we just introduced do not disrupt the way you do load balancing, because this is an important construct when you deploy your application. That’s just to make sure everybody understands that the application load balancer, and how it’s deployed, does not change, especially with the hub and spoke model that we talked about.

So from a pricing model perspective, really the way we price the solution is: you have the cost of instantiating an instance in a public cloud provider, and then there are two models to pay for the software. Most public cloud providers have a bring-your-own-license model, or there’s an annual or hourly subscription model that you can pay through a marketplace – for instance, the Amazon Marketplace has an hourly or annual subscription model – or you can buy the license directly from Viptela.

What we’re going with as a go-to-market model to start off is a BYOL model – bring your own license. You buy the software from us, you instantiate that software on a public cloud instance and you pay for that public cloud instance. That is the operational cost of your vEdge cloud; there is nothing else. Your controllers and everything else in the Viptela ecosystem come together with that software subscription license, right? You don’t have to pay for the controllers separately. So buy the software license from us, instantiate it in Amazon and pay for that Amazon instance, or instantiate it in Azure and pay for that Azure instance. That’s the go-to-market model.

So final slide – some recommendations as far as instances. We can do up to 2Gbps of traffic utilizing our vEdge cloud, or vEdge VNF, right, so I’ve just provided this as a reference. I’m not going to go through every single one of these, but at sub-100Megs you can go with the t2.medium instance in AWS; if you’re going up to 500Megs you can go with the c3.large or c4.large instance.

Most of the instances give you at least three ENIs, or elastic network interfaces, and that’s more than sufficient to instantiate our solution, right? So this is just a quick recommendation if you’re thinking about deploying this: what do I need to do? Reference this table and it will give you all that information.

All right, so some concluding remarks to wrap up the presentation. What are the Viptela benefits? Any-to-any connectivity between branch, data center and public cloud across multiple regions, so it’s really a better cloud onramp. What does that mean to you as an enterprise? A better user experience, right? Reduced operational complexity, which means a lower TCO, a lower total cost of ownership. You have a very simple solution to deploy and a very simple solution to manage, and this will help you reduce your overall TCO.

We give you visibility for applications and resiliency. The fact that we have an instance running in Amazon means you can get NetFlow records, you can use DPI to do application recognition – all of the capabilities you have over [unintelligible 00:43:40] exist within the cloud, so now you can take advantage of those capabilities.

And finally, from a security perspective: segmentation and isolation for security and compliance – I cannot highlight that enough. That really helps you build segmentation all the way from the data center and branch into the cloud, right? So those are the closing remarks, and we’re going to take some questions now, Rob. This is our final slide, so I’m going to hand it over to Rob here to start looking at some of the questions.

Rob: Thanks, Ariful. Yeah; there are quite a number of questions here. We’ve got a couple of minutes so I think we can address a few of them, and I’m hoping some of these questions are already answered by what you talked about here. One I want to touch on, and that we didn’t really pick on, is more of a channel partner question. The question is: do we have to go through Verizon, or any carrier for that matter, to get access to the Viptela solution? Most customers want to use a best-of-breed approach for connectivity, and they can’t really do that if they have to go to Verizon or any other carrier – so a quick question to you on that.

Ariful: Absolutely. Verizon offers a managed service with our solution, right? So if you are a customer that relies on managed services, certainly you can consume the Viptela service through that managed service. But absolutely – you can buy the Viptela solution through a channel or directly from us and deploy it yourself if you are a DIY shop: you do your network infrastructure yourself, you want to be independent, you want to use best-of-breed technology. Absolutely; we agree with that, right?

If you’re looking at a best-of-breed WAN technology you can certainly go with us; if you’re looking at other solutions, certainly you can deploy those yourself as well. So in summary, you can consume the Viptela service through a managed service, if that’s your path to managing a network, or you can buy the technology directly from us or through a channel.

Rob: Thanks. I’m going to rapid-fire a couple of these and give some quick answers. One is about [unintelligible 00:45:46] resources for deploying vEdge in a multi-VPC environment. I’d say continuously look at the documentation area on our website, as well as – if you’re a customer – docs at viptela.com as your point of access for resources, and there are a lot of updates there on what to do with vEdge in a multi-VPC or cloud environment.

There’s a question related to whether you get more application visibility by putting vEdge cloud devices [unintelligible 00:46:16]. I think the simple answer there is yes. As a result of the DPI capabilities that vEdge brings, in conjunction with the AWS virtual routers and gateways, you now have the ability to attach certain kinds of policies utilizing the vEdge devices – and correct me if I’m wrong here, Ariful – vEdge sits there as kind of the default router for all the workloads behind it, and as a result we’re looking into those packets, doing inspection, and we can actually attach policies to them.

Ariful: That’s absolutely right.

Rob: Really giving you visibility into the performance of it, as well as being able to make [unintelligible 00:46:54] decisions associated with it.

Ariful: That’s absolutely right.

Rob: We’re not going to get to all the questions, folks, but I’m going to go through maybe one or two more here. There’s one question that I thought was kind of interesting and it speaks to some of our value prop: moving enterprise business-critical applications to the cloud obviously increases latency, jitter and packet drop risk levels, so what does the overall WAN overlay solution we provide in the cloud do – how do we help mitigate that?

Ariful: That’s a great question, right? So again, how do you mitigate latency? How do you mitigate packet loss? It really boils down to the ability to onramp that traffic as fast as possible. Again, instead of hair-pinning your traffic through a data center, you can send that traffic directly from your branch to the public cloud through the internet. The internet gives you any-to-any connectivity but it doesn’t give you a bandwidth guarantee. But again, if your goal is to get there as fast as possible, you can certainly do that over the internet.

And to address the question of packet loss and other transport failure mechanisms, those are taken care of with our solution. We have the ability to monitor that transport, figure out the characteristics of that transport and then steer the traffic on a different path if the transport does not meet the requirements of that application, right? That’s the value proposition of our solution.
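The monitor-and-steer behavior just described can be sketched roughly as follows: measure each transport, discard paths that violate the application’s loss/latency policy, and prefer the best of what remains. The thresholds, transport names, and tie-breaking rule here are invented for illustration, not Viptela’s actual policy engine.

```python
# Illustrative sketch of SLA-driven path steering: drop transports that
# violate the application's policy, then pick the lowest-latency survivor.

APP_SLA = {"max_loss_pct": 1.0, "max_latency_ms": 150}  # assumed example policy

def pick_path(transports, sla=APP_SLA):
    ok = [t for t in transports
          if t["loss_pct"] <= sla["max_loss_pct"]
          and t["latency_ms"] <= sla["max_latency_ms"]]
    if not ok:
        # Nothing meets the SLA: fall back to the least-lossy path (best effort).
        return min(transports, key=lambda t: t["loss_pct"])["name"]
    return min(ok, key=lambda t: t["latency_ms"])["name"]

transports = [
    {"name": "internet", "loss_pct": 0.2, "latency_ms": 40},
    {"name": "mpls", "loss_pct": 0.0, "latency_ms": 70},
]
print(pick_path(transports))      # internet: both meet the SLA, internet is faster
transports[0]["loss_pct"] = 3.0   # internet degrades past the loss threshold
print(pick_path(transports))      # mpls
```

In practice the transport measurements come from continuous probing of the overlay tunnels rather than static numbers, but the steering decision has this shape.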

Rob: Perfect; and I’m going to close out the webinar now, guys. Thank you very much for attending. As you see on the screen, we have a number of other sessions we’d like you to attend as part of the Future WAN ’17 SD-WAN Virtual Summit. As for the questions we didn’t answer, we’ll post a transcript on our website and address those questions offline, and this webinar will be available for on-demand access afterwards. And with that, thank you, Ariful, for sitting here with me and presenting some great content, and thank you everybody for attending.

Ariful: Thanks, everybody.
