Viptela is now part of Cisco.

SD-WAN Migration Best Practices

In SD-WAN migrations, how do enterprises identify pilot sites? And how do they plan large-scale rollouts? There are important planning considerations in both small- and large-scale migrations. This webinar explains how different enterprises have executed their rollouts and the lessons learned.

The Viptela Software-Defined WAN (SD-WAN) platform delivers an agile, cloud-ready network infrastructure. The major benefit of SD-WAN technology is a single overlay architecture with centralized policy and management. Viptela SD-WAN has been deployed by some of the largest banks, retailers, conglomerates, healthcare providers, and insurance companies. As more enterprises deploy SD-WAN at large scale, network administrators are eager to know the best practices for a step-wise migration to an SD-WAN solution.

SD-WAN Webinar Highlights

  • Case studies on how different enterprises have approached their SD-WAN migration
  • Planning your SD-WAN migration with the first five pilot sites
  • Executing a large-scale migration of tens, hundreds, or thousands of sites based on templates and centralized policy
  • Role of the single overlay architecture over MPLS, Broadband & LTE in a seamless WAN transformation
  • Live demo of enabling SD-WAN

Download Slides Here


Ramesh Prabagaran VP of Product Management and Marketing, Viptela

Ramesh Prabagaran has a track record of bringing disruptive and innovative networking products to market focused on carriers and enterprises. Most recently at Juniper Networks, he was a senior product line manager establishing the product vision for enterprise and datacenter routing products, and WAN-focused solutions for Fortune 100 companies.

David Klebanov Director of Technical Marketing, Viptela

David has more than 15 years of diverse industry experience architecting and deploying complex network environments. David sets strategic direction for industry-leading network platforms, which transform the world of wide area communications for enterprises and service providers alike.


Ramesh: … for many of our customers. You need to think through the overall process of transitioning from the current state to the end state, which is essentially what all the vendors in the space portray. There are a lot of steps there. Our customers – we have about 80 and counting, at least 25 of them in the Fortune 500 – many of those customers have gone through a certain series of steps. So what we thought we would do in this webinar is encapsulate and capture many of the best practices from those deployments and talk about how we can transition from the network of today to the SD-WAN architecture and the network of tomorrow.

So let’s start off by talking about some of the basics. Since many attending this webinar are already familiar, I’ll skip through the pleasantries of what the value prop of SD-WAN is, or what the value prop of the Viptela technology is. Let’s talk about how you go from the network of today to an SD-WAN architecture.

The first fundamental step that you will have to take – and it’s an important decision to make – is whether you want to go hybrid or whether you want to transition from one type of technology to another. When I say hybrid, it’s essentially a customer using a combination of private MPLS, public broadband, and 4G LTE, for various reasons – cost, accessibility, bandwidth and also redundancy – or, do you want to go all broadband and maybe augment that with 4G LTE?

Really, there’s no right answer to this question. It is purely a function of risk appetite, of the availability of circuits at a certain location, and of what kind of SLA and what kind of redundancy you want in your architecture. We’ve certainly seen some of the highly regulated industries move toward broadband. We have also seen less regulated industries still stay with a hybrid option of MPLS and broadband. So that’s the first step.

The second step is really whether you want to replace an existing device or coexist with it. What I mean by that is, today the wide area typically involves a CPE connected into a private MPLS and, optionally, another CPE or the same CPE connected into the internet. So the question to ask is whether you want to leave that infrastructure intact; whether you want to augment that infrastructure with an additional offering; or whether you want to have an overlay on top of the existing infrastructure.

So David will walk through multiple deployment options whereby you can have a CPE at the location continue to function the way it is, and then augment that offering with a Viptela solution, an overlay solution; or you can replace the existing one and go fully to the overlay solution. Once again, there are no right or wrong answers here. It is purely a question of what kind of fallback you want, what kind of risk you want to take, and what kind of value you get and how quickly you get it.

The third important question, along the lines of what I just spoke about, is whether you want to go all in or whether you want to ease into the solution. Typically, if you have a network that’s about 50 sites, I think you can make a fairly easy decision: Do you want to convert the entire infrastructure in one shot, or do you want to ease in? When it starts to get to 1000 sites, 2000 sites, 10,000 sites and so forth, then you really want to have a phased deployment model. This is not new. This is how networks have been built over the last decade.
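The crawl-walk-fly phasing described here can be sketched as simple wave planning. This is a hypothetical illustration with invented pacing numbers, not Viptela tooling:

```python
def plan_waves(total_sites, pilot=5, walk_rate=25, walk_days=20):
    """Split a migration into crawl/walk/fly waves.

    pilot:     sites converted first to build confidence (the "crawl")
    walk_rate: sites per day once templates and policy are proven
    walk_days: how long to stay at the walk pace before scaling up
    """
    waves = [("crawl", min(pilot, total_sites))]
    remaining = total_sites - waves[0][1]
    walk = min(remaining, walk_rate * walk_days)
    if walk:
        waves.append(("walk", walk))
        remaining -= walk
    if remaining:
        waves.append(("fly", remaining))  # full-rate, fully automated rollout
    return waves

print(plan_waves(2000))  # [('crawl', 5), ('walk', 500), ('fly', 1495)]
```

A 50-site network collapses to one or two waves, which matches the point that small networks can reasonably convert in one shot.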

There is going to be a period of time when you have an SD-WAN portion of the network coexisting with a traditional network – which is still fundamentally okay. The question to ask is, how do you make sure the two islands talk to each other? Do you still want to have connectivity across all the sites? Do you want to have a bridge in between that connects the two islands? Those are the two fundamental questions to ask.

Closely associated with all of this is how you are consuming the service. Are you consuming this whole SD-WAN technology in a do-it-yourself model, where you have expertise in house in architecture, operations, and the associated planning; or do you want to consume this as a managed service from, let’s say, a [unintelligible 00:05:21] a carrier? Both options are equally good. Once again, it depends on how you’re organizationally aligned, how you want to consume the technology, and what kind of personnel you want to have in house.

As you go through the circuit piece, the decision on whether the CPE remains as is or gets replaced, and how long you want to run two networks, the big question we’ve seen many customers ask is: now, what do I do about [unintelligible 00:05:53]? The trigger point for SD-WAN is, essentially, a high-bandwidth application that requires a refresh of the circuit. Hence customers are looking for SD-WAN solutions, or they say, I need to optimize my wide area for cloud – whether it’s for [unintelligible] service or for SaaS.

Especially with highly transactional applications, like Office 365, designating certain points in the network as cloud exit points becomes really, really important. David will walk through the various architectures and the various options you have in order to adopt that. But fundamentally, this is an important question you need to answer before you embark on this journey as well.

Once you go through all of this, then there is a big question of organizational alignment. Not to scare anyone here, but organizational alignment here is, we are fundamentally moving functionality around. What used to be traditionally inside a CPE now has moved to a few other locations in the network. So to give you an example, all SD-WAN solutions fundamentally need to provide the networking piece and also some element of network security; which means there needs to be a [unintelligible 00:07:21] function with respect to security peers. So there needs to be some level of organizational alignment there.

There are also policy functions that cut across multiple organizations within the company. More importantly, as you start to develop capabilities that SD-WAN provides, you want to make use of some of the automation capabilities, some of the [unintelligible 00:07:50] capabilities, and so forth. So what we’ve seen customers do is embark on [unintelligible] ops or cloud ops, but have some kind of network operations team. It could be a team of one, or it could be a team of people; but the responsibility of that team is essentially to highly automate the infrastructure.

Automation is not just configuration automation but rather the workflow. I can get visibility around loss, latency, jitter, and so forth from the underlying network; I also get application visibility. How do I tie all these things together, beyond what just the native network can provide? The SD-WAN portion of the network gives you enough intelligence. How do I tie this now to, maybe, a security appliance? How do I tie this to the data center piece and maintain segmentation into it?

So all of those really tie back into some level of automation. Whether it’s in the form of scripting, or in the form of a tool on top that can automate the workflow, becomes a pretty important decision. Alignment is required to be successful. We’ve seen this as a common theme across, again, the 80-plus customers that have deployed this technology, especially as we start to talk about large deployments.
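The kind of workflow automation being described – reading per-tunnel loss, latency, and jitter and acting on it – can be sketched as below. The stats feed, field names, and SLA thresholds are all invented for illustration; this is not a Viptela/vManage API:

```python
# Hypothetical per-tunnel telemetry: loss in %, latency and jitter in ms.
def tunnels_meeting_sla(tunnel_stats, max_loss=1.0, max_latency=150, max_jitter=30):
    """Return the transports whose measured characteristics meet the SLA,
    i.e. the candidates an automation workflow could steer traffic onto."""
    return [
        name for name, s in tunnel_stats.items()
        if s["loss"] <= max_loss
        and s["latency"] <= max_latency
        and s["jitter"] <= max_jitter
    ]

stats = {
    "mpls":      {"loss": 0.1, "latency": 40,  "jitter": 2},
    "broadband": {"loss": 2.5, "latency": 90,  "jitter": 12},
    "lte":       {"loss": 0.8, "latency": 120, "jitter": 25},
}
print(tunnels_meeting_sla(stats))  # ['mpls', 'lte']
```

In a real workflow the output of a check like this would feed a policy change or an alert rather than a print statement.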

We have a couple of deployments that have gone way past 1000 sites, and we see these kinds of principles play out really well. It helps ease the overall transition. Ultimately, if we were to do a marketing pitch, we would say SD-WAN is great – you get to fly. But having gone through the deployments, we want to make sure customers go through a certain journey. The journey may involve setting up a few sites at a time – essentially crawling – and getting comfortable with it; then starting to roll out maybe 20, 30 sites a day, which would be the walk, transitioning over the WAN.

Once you have all of the items automated, whether it’s [unintelligible 00:10:00] bring up the policy, bring up network upgrades, or any changes that you want to make, then you can start to fly as well.

So with that as a backdrop, I’ll transition off to David, who will walk through the technology and the architectural elements; also, what is the first step. As they say, a journey of a thousand miles starts with a single step. It’s important to take that first important step. So, David …

David: … talking about the specifics of what you’ve heard before from Lloyd and Ramesh. So, how do you start? You have bought into a vision of SD-WAN. You understand the principles of SD-WAN. You understand how SD-WAN is going to transition your business, but where do you start? As you’ve seen with the previous slides that Ramesh was talking to, it’s a journey. So the journey has to start somewhere.

We have drawn quite a lot of experience from our deployments, and here is what I wanted to share with you. Look at this drawing. On the left you can see your traditional, existing MPLS network, which is based on your existing CPEs. Now you want to ease into how SD-WAN is going to be incorporated into this network. Many organizations want to start slow. So what we’ve observed is that people deploy a side-by-side software-defined wide area network by introducing new CPE devices.

In this case, these are Viptela vEdge routers, connected to just internet circuits. This could be broadband, cable – anything that is not consumed by your production environment. That allows you to create a safe environment to experiment on and selectively send traffic between site A and site B. Site A and site B could be two branch offices, a branch office and a data center, a branch office and the cloud, or a data center and the cloud. There’s really no restriction on what site A and site B mean; they are just two locations within your network. So this is a very safe approach if you want to get experience running SD-WAN.

What other starting points can you have? A more meaningful starting point is to extend your SD-WAN network across both your existing transport – which is primarily MPLS – and your broadband transport. In this case you build connectivity across your existing routers, your existing CPEs, to establish active-active paths. The paths traverse your existing infrastructure. So you can run two types of traffic side by side: you can have your existing traffic still going through the existing routers using what you can think of as an underlay path.

Then you can have an overlay path, which is the traffic that hits the vEdge routers first and then chooses either the tunnel that goes over the internet or the tunnel that goes over the MPLS. You can see here there is no need to provision direct connectivity to the MPLS network from the vEdge router CPE device. It relies on the existing router, using that router’s connectivity to the MPLS network as a means to establish the secure virtual network.

Now, there are cases where an additional connectivity to the MPLS network is also possible. In this case, you can see the difference in the picture on the right. The vEdge routers, the CPEs, actually have connectivity into the MPLS network directly; in which case they’re able to establish a secure virtual network across an MPLS network without passing across or on top of the existing CPE devices, existing routers.

Again, all three approaches allow you to run your existing traffic through the underlay in parallel with the traffic of interest, where you would like to start using the principles of software-defined networking. Two types of traffic can run in parallel, whatever your starting point is. At the end of the day, you are trying to maximize the benefits of SD-WAN. You’re trying to get into a situation where you have the simplicity SD-WAN brings, and you have redundancy. You can see that the existing CPE device was replaced by a redundant vEdge device.

You now have native SD-WAN connectivity across all of the transports that are available to you directly, without any dependency or reliance on existing CPE devices. Again, those CPE devices can, with time, be replaced. You can transition from the previous slide – the three beginning steps that you can take – to this target-state architecture.

Now, of course this does not happen overnight. It will take time to transition from the hybrid environment, where you have an existing and new SD-WAN, into an all SD-WAN environment. That journey can last anywhere between months and years.

So let’s look briefly at how the two worlds can coexist and interoperate in the interim period when you have both solutions running in parallel. The starting point is the hybrid network. You have some portion of your network that has already transitioned to the SD-WAN approach. That would include a certain number of data centers. Those could be regional data centers or global data centers; they can be in one geography or multiple geographies. But you have a certain number of data centers that have transitioned to the SD-WAN network, and then a certain number of branch offices that have transitioned as well. Think about that as the core of your SD-WAN environment.

So, side by side, you have sites that are exclusively connected to the MPLS network – these are the remote sites that have not yet migrated into the SD-WAN environment – and you also have sites that are exclusively on the internet. Neither of those has transitioned to SD-WAN. So the questions are: how would those sites talk to each other, and how would they interoperate with the sites that have migrated to the SD-WAN environment?

So the simplest way to do that is to extend the SD-WAN fabric across those existing transports, MPLS or internet, into the data centers that are the interconnect points between the sites sitting on the disparate transports. You can think about the data centers as the glue that joins an MPLS-only site and an internet-only site into the cohesive SD-WAN fabric. Of course you can, at any given time, add an MPLS transport to the internet-only site, or an internet transport to the MPLS-only site, to provide a fully meshed fabric. But this is an optional step, and some of our customers are happy with this sort of diverse approach, having sites that reside on only a single transport.

Now, in addition to those, there could be sites that have not yet migrated to the SD-WAN network at all. These are the sites that still operate in the MPLS underlay space. If you think about what those sites are, they are the ones I mentioned that have not yet migrated, and also the services that you are consuming from the MPLS service provider, which are provided in the underlay space. Examples are security services or voice/video control services.

So what’s your way to communicate from the SD-WAN environment to this non-SD-WAN underlay environment? Again, you can make use of the data centers, in which case you establish a BGP routing relationship between the PE and CE devices on the MPLS network to extend reachability from the SD-WAN fabric – be that on a hybrid network, exclusive MPLS, or exclusive internet – into the sites that have not yet migrated at all and sit only on the MPLS network without any SD-WAN awareness. There is also an option to create shortcut connectivity from the remote site directly by establishing a local BGP relationship between the PE and CE devices.

Now, to double-click a little on what it means to establish BGP connectivity between the PE and CE devices: in this picture you can see three sites. You have the remote site, the data center site, and the legacy site that has not yet migrated, with an MPLS network in between. The SD-WAN network – in this case the Viptela SD-WAN network – makes use of an overlay management protocol (OMP) to distribute all the information about reachability and security characteristics across the sites participating in the SD-WAN network.

So you can see here there are three subnets: 10.10.1.x at the remote site, 10.10.2.x at the data center, and 10.10.3.x at the legacy site. If you look at the routing tables of the CPE devices – the Viptela vEdge devices – they of course have reachability into the other sites’ networks; they just advertise them across the overlay network. But they have no visibility and no routing information about the site residing at the location that has not yet been SD-WAN enabled. Yet that location has a BGP peering to the PE device. This is a well-established practice in the MPLS world, where you have BGP peering between the provider edge device and your customer [unintelligible 00:21:07] device, or customer edge device.

So the way you would integrate this legacy site into the SD-WAN infrastructure is by enabling BGP peering at the data center site. Again, we’re talking about the data centers as providing a means of connectivity between the overlay and underlay worlds. By enabling this, we immediately start learning the reachability for the legacy site through the underlay, with the next hop being the provider edge device at the service provider location.

When that information is learned, it is also advertised through OMP to the remote site. You can see that 10.10.3.x now also appears in the remote vEdge’s routing table. So in fact we have now established reachability: native SD-WAN reachability between the [unintelligible 00:22:03] on the SD-WAN fabric, and also reachability from the remote site through the overlay network to the data center, and onward through the underlay into the MPLS network, to reach the legacy site.

So this is one example of how you can integrate a site which has not yet been SD-WAN enabled and is residing in the underlay space in the MPLS network.
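The redistribution just walked through can be modeled as a toy route exchange. The prefixes follow the slide (10.10.1/2/3); the table structure and next-hop strings are purely illustrative, not Viptela's actual data model:

```python
# Routing tables keyed by site; each site starts knowing only its own prefix.
overlay_routes = {"remote": {"10.10.1.0/24": "local"},
                  "dc":     {"10.10.2.0/24": "local"}}

def advertise_omp(routes):
    """Every SD-WAN site advertises the routes it knows to every other site,
    mimicking OMP distributing reachability across the overlay."""
    for src, table in list(routes.items()):
        for prefix in list(table):
            for dst in routes:
                if dst != src and prefix not in routes[dst]:
                    routes[dst][prefix] = f"omp via {src}"

# Step 1: normal OMP exchange between the SD-WAN sites.
advertise_omp(overlay_routes)

# Step 2: the DC vEdge peers with the PE via BGP and learns the legacy
# site's prefix through the underlay, next hop the provider edge device.
overlay_routes["dc"]["10.10.3.0/24"] = "bgp via PE"

# Step 3: the BGP-learned route is redistributed into OMP, so the remote
# site gains reachability to the legacy site through the data center.
advertise_omp(overlay_routes)

print(overlay_routes["remote"]["10.10.3.0/24"])  # omp via dc
```

The remote site now reaches the legacy subnet over the overlay to the data center, which hands the traffic into the MPLS underlay – exactly the two-leg path described above.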

Now, we also briefly mentioned the clouds previously. Some of the triggers for SD-WAN migration have to do with cloud, so let’s talk very quickly about two types of clouds you may be looking at. The first is the infrastructure-as-a-service cloud. You have a private cloud environment, and you are trying to transition into the public cloud, or trying to enable hybrid cloud connectivity. Your SD-WAN network consists of sites, campuses, branches, and corporate data centers; yet you also have cloud data center resources. Those could reside on a public cloud – for example AWS, Azure, Google Cloud, or SoftLayer; whatever flavor of public cloud is out there.

The question is, how do you bring those resources into your SD-WAN fabric? The most convenient way of doing that is by [unintelligible 00:23:29] your virtual CPE device. You can think about this as a virtual Viptela vEdge device residing directly in the cloud infrastructure. It becomes an integral part of your overlay, and that cloud data center basically becomes another location on your SD-WAN fabric. So this is a very easy, very streamlined way of incorporating an infrastructure-as-a-service environment into your SD-WAN fabric.

The second one to look at is how you start adopting software-as-a-service cloud applications while maintaining the quality of experience in accessing them. One of the most important things here is how you choose which locations in your network are designated as points to access those cloud applications. You have choices. You have flexibility.

The first option you can opt into is leveraging cloud security, where traffic is allowed to go directly from the remote office to the cloud applications; yet, if you’re concerned about securing that communication, or about creating policies around what kind of traffic is and is not allowed, then you can employ cloud security: traffic on its way to the cloud applications must pass through a cloud security provider’s network. In our case, we have a very tight integration with [unintelligible 00:25:13] secure cloud gateways.

The second option is that some of your locations, such as larger campuses, could have [unintelligible] security elements – firewalls, IDS, IPS devices – provisioned there. That is a very traditional way of delivering secure cloud application access: inspecting the traffic going to cloud applications that reside on the internet. Before it is allowed to exit your network, it is subjected to that security enforcement point.

Finally, the approach of regional security is gaining very significant interest from many customers we’re working with. How do you maintain the level of security when you’re not quite comfortable with cloud security – you still want your own security enforcement points in your network, in your regional data centers – yet you are not comfortable deploying firewalls at every one of those remote locations?

In this case, a model of service insertion through the regional data center plays very well. The way it’s adopted is: you have certain Layer 4 through Layer 7 service nodes that are advertised into the SD-WAN fabric. Those can be firewalls, IDS, IPS devices, or whatnot. They are advertised into the SD-WAN fabric as resources available to be consumed. Then, through SD-WAN policies, you can steer traffic to those regional data centers based on the QoS characteristics you need to uphold for those cloud applications.

So the choice of the regional data center is tightly integrated with the quality of service you want to observe. It’s not just connectivity; it’s connectivity with quality of service. Traffic is steered from the branch locations into the regional data centers, where it is subjected to security inspection. From those regional data centers, it is allowed to proceed to the cloud applications.
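The exit-point choice described here – pick a regional data center based on measured quality of service, not just connectivity – can be sketched as follows. The data-center names, the latency figures, and the single-metric SLA are all hypothetical:

```python
def pick_exit(regional_dcs, max_latency_ms):
    """Choose the lowest-latency regional DC whose measured latency still
    satisfies the application's SLA; None if no exit point qualifies."""
    eligible = [(metrics["latency"], name)
                for name, metrics in regional_dcs.items()
                if metrics["latency"] <= max_latency_ms]
    return min(eligible)[1] if eligible else None

# Hypothetical measured latency (ms) from a branch to each regional DC.
dcs = {"dc-east": {"latency": 35},
       "dc-west": {"latency": 80},
       "dc-eu":   {"latency": 140}}

print(pick_exit(dcs, max_latency_ms=100))  # dc-east
```

A fuller policy would weigh loss and jitter too, as in the earlier SLA discussion, but the shape of the decision is the same.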

So these are the two sides of cloud adoption: infrastructure-as-a-service adoption, where you can seamlessly make the cloud part of your network; and software-as-a-service adoption – cloud application adoption – in which case you can be very crafty and very flexible about designating the points of access to the cloud applications while keeping them secure and, more importantly, upholding the specific quality-of-service characteristics while accessing those applications.

So with that, I want to open it up for questions and see if we have any interesting things we want to bring live, answer some questions for the time that we still have. Lloyd, Ramesh, anything you want to bring up that came up in the last 15 minutes or so?

Ramesh: Sorry. [Unintelligible 00:28:32]. So, yes, actually, a few questions have come about. David, if you go back to the migration steps, I think it will be easier to explain with that. So the question is really around, as you go through the – actually this slide, please.

David: Oh, this slide.

Ramesh: That’s right.

David: All right, just build it out.

Ramesh: Yes. So the question is really around: existing sites continue to be the way they are – how do you integrate routing into the mix? There was mention of the word [unintelligible 00:29:13] BGP here. So if my data center is running BGP, and my branch is running OSPF, can they still continue to operate?

David, do you want to take that?

David: Oh, sorry. Yes. So, the use of routing protocols is very important – essential, in fact mandatory – to be able to interoperate between the existing world, which is driven by routing protocols, and the SD-WAN world, which makes use of routing protocols yet builds logic on top of them. Comprehensive routing intelligence is really the common denominator across all things network. As we outlined on this slide, the use of BGP to interoperate with an existing MPLS network – and BGP is the de facto standard for an MPLS deployment – and the ability to incorporate that BGP intelligence into the overlay routing on the SD-WAN fabric are extremely important, essential elements of the solution.

It’s not just support for routing protocols at the edges of your network; it’s the ability to extend that routing intelligence across your entire network. So there are lots of considerations here. These are the things we work through with our customers in architectural discussions, proofs of concept, and production deployments. They are not trivial; they require careful consideration, careful planning, and strong expertise in both secure overlay fabrics and underlay routing protocols to deliver a meaningful service. It is absolutely not a trivial problem to solve.

Ramesh: Right. Closely associated with that are a couple of questions. One is whether, to create the overlay, you [unintelligible 00:31:32] with the MPLS network first to learn the destination networks. The short answer is: in order for two sites to talk to each other over the MPLS network, you need to know the transport IP endpoints – essentially, the IP addresses or location endpoints of the various CE devices that are talking to the PE devices.

In most cases, that’s learned through BGP from the service provider. So in order to create an overlay over the MPLS network, yes, you do need BGP. But that BGP need not carry the customer-side prefix information. All the prefixes that stay inside the data center and inside the branch need not pass over that BGP session. They can still go over the overlay network; they do not have to be in the MPLS network at all.

So you do need to enable BGP, but only to learn reachability for the PE-CE connections on the remote end. I think that brings us to the other question, around zero-touch provisioning. If you have a device that is connected over only MPLS, can you do zero-touch provisioning? The short answer is: if the PE is DHCP enabled, and does not require a static IP, then you can actually [unintelligible 00:33:09] based IP address and then use that to do a full BGP.

What we have seen is that many provider edge devices across multiple carriers do not support DHCP-based IP address assignment. So, as a result, you have to configure an IP address on the interface, at which point it becomes one-touch provisioning. All you need to do is configure an IP on the CPE device connecting into the PE. The device will still go through the full zero-touch provisioning process – authenticate, [unintelligible 00:33:45] – and the network will be operational after that.

David: Right, and it’s probably worth mentioning that we are working on enabling zero touch [unintelligible 00:33:56] process over an MPLS network. So stay tuned for that, too.

Ramesh: Right. A couple of other questions. One is: what’s the impact of a virtual-machine-based endpoint on the host, and how much [unintelligible]? The short answer is, it depends on capacity. We do provide guidelines on how much encrypted throughput you can push through a VM with one or two vCPUs. You can use that as a benchmark and scale it up linearly, depending on capacity. So you can [unintelligible 00:34:37] unit for, say, 100 meg, 300 meg, 500 meg, whatever that is, and then throw in as many vCPUs [unintelligible], and it will linearly scale the bandwidth accordingly.
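The linear-scaling guidance reduces to back-of-the-envelope sizing. The 250 Mbps-per-vCPU figure below is purely illustrative, not a Viptela datasheet number; only the linear relationship comes from the discussion:

```python
def vcpus_needed(target_mbps, mbps_per_vcpu=250):
    """Size a virtual CPE for a target encrypted throughput, assuming
    throughput scales linearly with vCPU count (ceiling division)."""
    return -(-target_mbps // mbps_per_vcpu)

print(vcpus_needed(100))  # 1
print(vcpus_needed(500))  # 2
```

In practice you would substitute the vendor's benchmarked per-vCPU throughput for the placeholder default.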

So David, there is a question for you: how much latency does the SD-WAN controller introduce?

David: Right. So, the SD-WAN controller introduces absolutely zero latency. The SD-WAN controller is a fully [unintelligible 00:35:11] element. No data communication happens between the vEdge routers and the SD-WAN controllers. So the controllers can be completely remote, geographically dispersed in any part of the world, as long as there is IP connectivity between the vEdge routers and the controllers, which we call the vSmart controllers.

So the vEdge routers and the vSmart controllers need IP communication between themselves to exchange relevant information, but not a single data packet goes between them. So there is zero increase in latency based on the geographical location of those controllers.

Ramesh: Right. In fact, just to give you one of the extreme examples we have seen, there is a controller out here in southern California that’s actually controlling a couple of sites, one in the Middle East and another one in London. So you’re really talking about spanning not just multiple geographies, but multiple continents.

So, on a completely different topic, going back to the discussion on BGP: is BGP the only routing protocol supported in this case? The short answer is no. We support the full variety of routing capabilities, from static to connected to OSPF to BGP. Not just that, but we can also do [unintelligible 00:36:44] across all of them. This goes back to what David mentioned earlier. Let’s say I have a site that’s on BGP, and a default route from there goes through the MPLS underlay, and it’s advertised as the [unintelligible] route through one of the other sites – well, guess what happens: you have an interesting [unintelligible] computation that needs to happen.

So these are, again, some of the things you encounter in the actual process of deployment. There are enough mechanisms and tools in place so that, first, you don't have to think about these things: the system is intelligent enough to figure out which site is the source of a given subnet or prefix, so it can avoid loops and compute the best path accordingly. In some really complicated cases, where you completely lose the origin of the [unintelligible 00:37:37], you can also put policies in place to prevent these types of things.
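The origin-tracking idea described above can be sketched as a simple acceptance check: each advertised prefix carries the ID of the site that originated it, and a site rejects any advertisement of a prefix it originated itself, so the route can never loop back. This is a hypothetical illustration of the concept, not the actual Viptela/OMP implementation; the field names and site IDs are invented:

```python
def accept_route(local_site_id, route):
    """Origin-based loop prevention (illustrative): reject any route
    whose recorded origin is this site itself."""
    return route["origin_site"] != local_site_id

# Routes learned from the overlay; site 200 sees its own prefix
# re-advertised alongside a prefix originated elsewhere.
routes = [
    {"prefix": "10.1.0.0/16", "origin_site": 100},
    {"prefix": "10.2.0.0/16", "origin_site": 200},  # our own, looped back
]

accepted = [r for r in routes if accept_route(200, r)]
# Only the prefix originated by another site is installed.
```

The policies mentioned for the harder cases would layer on top of a check like this, overriding it when origin information has been lost.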

David: Right, absolutely. Routing intelligence is something that needs to be propagated across the network; it's not something that gets deployed only at the edges of your network, because that's just not how you build networks.

Ramesh: Right. There's a question on encryption, especially for countries that require lower-grade encryption. There's an elaborate answer, but the short answer is yes. We do have the ability to go lower on the encryption side. Some countries require 40-bit encryption or below, at which point I would say it's reasonable to just use GRE; because if anyone can break the encryption in an hour, why bother encrypting? But the short answer is yes: we can support GRE, no encryption, lower-grade encryption, or fully [unintelligible 00:38:38] with 2048-bit as well.

Another one for you, David. What about encapsulation? Is it standard or proprietary?

David: Right. The encapsulation is absolutely standard. We use the standard IPsec data-plane protocol between all of the vEdge routers in our network. We do not use any additional encapsulation. So it is traditional, standard IPsec encapsulation, running natively over an IP network with no additional encapsulation overhead.

Ramesh: Right. I think there was a request to show the interface, and I believe that means showing the management GUI through this WebEx. The short answer is that in this particular WebEx session, the webinar we have scheduled here, we were not planning to. But we have shown multiple demos with the management interface in the past. So you're welcome to look at those, or we'll be happy to set up a call with you and go through that in detail as well [unintelligible 00:39:53] interest.

David: Absolutely, yes. As we were trying to balance the time, we preferred to give you more meaningful information about how the system operates rather than spend the time clicking through GUI elements. But everything we have talked about so far is backed by a comprehensive GUI, which we call vManage: a single pane of glass for managing, monitoring, operationalizing, and troubleshooting your entire SD-WAN environment.

Ramesh: Right. I think we have time for one last question. David, this one's for you. What happens to in-flight traffic when [unintelligible 00:40:42]?

David: That's a very good question. As far as distributing traffic across the tunnels, we support per-flow load sharing, which has been proven over years in the networking industry as the best way to send traffic across multiple paths without introducing things like packet resequencing, which is very computationally intensive. So we are following industry-proven practices. For traffic that is already in flight and being sent across the network, it depends on where the failure occurs.

If the failure occurs in front of the packet, the packet will be lost. If the failure occurs behind the packet, the packet will be delivered. So it's a little bit of a tricky question; you'd need to look at where exactly the failure occurs. But at the end of the day, when a path goes down, the system seamlessly starts using all the other tunnels available between the two vEdge routers. So as long as there is a path between the vEdge routers, the traffic will be delivered, either across a single tunnel or across multiple tunnels with active-active per-flow load sharing.

There is no special machinery needed for things like rerouting traffic mid-flow. All of this is absolutely industry standard, proven over decades of experience.
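The per-flow load sharing described above is typically implemented by hashing a flow's 5-tuple, so every packet of a flow deterministically maps to the same tunnel and no resequencing is needed. Here's a minimal sketch of that idea; the function name and use of SHA-256 are illustrative (real forwarding planes use hardware hash functions), not the actual vEdge implementation:

```python
import hashlib

def pick_tunnel(src_ip, dst_ip, src_port, dst_port, proto, num_tunnels):
    """Per-flow load sharing (illustrative): hash the 5-tuple so every
    packet of a flow takes the same tunnel, avoiding reordering."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_tunnels

# Packets of the same flow always map to the same tunnel index,
# while different flows spread across the available tunnels.
t1 = pick_tunnel("10.0.0.1", "10.0.1.1", 40000, 443, "tcp", 3)
t2 = pick_tunnel("10.0.0.1", "10.0.1.1", 40000, 443, "tcp", 3)
assert t1 == t2
```

When a tunnel fails, `num_tunnels` shrinks and flows simply rehash onto the surviving tunnels, which matches the failover behavior described above.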

Ramesh: I think that’s all the time that we have. So thank you, everyone, for listening in. There are a few other questions as well. We’ll follow up with you individually on those questions. Thanks again for your time, and we look forward to seeing you in another webinar.

David: Thank you very much, everybody, for joining. Hopefully this was informative. Feel free to reach back to us or check out more information online. This webinar is also going to be posted. The first slide has our contact information as well, so feel free to reach out for any clarification you need.
