Three Steps to Reduce Costs & Cloud-Proof Your WAN

Viptela’s Head of Global Marketing, Lloyd Noronha, joins CFN Services’ Head of Product, Brian Heacox, to take a deep dive into the evolving cost and cloud-performance challenges of the enterprise WAN.

Webinar Highlights

  • Gain insight on the shifting cloud application & IaaS landscape
  • Discover how SD-WAN works to increase cloud network performance & achieve rapid branch deployment
  • Explore three steps to reduce rising bandwidth and commodity costs while adding value to the WAN

Presenters

Lloyd Noronha Head of Global Marketing, Viptela

Lloyd heads the Global Marketing team at Viptela. He brings 20+ years of experience in technology and business practices to drive cutting-edge marketing strategies in B2B environments.

Brian Heacox Head of Product, CFN Services

Brian heads the Product team for CFN Services. He is focused on shortening the cycle of scientific, industrial, and technological innovation by emancipating underlying IT & Communications systems held hostage by status quo enterprise architectures.

Transcript

Brian: Hey, guys, this is Brian. Thanks, Courtney. Once again, my name is Brian Heacox. I’m the head of product here at CFN Services. For those of you who are new to CFN, we have built a global application delivery platform focused not so much on the network but on the applications that the network supports. We grew out of the capital markets: we built the lowest-latency capital markets infrastructure for high-frequency traders, and we have taken that expertise and that deep knowledge of the telecommunications and application delivery space to put something really cool together for this new age of hybrid IT, using both on-prem and cloud environments for your mission-critical application infrastructure.

If we go to the next slide, we’re delivering application performance across the world: 185 markets in 41 countries. We do this with an infrastructure of 60 regional cloud hubs. We call them app hubs, with a roadmap to accelerate 400 cloud applications, the same ones that are mission-critical to your business, like Salesforce, Workday, and ServiceNow, the applications that are, in some degree, the lifeblood of your in-house IT. Of course, just as important as the infrastructure are the service vehicles to migrate your infrastructure out into the cloud.

Now that you know a little bit more about us, just to better tailor the presentation toward the audience, I’m very curious as to what your current WAN architecture looks like. If we could go ahead and start the poll, you’ll see the options there within your WebEx interface. We’ll give you a couple of minutes just to get a sense of what that looks like. Do you run an MPLS VPN? Do you have two MPLS VPNs, one for primary and one for backup? Is your WAN exclusively built on the public internet? Have you deployed a hybrid WAN architecture? If you don’t know, or if you’ve got another architecture that you’ve built yourself or created some other way, feel free to press “I don’t know.”

Courtney, let me know when we’ve completed the poll and it will be interesting to take a look at that result there.

Courtney: Great. Looks like the poll is completed. To share those, we are just about there. Okay.

Brian: Excellent, cool. Cool. From what I can see, we’ve got a majority of folks who are using a pure MPLS VPN as their primary. Some, which I’m very happy to see, have moved to a hybrid structure where they’re utilizing the best of both worlds in terms of their private connectivity and internet to leverage speed and performance. That’s good. That’s a great baseline. I’d love to compare those architectures down the road.

What I think is most interesting here, or most compelling, is this new cloud-ready WAN. The goal here is to show the audience what you can do to architect your WAN in such a way that it is built for the cloud. Viptela and CFN were born in the cloud age. This infrastructure, this architecture, is cloud ready. Let’s take a look at a case study here.

This is a joint customer of CFN and Viptela in construction and architecture. They’re a global firm, 150 locations across the world, and they move rapidly as projects get completed and dry up. New sites come online every month. There are over 5,000 employees. They are utilizing two different infrastructure-as-a-service vehicles, cloud service providers, for their mission-critical IT, and they’ve got an aging branch IT infrastructure, which, you can imagine, with 150 sites across the board, means that refresh cycle is constant. As locations move, it’s quite a challenge. There were a couple of challenges they were facing, and they did go to their previous MPLS provider to try to solve some of these problems.

Their network architecture was complex, and they had locked-in internet gateways at their two data centers, which meant that internet performance was really slow for users across the world. For users in Dubai, for users in Miami, for users in San Jose, that traffic was all being backhauled. That wasn’t really good, because the backhaul raised MPLS costs and made the internet slower. Adding a new site was a challenge as well. MPLS delivery times are traditionally at least 90 days, and in a lot of cases 120 days. With customer locations popping up from month to month, that needed to change. That lead time needed to be shortened, but they weren’t able to find a way to do it.

Latency was disrupting application performance, especially with large file sizes; these are 60 GB files that need to get pushed around from architect to architect, from user to user, across this private infrastructure, and bandwidth and round-trip time are really important for getting that collaboration going. If they wanted to do this in the cloud, it became really expensive. You know, you can get an MPLS line to a cloud provider such as Amazon or Microsoft Azure, but if you want that data center connection and you need throughput of, let’s say, a gigabit, that can be very, very cost prohibitive.

When Viptela and CFN introduced their solution, our solution, there were some pretty astounding results. We want to show you how to achieve these results yourself in the following slides. What we were able to accomplish was 20 times faster simulation modeling, very important for making products move faster and smoother, and a 25% reduction in the total cost of ownership. This number includes network spend and data center spend. Network spend is typically a large portion of that, but it wasn’t just network; this is network infrastructure and data center infrastructure.

Also, with that lower cost of ownership came a 2.5-times bandwidth increase per employee, providing better-quality service across the board, while at the same time reducing critical application latency, across the suite of applications they run in the data centers and in the cloud, by 30%. That means they could start to work on the cool things like augmented reality, where architects are able to collaborate in real time with avatars and their colleagues across the world. They’re able to visualize buildings going up in front of them, walls going up right in front of their eyes. That is not possible without the low-latency, high-performance platform that Viptela and CFN have architected for them together.

How does this work? What is the fundamental shift? Years ago, when applications were not in the cloud, they used to be in a familiar place: the four walls of the enterprise data center that your company had erected and managed servers in. The network that you bought from provider A or provider B delivered performance and private connectivity directly into that data center, where you had control over how every application was routed. It was all inbound on the network, so going out to the internet through the firewall was not a big deal, and you could control where applications were flowing because it was all on that private network.

But as that landscape has started to move, those applications are no longer within your control. They are somewhere out on the internet. Of course, the internet is really just the network of networks, so they do exist in some data center somewhere, but you just don’t know where. The fundamental question is: if the application landscape has shifted, and the MPLS or WAN that you’ve built was built to support those applications, then shouldn’t my network shift to meet the new application landscape? It makes sense. Your network is built to support applications. If your applications have moved, you need to be aware of it.

When you move to the cloud and your applications have moved outside the data center, what can happen if you forget about this infrastructure and don’t move to a cloud-ready WAN? Well, if applications are now reached over the internet and you are not able to see them or control them, there are performance issues. If your Salesforce instance is in Los Angeles and most of your employee base is sitting on the East Coast, then around noon you’re going to see some performance degradation, because that’s when the West Coast is just getting online.

When you have all these applications coming over the same internet gateway, there’s no way to say, “Hey, which application is which?” With traditional technologies, you’re not able to prioritize which internet app is more important than another. That’s when you get bandwidth contention. You might be sacrificing the quality of your most important applications to those that aren’t important. Maybe it’s YouTube, maybe it’s online shopping that’s going on behind the scenes in the office. It could be anything.

Some people fight this with VPNs. If your cloud service provider has agreed to enable a VPN connection from your private WAN into their network, that’s a possibility. But you do not always get what you ask for there, and those VPNs break down, and then you are constantly having to manage and patch the firewall or VPN device every time those IP addresses change, and those cloud providers are always moving IPs around. That contributes to a loss of control.

In a lot of frameworks, there is a cost-benefit play when you’re comparing the traditional network architectures, the ones that you’ve all voted on. MPLS can be pretty expensive; 50 to 80 bucks is toward the lower range of the MPLS prices we see today. But you are getting the reliability, and you are getting pretty good performance. It’s not deterministic, but you’re getting a latency range: from point A to point Z, I’m expecting that this will be within a certain bound, and that’s good enough to run the application. If you’ve got dual MPLS, you’re paying double that, but you’re getting additional reliability because you’ve got a great backup solution, and you’re paying for that premium. That’s okay.

Some of you opted for an internet-based WAN, where cost is probably your main driver. There is a significant decrease in network bandwidth cost with the internet, but you’re sacrificing reliability and getting best-effort performance for reaching your applications and reaching back to other sites across your network: your headquarters, your data center, etc.

Then, like I said, I noticed there are a number of hybrid WAN responses. Very good. That’s where you’ve decided, “Hey, I can use MPLS technology as my primary and utilize internet where need be across the rest of the WAN.” Maybe I have a small site that doesn’t need a full MPLS circuit. Maybe that internet link will just be a backup for a site that does have MPLS. It’s a good way to achieve additional reliability without spending twice the cost of traditional MPLS and private WAN. That’s how it performs in the on-prem world. How does it perform in this new ecosystem where applications are in the cloud?

Well, traditionally, that traffic still has to go out to the internet, so you may have great reliability site to site, and your MPLS network may be up and the site may be up and you may have internet access, but when you get to the cloud, performance and latency are all going to be best effort, because you do not know where that traffic is headed. It might even be slower than if you had broadband internet directly at your branch location. So what does this cloud-ready WAN do? How does it solve both problems, for both on-prem and cloud? Well, the great thing about it is that, through the architecture, the cost comes in at a much better price point.

You might say, “Hey, how is that possible? Great performance at a better price?” Well, it’s in the architecture. We are happy to show you the different components of how to achieve that architecture yourself, as we’ve done for customers. The three elements of this cloud-ready WAN are, one, software-defined WAN; Viptela and Lloyd are the specialists in SD-WAN. The second is this architectural concept of the carrier-neutral data center. Third are the cloud interconnects that get you blazing-fast performance for cloud applications. With that, I’m going to hand it off to Lloyd to cover SD-WAN, and I’ll follow up with the other components afterwards.

Lloyd: Wonderful, thank you so much, Brian. I’m hoping everyone can hear me okay. From Brian’s slides, I think you clearly understood that there is a real problem in terms of cloud adoption in enterprises. That ends up being a major trigger point when customers come to us and talk to us about the reason to make changes. Specifically, to improve cloud and SaaS performance, there are two distinct components to this.

The first, as Brian pointed out, involves the network piece, which is everything that happens until you can reach an internet exit point. The second is everything that happens after that, which is the remaining two elements Brian showed you and will cover later in the presentation. With SD-WAN, we have reached a critical stage in terms of deployment. Why SD-WAN matters today is no longer a question asked only by early adopters.

It’s reached a stage where the broader market keeps asking us these questions. It’s widely covered by all major analysts today, and if you look at most of the forecasts that analysts put out there, it is probably one of the most aggressive ramps of a networking technology that we’ve seen in recent years. For example, IDC is calling for this market to grow by over 90% in the next few years, to reach about $6 billion in 2020. Gartner thinks 30% of enterprises, which is a stunningly large number, will adopt SD-WAN in the next few years.

Within SD-WAN, Viptela specifically has been there for about four years. We are clearly now the largest player in SD-WAN across all the major verticals in the space. If you look at retail or banking, we have the largest deployments today. We are probably the only solution that’s widely deployed across all verticals. We have about 15,000 production deployments of Viptela out there, and all this within the last two years, since we’ve been shipping the product. Our niche specifically has been solving larger, more sophisticated networks, and we are now moving down to the mid-market. From a Fortune 500 standpoint, that is the place where we’ve been able to dominate and solve problems in all major categories, and now we are aggressively ramping up in the mid-market.

In terms of partnerships, we are partnering with most of the leading service providers across the globe. Some of these are already public, such as Verizon and SingTel, and some of them you’re going to hear about as we move into the next few months. With that, I want to take a step back and explain the various reasons enterprises are driving toward SD-WAN and what has made this such an urgent situation. I won’t be able to cover all the reasons, but if I were to pick two or three, clearly cloud adoption has been the biggest trigger.

Enterprises move to Office 365 or Azure and discover that their latency is terrible. To fix the problem, they try to get to an on-prem Office 365 solution, but they still face terrible issues, and now they have to inherit the management of that solution, which they didn’t have to do with the cloud-hosted solution. These are the kinds of tricky situations we are constantly called into from a cloud perspective, and that’s just one application; if you look at applications across the board, there is a consistent pattern of latency problems.

The reason is very simple. As Brian pointed out, there is an inefficient architecture today on the WAN side. The entire WAN architecture was crafted for a time when applications were sited in the data center, and today most of them are in the cloud. We are force-fitting an old architecture onto a new, agile application model, and that’s breaking the entire IT architecture.

That’s the first reason. The second set of reasons is more classic. Enterprises have multiple disjointed WAN solutions. They have dual MPLS links that are managed differently by two carriers, and they might have a third internet connection which they are managing by themselves. So we have these large, complicated, sophisticated networking systems that have become so complex that rolling out policy changes or implementing a new service takes anywhere between 6 and 18 months. That is a broken model, because most of the other teams are able to move very quickly, but the networking team is stuck with an old architecture that relies on device-by-device management.

That, combined with the fact that bandwidth itself is really expensive, alluding to Brian’s point: MPLS bandwidth is just way too expensive, and traffic is growing 30% year over year. Also, the security model is very fragmented today. You have one security model for internet and one for MPLS, and when you have to roll out a quick policy change based on some geographical or application event, you realize you’ve got to do it across multiple networks on a per-device basis. That’s what leads to most of the complexity and an inconsistent security framework.

That’s where SD-WAN comes in to solve the problem. The way we solve this is very simple. We abstract the WAN services into software and overlay them over the existing infrastructure. We essentially take most of the intelligence in the WAN and package it as an overlay solution that fits over internet, MPLS, LTE, etc., based on the requirements of the application. The entire solution is cloud-managed, and essentially you are able not only to have a Viptela instance at each of your sites, but you can have them in the cloud as well. What this fundamentally allows you to do is no longer look at the WAN on a link-by-link basis. You’re just looking at the WAN as a service that provides reliable application delivery for both cloud and in-house applications.
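
To make that abstraction a little more concrete, here is a minimal Python sketch of the idea under a simplified, assumed data model; the class and field names are illustrative only and are not Viptela’s actual software or API. The point it shows is that one logical WAN exposes every underlay link, regardless of transport, in a uniform shape that policy can act on.

```python
# Minimal sketch of the overlay idea, with illustrative names only
# (this is not Viptela's actual data model or API).
from dataclasses import dataclass, field
from typing import List

@dataclass
class UnderlayLink:
    name: str            # e.g. "mpls-1", "broadband-1", "lte-1"
    transport: str       # "mpls" | "internet" | "lte"
    latency_ms: float = 0.0
    loss_pct: float = 0.0
    jitter_ms: float = 0.0

@dataclass
class Site:
    site_id: str
    links: List[UnderlayLink] = field(default_factory=list)

@dataclass
class LogicalWAN:
    """One WAN, many underlays: policy sees every link the same way."""
    sites: List[Site] = field(default_factory=list)

    def links_at(self, site_id: str) -> List[UnderlayLink]:
        site = next(s for s in self.sites if s.site_id == site_id)
        # Transport differences only show up in the measured metrics that
        # application-aware policies evaluate; nothing else is special-cased.
        return list(site.links)
```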

What I mean by that is, if you are an enterprise like a bank, like one of our large bank deployments, resiliency is critical. When they look at SD‑WAN, they lead with resiliency. They want to have internet, MPLS, and LTE managed as a single WAN, and if there’s any degradation happening on any link, they want the applications, especially the critical applications, proactively steered onto the best possible link. That’s the major focus within the bank.

On the healthcare front, in addition to reliability, the big focus is on security. They want to make sure that all data going out is encrypted and all devices are authenticated. They need segmentation between the guest wireless and the different systems they have within the healthcare environment to meet the regulations. Different organizations have different requirements, so it’s very important for the WAN to not only be agile, but to also meet the security requirements, the reliability requirements, etc., of these large-scale, mission-critical institutions.

Now, our philosophy in SD-WAN, if we were to break this up into an architectural model, is essentially very simple. We have a three-tier architecture. In the bottommost tier we essentially, in many ways, commoditize the underlay. We make sure that the entire WAN works consistently the same whether it’s on broadband, MPLS, or cellular. In fact, it’s one single WAN with multiple underlays. At the same time, we are aware of the links below; we are aware of the application performance and the performance of each of those links, so everything that you want to layer in from a routing, security, segmentation, or QoS standpoint happens consistently on top of the underlay architecture.

If you have, for example, a critical application like voice that you think should never go down, you can handle that by means of a policy. If you want to say that low-priority applications like Facebook should never use a high-value infrastructure like MPLS, you can ensure that happens with policy. It becomes one consistent policy that is centrally managed over this underlay, and that gives you a very strong delivery platform. Over that, you can then layer in an application framework where you’re thinking about application SLAs, what locations you want to have your firewalls in, how you want to define your cloud paths, etc., and all of this gets managed consistently with a very new-age dashboard where the operations team as well as the architecture teams can collaborate and see up-to-the-minute performance across the full network.
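
As a rough illustration of what such a centrally managed, application-aware policy could look like, here is a short Python sketch. The application names, thresholds, and data shapes are assumptions made for the example; this is not Viptela’s actual policy syntax.

```python
# Hypothetical policy sketch: protect voice with an SLA, keep Facebook
# off high-value MPLS. Names and thresholds are illustrative only.
CENTRAL_POLICY = {
    "voice":    {"allowed": {"mpls", "internet", "lte"},
                 "max_latency_ms": 100, "max_loss_pct": 1.0},
    "facebook": {"allowed": {"internet"}},          # never on high-value MPLS
    "default":  {"allowed": {"mpls", "internet"}},
}

def candidate_links(app: str, links: list) -> list:
    """Return the links this application may use. Each link is a dict like
    {"name": "mpls-1", "transport": "mpls", "latency_ms": 22, "loss_pct": 0.1}."""
    rule = CENTRAL_POLICY.get(app, CENTRAL_POLICY["default"])
    ok = [l for l in links if l["transport"] in rule["allowed"]]
    if "max_latency_ms" in rule:
        ok = [l for l in ok
              if l["latency_ms"] <= rule["max_latency_ms"]
              and l["loss_pct"] <= rule["max_loss_pct"]]
    return ok

# Example: voice may ride any link that currently meets its SLA.
print(candidate_links("voice", [
    {"name": "mpls-1", "transport": "mpls", "latency_ms": 25, "loss_pct": 0.1},
    {"name": "broadband-1", "transport": "internet", "latency_ms": 140, "loss_pct": 0.4},
]))
```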

You can get complete visibility of the full network. You can see which applications are performing or having problems at any given time. You can also see which portions of your network have outages while, at the same time, none of the applications are affected. You get visibility into the application SLA pieces. This becomes a very critical piece, the first step to building a sound architecture that solves, essentially, your cloud performance problems. I want to take us back to Brian at this point to explain the other elements of it.

Brian: Excellent, thank you, Lloyd. SD-WAN, I will reiterate, is a critical, critical component of the solution. Sorry, I received feedback that I’m not being heard. Is everything good?

Courtney: You’re coming in clear there, Brian, thanks.

Brian: Oh, great. Sorry, sorry. Sorry for the delay there. SD-WAN is step one. Step two is this notion of the carrier-neutral data center. The carrier-neutral data center is a concept that goes back to the beginning of the internet, actually, where different telecommunication providers decided to come to one central place in order to interconnect their networks. That’s the internet. These carrier-neutral data centers have dozens and dozens of carriers within them. Those are names like CoreSite, Equinix, Telx, TeleCity (just acquired by Equinix), and so forth.

You know what happened: the network providers who were there connected everyone to customers, and then the content providers moved in. Once the content providers moved in, the cloud providers moved in, because what they said was, “If I need to be close to my customers, if I want the best performance to deliver applications to those customers, I need to interconnect my application and data center infrastructure directly with those networks.” Some pretty fascinating things happened, and the cloud service providers became really well tied into network services.

With that in mind, it makes a lot of sense to put the enterprise infrastructure in the same location. That way your servers, your computing assets, your facilities are only a data center hop away from those cloud services that you are now consuming. In that same location, because there are dozens of carriers available, it’s Economics 101: increased supply creates more competitive pricing. So you have additional carriers, one, two, three, four, five, six, seven, eight, nine, ten, that you can all ask to quote out connectivity to a specific office site.

Depending on those requirements, you might go for the low-cost provider, or you might go for those who can deploy in a 30-day lead time. Those options are all available when you place infrastructure in these carrier-neutral facilities. That is how we’ve delivered all of our leading performance for customers. When we’re able to bring the cloud services into the core of the enterprise network, into the core of the enterprise WAN, our customers and those who have adopted this architecture enjoy the benefit of LAN-like performance for apps that could previously only be accessed over the internet.

Here’s a simple case study. This was done with a partner of ours, Equinix, and compared the time to upload or back up an Oracle database over the internet versus using one of these cloud interconnects. Sent over the traditional internet, without a cloud interconnect, the process took 6-1/2 hours. With the cloud interconnect, it was done in a half hour. They back up their database on a consistent basis to maintain good resiliency, so that if the original were to have an issue, the backup is readily available and much easier to bring back as the primary database resource.
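
For a rough sense of what those numbers imply: transfer time scales inversely with throughput, so dropping from 6.5 hours to half an hour means roughly 13 times more effective throughput, whatever the database size. The small calculation below assumes a 500 GB backup purely for illustration; the webinar does not state the actual size.

```python
# Worked example with an assumed backup size (the webinar does not state one);
# transfer time = size / throughput, so a 13x time reduction implies roughly
# 13x more effective throughput, whatever the absolute size.
ASSUMED_BACKUP_GB = 500          # hypothetical figure, for illustration only

def effective_throughput_mbps(size_gb: float, hours: float) -> float:
    megabits = size_gb * 8 * 1000            # GB -> megabits (decimal units)
    return megabits / (hours * 3600)

internet_mbps     = effective_throughput_mbps(ASSUMED_BACKUP_GB, 6.5)  # ~171 Mbps
interconnect_mbps = effective_throughput_mbps(ASSUMED_BACKUP_GB, 0.5)  # ~2,222 Mbps
print(f"Internet path:      ~{internet_mbps:.0f} Mbps effective")
print(f"Cloud interconnect: ~{interconnect_mbps:.0f} Mbps effective (13x)")
```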

This can be done for DR backups. This can be done for accelerating cloud transfers in and out of something like AWS Glacier for archiving. This can be done for improving performance to SaaS and operational systems such as ServiceNow, to improve the speed with which your pages load and your systems respond when the cloud is ingrained in the center of your network.

How does this look? How do these three components actually come together? Here is a representative architecture of what this looks like for that construction and architecture firm, CDM Smith. They’ve deployed Viptela SD-WAN devices at each of their office locations, so they enjoy a primary private line to each facility when they want to send sensitive data and high-quality traffic.

What Viptela does is monitor both links. It monitors the secondary link, the broadband internet or DIA, and says, “I’m going to route this application …” Viptela is intelligent in that it has identified a number of cloud applications, and it can tell, if it’s a cloud application, that traffic might ride over the private line into the internet gateway that sits in one of these app hub, carrier-neutral data centers that you see marked with the orange square. Because Viptela and SD-WAN are smart, if there’s an issue on that line or a little bit of packet loss and it sees better performance on the DIA broadband, it’s going to switch the traffic over to the DIA broadband.
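
Here is a hedged sketch of that per-application decision: classify the flow, prefer the private line toward the appropriate gateway, and cut over to DIA broadband when measured loss says the private line is degraded. The application list, gateway labels, and loss threshold are illustrative assumptions, not Viptela’s implementation.

```python
# Illustrative sketch only; app names, gateway labels, and the loss threshold
# are assumptions, not Viptela's actual classification or configuration.
CLOUD_APPS = {"salesforce", "office365", "workday", "servicenow"}
LOSS_THRESHOLD_PCT = 1.0   # assumed cut-over point for this example

def choose_path(app: str, private_line: dict, dia_broadband: dict) -> dict:
    """Each link dict carries live measurements, e.g.
    {"name": "private-1", "loss_pct": 0.2, "latency_ms": 18}."""
    # Cloud-bound flows head for the internet gateway in the app hub;
    # everything else heads for the data-center gateway.
    gateway = "app-hub-internet-gw" if app in CLOUD_APPS else "data-center-gw"
    # Prefer the private line; steer onto DIA broadband if the private
    # line shows loss and broadband is measurably healthier.
    link = private_line
    if (private_line["loss_pct"] > LOSS_THRESHOLD_PCT
            and dia_broadband["loss_pct"] < private_line["loss_pct"]):
        link = dia_broadband
    return {"gateway": gateway, "link": link["name"]}

# Example: a Salesforce flow while the private line is dropping packets.
print(choose_path("salesforce",
                  {"name": "private-1", "loss_pct": 2.5, "latency_ms": 20},
                  {"name": "dia-1", "loss_pct": 0.3, "latency_ms": 28}))
```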

It’s enabling you to get the best performance at all times. But what makes this powerful is that, by interconnecting into the cloud service providers at the core of this wide area network, that performance is already blazing fast, and it is deterministic.

In summary, there are three main components that you should be thinking about in order to cloud-proof your WAN. Cloud interconnects improve performance of the applications that are now no longer in your data center. How do you do that? You take a look at carrier-neutral data centers, or a service provider that helps you identify where those are and where your cloud applications are, so that you can find the best connectivity out there and select options based on cost and on delivery time frames. And SD-WAN manages it all, managing the applications as they overlay on top of this new enterprise application delivery platform.

Lloyd: Okay, thank you so much, Brian. That wraps up our presentation. We have an interesting set of questions that came in during the presentation, so if you have a few more, please put them in the Q&A and we’ll take them one at a time in the next few minutes. If you have more questions after that, we’ll answer them by email. The first question, and I think this is for both Brian and myself: is monitoring of application latency an integral part of your solution, or are third-party products required to do this?

I’ll take the SD-WAN piece of the solution. From an SD-WAN standpoint, wherever we have end-to-end visibility of applications, we are able to give you essentially all of the application characteristics on the SD-WAN. Now, it’s different, of course, when the application itself resides in the cloud, and for that I’ll let Brian answer how an enterprise can monitor application performance with CFN Services. Brian?

Brian: Sure, thanks, Lloyd. Yeah, so at CFN, our mission is that the network, again, supports applications, so we’re chiefly concerned with the applications and how they ride on top of the application delivery platform. The way we tackle this issue is that we deploy telemetry devices at every location where you’re connecting to the CFN app hub platform. Those devices are there so that you can run scripts against each of those applications, whether they’re in the cloud or local on-prem, and monitor latency, packet loss, and jitter for real-time load requests.
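
To give a rough idea of the kind of probe script such a telemetry device might run, here is a minimal, self-contained sketch; the endpoints and thresholds are illustrative and this is not CFN’s actual tooling. It measures TCP connect latency, jitter, and loss toward an application front door.

```python
# Minimal probe sketch with hypothetical endpoints (not CFN's actual tooling).
import socket
import statistics
import time

def probe(host: str, port: int = 443, samples: int = 10, timeout: float = 2.0) -> dict:
    """Measure TCP connect latency, jitter, and loss to one application endpoint."""
    rtts, failures = [], 0
    for _ in range(samples):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.monotonic() - start) * 1000.0)   # ms
        except OSError:
            failures += 1
        time.sleep(0.1)
    return {
        "endpoint": f"{host}:{port}",
        "latency_ms": round(statistics.mean(rtts), 1) if rtts else None,
        "jitter_ms": round(statistics.pstdev(rtts), 1) if len(rtts) > 1 else 0.0,
        "loss_pct": round(100.0 * failures / samples, 1),
    }

if __name__ == "__main__":
    # Illustrative targets only; a real deployment would probe the specific
    # cloud and on-prem applications that matter to the enterprise.
    for app in ("login.salesforce.com", "example-instance.service-now.com"):
        print(probe(app))
```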

Lloyd: Okay, okay. Excellent. There are a couple of questions around application SLAs. How can we guarantee reliability when the internet transport itself is unreliable? There are also questions around BGP flaps, etc., but they all get bucketed into the fact that the internet is unreliable, so how can SD-WAN be reliable, and as a result how can Viptela and CFN offer a reliable service? The answer to that, again from an SD-WAN standpoint, is that we do not make links perform better than what they are. What SD-WAN enables you to do is have complete control over all the parameters out there.

For example, if you are an institution that requires extreme reliability for your applications, like healthcare or banking, then the architectures we’ve seen our customers deploy are one MPLS link with one or two broadband links and an LTE link. That ensures that at any given time there’s no single point of failure. The mechanism by which that happens is that we do link quality characterization almost on a per-second basis. We know the loss, latency, jitter, and round-trip time of each and every link on the network, and in a typical architecture we have full-mesh tunnels built across the full network.

When you set a policy such as “voice should always have less than 100 milliseconds of latency at all times,” we are able to determine at any given moment whether voice is going over a link that meets it, and we steer the voice applications onto a link that has that characteristic. If more than one link meets that characteristic, then we use both of those links. At any given time, if your MPLS, for example, has a brownout or is flaky, we detect that right away and steer the application off that link immediately.
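
A hedged sketch of that control loop, assuming a simplified probe and SLA check: the measure() function below is a stand-in for the real per-link probing, and the link names, thresholds, and timing are illustrative rather than Viptela’s internals.

```python
# Hedged sketch of the per-second steering loop; measure() is a placeholder
# for real link-quality probes, and all names and values are illustrative.
import random
import time

VOICE_SLA = {"max_latency_ms": 100, "max_loss_pct": 1.0}

def measure(link: str) -> dict:
    """Placeholder for real per-link measurements (loss, latency, jitter)."""
    return {"latency_ms": random.uniform(10, 150), "loss_pct": random.uniform(0, 2)}

def links_for_voice(links: list) -> list:
    """Keep voice on every link that currently meets its SLA."""
    compliant = []
    for link in links:
        m = measure(link)
        if (m["latency_ms"] <= VOICE_SLA["max_latency_ms"]
                and m["loss_pct"] <= VOICE_SLA["max_loss_pct"]):
            compliant.append(link)
    # A flaky link simply drops out on the next pass; if nothing complies,
    # fall back to best effort rather than dropping calls.
    return compliant or ["best-effort-fallback"]

if __name__ == "__main__":
    for _ in range(3):                     # a few control-loop iterations
        print("voice pinned to:", links_for_voice(["mpls-1", "broadband-1", "lte-1"]))
        time.sleep(1)                      # roughly per-second re-evaluation
```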

The critical applications are always treated in real time. They are steered on the best path possible based on the SLA that’s defined in the policy. Now, from a cloud standpoint, Brian, do you have examples of how resiliency is handled from a cloud standpoint?

Brian: I do, I do. Let’s take an example where you’re running applications, let’s say, in your AWS infrastructure. You might be running that infrastructure in a single region. How do you ensure resiliency for the application that exists in that infrastructure, and how do you do that for every single site on your network that needs to access it? Because of the architecture of the carrier-neutral data center, and because the app hub is built with this next-generation architecture in mind, we connect into Amazon in every single region around the world.

That’s a direct connection, not an internet connection, so you’re privately connected as if your VPC were just another site on your private network. We’ve plugged you in, and we provide deterministic latency back to every single site. As long as there’s a private line out to your edge, we can say with certainty how fast that connection, that round trip, will operate, giving you stability in the underlay. When Viptela operates on top of it, most of the time it’s going to know that the private line into the CFN app hub platform will operate at a high performance level. If that private line were to be severed, it would use the internet as an additional backup or, more likely, home in on one of our locations nearest to that branch and hop on the private network again. That provides an internet last mile but a private, dedicated, deterministic middle mile, achieving the resiliency and doing it at a stronger performance level.
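
One way to read that preference order, as a minimal sketch (the path names are illustrative, not CFN’s routing configuration): private middle mile first, then the nearest app-hub on-ramp back onto the private network, then the internet as a last resort.

```python
# Hypothetical sketch of the failover preference just described;
# path names are illustrative, not CFN's actual routing configuration.
PATH_PREFERENCE = [
    "private-line-to-app-hub",    # deterministic middle mile into the cloud region
    "nearest-app-hub-on-ramp",    # hop back onto the private network locally
    "internet-backup",            # last resort, best effort
]

def select_path(health: dict) -> str:
    """health maps each path name to True (usable) or False (down/degraded)."""
    for path in PATH_PREFERENCE:
        if health.get(path, False):
            return path
    raise RuntimeError("no usable path to the cloud region")

# Example: the private line is severed, so traffic homes onto the nearest on-ramp.
print(select_path({"private-line-to-app-hub": False,
                   "nearest-app-hub-on-ramp": True,
                   "internet-backup": True}))
```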

Lloyd: Thank you. Thank you, Brian. The next question is, again, this is to you, Brian, “Where can I find a list of the 400 applications that Viptela and CFN jointly optimize?”

Brian: Excellent. Yeah. We are tracking that list of 400. I believe we’ve got 30 or so connected, and there is a link on our site. I can make sure to send that out after this call, but if you have any questions as to where specifically your most important cloud applications reside and how you can get connected to them to improve performance, feel free to reach out to me. Oh, it looks like the slide is already up there: info@cfnservices.com. I’ll make sure that it gets forwarded to me, and I can help you map out where exactly that might be and then get to a list closer to that 400-application roadmap.

I don’t know where yours is, Lloyd, the applications you’ve predefined in Viptela. Do you have a list of those?

Lloyd: Yeah. So, if the question was specifically around cloud applications, then I would say that’s where the CFN list comes in. In our case, we can identify and optimize around 3,000 applications using DPI signatures, but then we need to get into the semantics of which of those applications are in the cloud, which are hosted within the enterprise, etc.

That’s a longer conversation, but the short answer there is 3,000 applications. Again, those applications are listed in our documentation, and I’m happy to share that. The next question is, “Does Viptela support [QM] devices at a location to prevent hybrid failure?” The answer is yes, and in fact, not only that, our entire architecture is based first on redundancy at every level. From the link level to the device level to the controller level and to the entire architecture level, there’s redundancy built in at every single stage, and every single failure scenario is addressed almost instantly, within a couple of seconds, in our architectures.

In the case of the bank example again, 3,000 sites, you will see sites using broadband, MPLS, and LTE; in retail, same situation, built only on broadband. The joint example that we shared with you with CFN is a global enterprise; a lot of sites in China, and a lot of sites in the US and Europe, are using MPLS and broadband, and many, many sites use LTE. So the answer to that is yes.

The next question is, oh, just give me a second: “Do you support WAN optimization as part of SD-WAN?” The answer to that is yes. Again, our large deployments are with the larger WAN optimization players like Riverbed, so we have a lot of production SD-WAN deployments alongside Riverbed. Those are mostly the questions. We have a few more coming in, but we are just about out of time.

Let me take one more question here. “Do you use SD-WAN in a private cloud?” Again, the answer is yes. SD-WAN inherently is a private enterprise WAN, so it absolutely meets the requirements of a private cloud. In fact, the objective of this conversation with CFN Services was to show that the same benefits you get from a private enterprise network and a private cloud are being extended to the public cloud with the CFN Services solution. So yes, the answer is private cloud, definitely, and now we’re getting the benefit of public cloud optimization, too.

With that, I want to say we’ve enjoyed this presentation. The questions were great, the recording will be available very soon. Maybe by tomorrow. We will share the recording with everyone who registered for the webinar. Brian, any parting words from your side?

Brian: I’d just like to thank the audience for hopping on this afternoon, this morning, this evening, wherever they may be. Of course, thanks to you guys for putting this together as well. I really enjoyed it. Hope people learned something, again, any questions feel free to reach out to me at info@cfnservices.com and I’ll be sure to help you out.

Lloyd: Thank you.
