Viptela is now part of Cisco.

Live Demo: Top Deployed SD-WAN Use Cases (Part One)

This live demo will cover the major use cases of SD-WAN that are applicable to both large and small enterprise scenarios. These include:

  • Brownfield deployment: Inserting SD-WAN into an existing MPLS WAN (i.e. creating an overlay over MPLS and Broadband)
  • Enabling segmentation, service insertion and extranet policies for a business partner network
  • Extending the WAN to the Cloud (AWS/Azure)
  • Enabling on-prem security with Palo Alto Networks (service chaining)
  • Defining Application-Aware policies for guaranteed SLA (real-time traffic steering with QoS)
  • Enabling cloud-security capabilities with Zscaler (service chaining)
  • Defining policies for predictable access to Office 365 and other SaaS applications

The foundational technology elements of SD-WAN that will be covered include:

  • Single overlay over MPLS, Broadband and LTE
  • Overlay routing with OSPF, BGP, VRRP and IGMP
  • Deep packet inspection (DPI) policies
  • Authentication, encryption, PKI
  • Zero-touch provisioning
  • Segmentation and per-segment topologies
  • Service insertion and service chaining
  • Virtualized elements for Cloud (IaaS+PaaS)
  • Cloud-security and on-prem security integration
  • Integrated management and third-party management with SNMP, NETCONF, and RESTful interfaces
  • Internet gateways (DIA, Regional Exit, Centralized exits)

Download Slides Here

Presenter

David Klebanov, Director of Technical Marketing, Viptela

Senior infrastructure technologies professional with over 15 years of extensive experience in designing and deploying complex multidisciplinary networking environments.

Transcript

Lloyd: Hello. Thank you for joining this FutureWAN session, the live demo on software-defined WAN. We have David Klebanov today, who will be running the full session. Before we get started, a couple of housekeeping announcements. If you click on the attachments and links section of the presentation, you're going to find the entire presentation for this demo. You're also going to find a couple of good resources for download, especially educational resources related to case studies and feature comparisons.

In addition to that we have a peer insight survey. All the end users participating in the FutureWAN summit are essentially participating in this survey, and the results are available to everybody who participates. So if you click on it and provide your answers, you'll get a copy of the results and you'll be able to see how the others answered those questions as well.

Without much delay, I would like to turn it over to David Klebanov. Throughout this session, if you have any questions, ask them in the Q&A window. We have a packed session today, so we're hoping to get to some good Q&A, but in case we run out of time for any reason due to the demo, we will follow up with a subsequent webinar to complete the remaining use cases.

So David, take it away.

David: Right, thank you very much Lloyd for the introduction. Good morning, good afternoon, good evening everybody. Today we're going to have a live demonstration of the Viptela SD-WAN platform. We're going to walk through a couple of use cases that are deployed at one of our customers. I've taken those, and we are going to talk about them in the next 45 minutes, so let's get started.

So before we jump into individual use cases, I wanted to walk you through the customer journey, as I mentioned, of a customer of ours, and how they went about implementing the functionality of the SD-WAN platform feature by feature. The first thing the customer is after is the migration from a pure MPLS network to a hybrid network, which makes use of MPLS and broadband internet at the same time.

Next we want to look at the Brownfield Environment Integration, because when you are migrating from a traditional network environment to an SD-WAN environment you're always going to be in a situation where you still have some sites that have not yet migrated. So we're going to walk through how the brownfield environment is integrated into the SD-WAN fabric. Then we're going to look at providing regionalization of services, and as you can see on the left-hand side, you have hub one and hub two. These are the regional hub locations with services deployed in them.

So the first service that we're going to utilize in those locations is actually the internet exit. We're going to provide regional internet access for the customer from the SD-WAN site, so instead of just backhauling everything to the datacenter they can use the regional internet exit points to access internet resources.

The next set of resources is the Palo Alto Firewall and the Snort IDS. These are deployed in the hub one and hub two locations. So we're going to do service insertion and service chaining through the Palo Alto Firewall and the Snort IDS, hopping between the hub one location and the hub two location, to provide security services enforcement for all the traffic that goes from the SD-WAN site into the datacenter.

Then an introduction of cloud security services through Zscaler to offload traffic that goes to the internet directly, with direct internet access at the remote sites. Then the connectivity into the hybrid cloud through deployment of the Viptela vEdge Cloud in an AWS VPC environment. And finally, the connectivity to an external party such as a business partner, to allow secure, inspected access through the Palo Alto Firewall that is inserted at hub one into the business partner web portal in the datacenter. So that is going to be the outline of our presentation today, and again, this is taken from an actual customer deployment, so let's get right into it.

Let's address the MPLS to hybrid migration. The focus of our discussion here, as you can see, is that the SD-WAN site has been provisioned, and now we want to add an internet circuit, a broadband circuit, into a site that is otherwise operating purely on the MPLS private network from the service provider.

Viptela vEdge routers are deployed at the site, which is connected to the MPLS network. Viptela vEdge devices have full support for a range of dynamic routing protocols on both LAN and WAN interfaces. They can be integrated behind an existing MPLS CE router. They can also be a replacement for an MPLS CE router. Now what we're talking about is connecting a broadband circuit into the Viptela vEdge device. There is no need for you to maintain any external firewalls between the Viptela vEdge devices and the broadband connection. Viptela vEdge devices come with embedded zone-based security. They run on a hardened platform to provide zone-based segmentation, control plane protection, and certificate-based authentication to join the SD-WAN fabric. So the deployment in this case requires neither an MPLS CE device to connect to the MPLS network, nor an external firewall to connect to the internet. Let's see how it's actually executed.

So in here you see the Viptela vManage, which is the management graphical user interface, the single pane of glass for the entire Viptela SD-WAN solution. Let me go to Monitor, and let me go to Network. Here you can see all of the devices that are deployed in the SD-WAN solution. Of course, this is a small-scale deployment compared to the actual customer deployment, just so we can test and show you the functionality.

So here I have the remote site device, which is the vEdge device that is deployed at the remote site. If I click on that and drill into the properties of that vEdge device, and I look at the control connections, I can see that the device is building the SD-WAN fabric over the MPLS circuit. These are the control connections that are established with the management system, which is the vManage, and with the controllers, which form a scale-out controller architecture. These are the vSmart controllers; there are two vSmart controllers in the system.

If I go to the real time view and I look at the actual data connections that are established from this vEdge device, I can see that there are three channels that get established through the MPLS. So you can see both data and control plane connections are established over the MPLS network.

Now, what I am going to do is enable a broadband connection. The broadband connection has already been provisioned, so let's consider that the service provider has already dropped off a broadband circuit. The vEdge device has also been configured, and there is a configuration template. You can see this is the configuration template for the device, which is applied to the remote site. If I go and view the settings of that configuration template, I can see, under VPN 0, the interface Gigabit 0/1 that has been provisioned.

So if I go now into the features and I look at interface Gigabit 0/1 and I edit that interface, I can see that the interface has been put into shutdown mode. So let's just un-shut this interface and update the configuration. vManage is now going to perform a configuration push from the management system into the vEdge device. Let's give it a second while the vManage pushes the configuration to the vEdge. What it's going to do is basically un-shut the interface on the vEdge device, and that's going to bring up the interface, which is going to establish control and data plane connectivity from the remote site to the rest of the environment through both MPLS and broadband.
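For reference, the template change above corresponds roughly to the following vEdge CLI, shown here as a minimal sketch: the interface name ge0/1, the DHCP addressing, and the biz-internet color are illustrative assumptions rather than the exact demo configuration, and the notes after the exclamation marks are annotations, not configuration.

    vpn 0
     interface ge0/1
      ip dhcp-client                  ! broadband addressing; could equally be static
      tunnel-interface
       encapsulation ipsec
       color biz-internet             ! tags this transport as the broadband/internet circuit
      no shutdown                     ! the single change pushed from the vManage template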

So you see the configuration has been applied. Let's go back into Monitor, Network, choose the remote site again, and look at the control connections. It may take a few seconds for the control connections to come up over the broadband. You can see now the control connections have come up over the broadband, so we have control connections going over the MPLS network as well as the broadband. If we go into the real time view, and I want to see what data plane connections are being established from the remote site device, I can see that it has now established connections through both the MPLS and the internet. So the site that had been operating as an MPLS-only site has now been converted into a hybrid WAN site that utilizes both MPLS and business internet circuits.

So now I can take the subsequent step and start deploying hybrid services on it. For example, the encryption extends seamlessly across both MPLS and internet; segmentation extends across both; the QoS characteristics extend across both. So all of the services that were available for an MPLS-only connection are now available through both the MPLS and business internet, and this site has been fully migrated to a hybrid site.

All right, go back to the dashboard. Let's move on. Let's go back to the slides and look at the next use case. Well, that kind of took us back a few steps, but let's advance to the Brownfield Integration. What we're talking about here is an existing site. In this case we're going to use the Cisco CSR 1000v platform, Cisco's virtual router, as something that represents a legacy site for us. That site is connected to an MPLS network only. Now we have a site that is on the SD-WAN network, and we would like to maintain or gain connectivity between the SD-WAN site that is operating in the overlay network and the traditional site that has not yet migrated to the SD-WAN environment.

The way that can be accommodated, as you can see here, is that you have the hybrid network, which includes the remote site, the datacenter, and whatever facilities have been migrated into the SD-WAN environment. Those can seamlessly operate across the hybrid network. At the same time you have the environment that still consists of the existing MPLS network, which in this case let's call the MPLS underlay. These are the non-migrated sites, or the services that are provided by your service provider, such as security or unified communications services, that are still living in the underlay space.

So the communication over the overlay occurs through the Viptela SD-WAN network. The question is, how is that environment connected to the underlay? There are two ways it could be connected. First, it could be provisioned from the datacenters or from the hub locations, where we can have BGP peering and BGP routing with the underlay, which would be the service provider, and the traffic that needs to pass between the remote site that has migrated and the site that has not migrated will go through the datacenter. The second option is to provision BGP routing and BGP peering directly from the remote site, so the traffic does not have to go through the datacenter locations.

In this case what we're going to show is the centralization, or regionalization, of the services through the hub location, which is going to allow connectivity into the underlay. We have discussed and shown a demo of how the connectivity from overlay to underlay can be provisioned at every single remote site during the recent Network Field Day 13 demonstration, so you're more than welcome to visit techfieldday.com and see that demonstration, but here we are going to focus on regionalization of services.

So here we have an environment that includes two sites, a remote site and a datacenter, or regional hub. The overlay has been established, so you can see the connectivity between the 192.168.1.0 and 192.168.4.0 subnets. That has already been advertised through the Viptela Overlay Management Protocol (OMP), which is the control plane protocol that runs between the vEdge appliances and the vSmart controllers.

Now, there is a traditional site which has the 192.168.3.0 network, and that has not yet been advertised into the overlay environment. So what we do is enable BGP on the datacenter, or regional hub, site between the vEdge router, acting as the MPLS CE device (again, there's no requirement to keep an MPLS CE device at that site), and the MPLS PE device. That BGP peering allows the datacenter or regional hub device to learn about the 192.168.3.0/24 subnet from the local MPLS PE device through the underlay.

What does the regional hub vEdge device do with that? It advertises it into OMP, and that gets advertised through the vSmart controllers to the remote site vEdges. Now the remote site vEdges have connectivity into the underlay environment. As you can see on the left-hand side, the vEdge at the remote site now has both the datacenter subnet itself and the subnet for the legacy site advertised, and it can now send traffic between the overlay site and the underlay sites.
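As a rough sketch of what that looks like on the regional hub vEdge, the underlay peering and the mutual exchange between BGP and OMP could be configured along these lines. The VPN number, AS numbers, and neighbor address are illustrative, and the exact redistribution options should be verified against the Viptela configuration guide.

    vpn 1
     router
      bgp 65001                        ! local AS on the hub vEdge (illustrative)
       neighbor 10.1.4.11
        no shutdown
        remote-as 65010                ! MPLS provider PE (illustrative address and AS)
       address-family ipv4-unicast
        redistribute omp               ! overlay routes handed down into the underlay
     omp
      advertise bgp                    ! BGP-learned underlay routes (e.g. 192.168.3.0/24) handed up into OMP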

Lloyd: David, just while you are getting to the next section, there are a few questions which I will quickly answer. One question is, are the vEdge devices placed on both sides of the MPLS circuit, or only at the edge site? The answer to that is, essentially, consider this as your branch router that replaces your existing CE device, which is either talking to the internet or the MPLS, and it becomes a single device that supports all branch routing functionality as well as WAN-facing functionality.

The next question is, does Viptela provide local internet breakout, like split tunnel, at the branch location? The answer is yes. We support split tunnel and a few other internet breakout options, which David will show you in the demo.

Our next question is, what security accreditation does the vEdge have? The answer is we've been widely deployed now in thousands of site locations in retail, banking, healthcare, and a whole bunch of verticals. So we are essentially certified in each of those verticals against the strictest security criteria. So it's PCI, HIPAA, and, you know, federal-rated certification, so the entire range for industry from a security standpoint.

Next, does the … I think those have been covered. David, you can take over.

David: Okay, sure, yeah. So let's get right into this. The first thing is, we said that a BGP session is established between the vEdge device at the datacenter, or regional hub, location and the MPLS service provider. So let's just quickly take a look at this. If I go into Monitor and then Network, in here I have the datacenter location. If I go into the datacenter and look at the interfaces, I see this interface, which is the interface into the MPLS environment. It has the 10.14 network, a /24, so remember that one, 10.14.

Now, what I can do is go into the real time view and look at the BGP neighbors that got established. This is the 10.14 subnet that we looked at before. You can see that I'm establishing a peering relationship with my service provider. Of course, I'm using private AS numbers here because it's a demonstration environment, but in your production environment you would be using your production autonomous system number and your service provider's. So we are establishing a BGP connection with the service provider.

I can also go and inspect the BGP routing table on the datacenter side. If I search for 192.168.3.0, which is the legacy site that has not yet migrated, I can see that I'm receiving this advertisement and it comes from the [unintelligible 00:18:39] of 10.14.11, which is my service provider. So I know the datacenter has learned connectivity into the underlay site.

Now, let's see how that information gets propagated through the overlay into the remote site, which is an overlay-only site, right? Let me switch from the datacenter device to the remote site device, and let me look at the IP routing table on the remote site device. Let me search again for the 192.168.3.0 subnet. As you can see, this is being advertised, so this tells me that it's available at the remote site. It's now being advertised through the OMP protocol, not the BGP protocol, so now this has become an overlay subnet. And it's been advertised from a site, or from the device, with an identifier of 10.10.10.14.
If I go in here and just quickly glance, you can see the datacenter has the 10.10.10.14 identifier.

So we have achieved what we planned to: the underlay subnet that is available from the site that has not been migrated, and is strictly on the MPLS network, is now advertised through the datacenter into the overlay control protocol and on to the remote site. Now I can also go and see what happens at the Cisco CSR. If I go into the legacy device, open a console connection to it, and do a show ip bgp summary, I can see that I have a BGP session.

This is the BGP session at the legacy site, between the Cisco device and the MPLS PE device. If I do a show ip route and look at the BGP routes, I can see 192.168.1, which is the subnet for the remote site, and 192.168.4, which is the subnet for the datacenter. Both of those are available through BGP from the service provider. So you can see now that I have bi-directionally allowed that information to be distributed. The overlay site learned about the underlay through BGP and OMP, and the underlay site learned about the overlay through BGP.
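On the legacy CSR side these are the standard IOS verification commands; nothing Viptela-specific is required on the non-migrated router. The expected prefixes in the annotations come from the demo topology described above.

    ! On the legacy Cisco CSR 1000v at the non-migrated site
    show ip bgp summary        ! the eBGP session to the MPLS PE should be Established
    show ip route bgp          ! expect 192.168.1.0/24 (remote site) and 192.168.4.0/24 (datacenter)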

And the last thing that is really left is to perform a connectivity test. I'm at the remote site, so what I'm going to do is perform a trace route from the remote site to the host that is connected at the legacy location, the location that has not yet been migrated. And I want to perform this trace route in VPN 1, which is where my users are. If I let the trace route run, I can see that there is connectivity from the remote site all the way through to the traditional, non-migrated site. So I have full end-to-end connectivity between the overlay and underlay sites.

All right, let’s move on to the next case.

Lloyd: David just-

David: Any questions?

Lloyd: Yeah, just before that, a couple of quick questions. Does Viptela routing prefer OMP routes or directly learned BGP routes from [the feed]?

David: So the OMP routes are less preferred than the routes that are advertised through BGP. We want to make sure that if information is available through traditional underlay means, it's taken first, before the overlay kicks in. Now, that question is a little bit complicated because it really depends on the deployment scenario. Viptela comes with very comprehensive support for routing protocol redistribution, tagging, mapping, anything that you can imagine from a mature routing solution, on both LAN-side and WAN-side interfaces.

So as Lloyd mentioned earlier, the vEdge router is really a fully featured routing device as far as the underlay is concerned. And of course, it has the slew of SD-WAN features for the overlay routing piece of it. So if there's a specific question about how the interoperability between OMP and BGP is achieved, that's something that we can definitely take offline and discuss. But we have environments where customers have done a pretty simple no-routing-protocol-at-all deployment, all the way to highly complicated routing deployments with loop prevention and mitigation and, as I mentioned, routing protocol redistribution, tagging, and things of that sort.

Lloyd: Yeah, that's it. And the second question is, what kind of integration do you have with Zscaler? The answer to that is we essentially integrate from a policy standpoint, as well as [unintelligible 00:24:03] tunnels that are built directly from the Viptela vEdge routers into the cloud. Okay, David.

David: All right, thank you very much. Let's move on to the next one, the regional internet exit. I'm waiting for the slide to advance. All right, so as we mentioned, we have the hub locations where we are hosting our services. Those services are not necessarily just Layer 4 to Layer 7 services, such as the security services we're going to talk about next, but also services such as internet access. Again, what we're talking about is that the hub one location, which has a Palo Alto Firewall, has also been provisioned to host connectivity into the internet.

So let's step into this. What does the regionalization of internet exit services really give you? Of course, you can have centralized internet services that are provided from the datacenter, where you have to backhaul the traffic into the datacenter and that's how it exits, which is the traditional way of doing things up until now. You can have direct internet access, which is exactly what Lloyd mentioned, through a direct breakout from the remote site. That brings a security consideration of either using cloud security services such as Zscaler, or having on-prem security services such as Palo Alto Networks, Check Point, Fortinet, whatever the on-prem security service is.

And that may or may not work well with an organization's security policies.

So what we're talking about here is having a balanced approach between backhauling everything into the datacenter and breaking out everything straight from the remote offices. We're talking about deploying internet access services in a regional manner, where you have security inspection points, such as firewalls, provisioned at the regional hub locations, and then steering the traffic from sites that are geographically close to those locations through that regional hub, letting it be inspected by the firewall, and then letting it go on to the internet.

So it provides a balanced security approach. Obviously there's geographical distribution, because, let's say, a West Coast site doesn't have to backhaul to a datacenter that is somewhere in the Midwest or on the East Coast. Everything is geographically regionalized to provide you the best performance when you're accessing internet resources. And, as I mentioned, no datacenter backhaul, which is the major issue today with accessing cloud applications and, just generally, internet resources, because of the increased latency.

So let's step into this and see how it's actually executed. Again, I will go back into the vManage. Let me go into Monitor, then Network, then the remote site device. Let me look at the routing information on that remote site device. I can see that I have some default routes in there, right? Let me also look at the OMP information that was received on that device, so I know that I'm getting a default route from the network, right?

So now, let me quickly go into the Palo Alto Firewall that has been provisioned at the regional hub location. Let me login to that.

Lloyd: Right, so while you're doing that David, one more question here. Consolidating MPLS and backup into a single vEdge, how is that handled from a single-point-of-failure perspective? Again, you know, there is one more resource for you to download on our 3,000-site bank deployment. Of those 3,000 sites, each site had [unintelligible 00:28:50] vEdge routers, and each router is connected to multiple different kinds of links. So at any given time no single failure could essentially affect the applications. In fact, a device failure, a network failure, or any form of link failure will not bring down any of the applications in the network. I hope that answers your question. Go ahead.

David: Right, yes. Thank you Lloyd for clarifying. We have very extensive capabilities for deploying [unintelligible 00:29:21] at the head end; it's actually more than just two devices. It starts with at least two devices for high availability, and head ends will often have more than that.

So back to the Palo Alto. The Palo Alto has an empty log; I emptied the log on purpose so you can see there is really nothing in it. What I'm going to do now is go back to vManage and go into the policies. Actually, before I do that, let me do one quick thing. This screen here is actually a client that is positioned on the LAN side of the remote site vEdge. This is your client that is sitting in the remote site.

So what I'm going to do is open a terminal window here, and before I make any changes to how the traffic exits to the internet, I'm actually going to start just a regular ping to the DNS servers. While the ping is running, let me go back into the vManage and navigate into Configuration and Policies. There is a policy that we have prepared ahead of time that changes the priorities of the advertised default routes. What it does is assign a higher priority to the default routes that are being advertised from the regional hub location, versus the ones that are advertised from the datacenter locations. So we're not really removing the datacenter default routes. Those are still in place in case something happens at the regional location and you want to fall back to the centralized internet access in the datacenters.

But we are providing that preferred access for the sites of interest. In this case, this is the remote site in our topology, and we're providing it the ability to exit through the regional internet exit point rather than going all the way to the central datacenters. So as you can see, the policy has been pushed, right?
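The policy activated here is a centralized control policy applied on the vSmart controllers. The demo policy was built in the vManage GUI, but the general shape in CLI terms would be something like the sketch below, where the list names, site IDs, and preference value are all illustrative assumptions.

    policy
     lists
      site-list REGIONAL-HUBS
       site-id 100
      site-list REMOTE-SITES
       site-id 10-50
      prefix-list DEFAULT-ROUTE
       ip-prefix 0.0.0.0/0
     control-policy PREFER-REGIONAL-EXIT
      sequence 10
       match route
        prefix-list DEFAULT-ROUTE
        site-list REGIONAL-HUBS
       action accept
        set
         preference 200                ! default route from the regional hub wins over the datacenter one
      default-action accept
    apply-policy
     site-list REMOTE-SITES
      control-policy PREFER-REGIONAL-EXIT out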

So if I navigate back to the Palo Alto and refresh the monitor screen, you can already see that there is traffic starting to hit the Palo Alto Firewall at the regional location. This is the internet traffic that, instead of being backhauled all the way to your centralized datacenter, is now being sent through the regional hub location, based on the geography of where your site is located. At that location it gets inspected by the Palo Alto Firewall for security policy enforcement, and eventually goes on to the internet.

If I go back to my desktop, which has been running the ping to the Google servers, you can see the ping still continues. If I stop the ping, there's an interesting observation here: you can see at the bottom, zero packet loss. What that means is that we have switched from one internet exit point to another, from backhauling everything into your datacenter to regionalized internet access through a geographically close location, and that switchover occurred with zero packet loss. We sent 121 ICMP packets, and we received 121 ICMP packets, zero packet loss while rerouting that traffic. We like to use the term enterprise-grade, actually, because that is what our enterprise customers are expecting from an SD-WAN solution.

Let's go back to our slides and move on to the next use case, the regional security perimeter. What we are addressing here is two more services that we want to introduce at our regional locations. In this case it's the Palo Alto Firewall and the Snort IDS. In addition to the internet services that we had in the previous use case, now we want to insert actual Layer 4 to Layer 7 services.

So what are we trying to achieve? We are trying to create a regional secure perimeter around our compute resources. Should a security incident happen in one of the locations, such as a user bringing in an infected machine, a virus outbreak, some sort of malware outbreak, or a denial of service attack, anything that is deemed by the security team to be a security incident, what we want is the ability to mitigate that event without any re-engineering work, without deploying firewalls or IDSs at the remote site.

So what we can leverage with the Viptela SD-WAN is the service insertion framework: the ability to take the traffic of interest, not all the traffic but the traffic that you're really interested in inspecting, and steer that traffic from the remote location to the regional hub location, and let it be inspected by the Palo Alto Firewall and the IDS appliance, in this case a Snort IDS appliance, or any Layer 4 to Layer 7 appliance of your choosing. If that traffic is allowed, then the traffic will continue on to its destination in the datacenter, either an on-prem or a cloud datacenter. And if it's not allowed, the traffic is going to be dropped.

This is mitigation of a security incident, and what we're going to do in our case is have traffic that is allowed between the SD-WAN site and the datacenter. Then the regional services are deployed, and the traffic of interest, which is the traffic that you suspect may be coming from the security incident, will be steered. That traffic is going to be chained, not just inserted, but chained through the two services, the Palo Alto Firewall and the Snort IDS, in a regional manner; again, regionalization of services to make sure that you are contacting a regional hub which is closest to you geographically. No backhauling is required into your main datacenter.
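In CLI terms, steering the traffic of interest toward an advertised service is again a centralized control policy on the vSmart controllers, where the action sets the service the route should resolve through. The sketch below shows only the first leg of the chain (branch traffic steered through the firewall service advertised from hub one); the second leg toward the IDS at hub two repeats the same construct. The service name, VPN, prefixes, and site IDs are illustrative assumptions, and the policy in the demo itself was built through the vManage UI.

    policy
     lists
      site-list BRANCH-SITES
       site-id 10
      prefix-list DC-SERVERS
       ip-prefix 192.168.4.0/24
     control-policy STEER-VIA-FIREWALL
      sequence 10
       match route
        prefix-list DC-SERVERS
       action accept
        set
         service netsvc1 vpn 1         ! resolve via the firewall service advertised from hub one
      default-action accept
    apply-policy
     site-list BRANCH-SITES
      control-policy STEER-VIA-FIREWALL out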

All right, let's go into the vManage …

Lloyd: David, I just want to interject.

David: Sure.

Lloyd: I’m seeing a few Tweets coming in on, with #FutureWAN. So I’m encouraging everybody on this session to Tweet some screen grabs or any comments that you see with #FutureWAN and every single day we’re giving out an Amazon Echo for either the most active Tweeter or the best Tweet. So please, please continue doing that. Thank you.

David: Sure, yeah, thank you. A small cleanup: I'm actually right now deactivating the policy that we had for the previous use case so we can apply a different policy in this case. Now before we do the policy, let's just inspect how the system views those inserted services, right? If I go into Monitor and then Network, as you can see here, the two regional hubs are in here. If I go into regional hub one and ask what services are advertised from this location, as you can see I have net service one and net service two. These are the two services.

The reason there are a net service one and a net service two is that some devices, for example firewalls, identify trust and untrust zones. So we want to make sure that the security services we are providing are in tune with how the security appliances expect the traffic to arrive, through trusted and untrusted zones. We want to make sure that we maintain traffic symmetry and do not break the stateful behavior of those devices. So you can see that net service one and net service two are advertised from the regional hub one location, which is exactly where the firewall is. And if I go to regional hub two and look at the services in there, I see services three and four, which is exactly where my IDS appliances are, right?
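Those service labels appear in vManage because the hub vEdges explicitly advertise them into OMP. As a small illustration, on the hub one vEdge that advertisement might look like the sketch below, where the service VPN and the firewall interface addresses are assumptions, not the demo values.

    vpn 1
     service netsvc1 address 192.168.10.1    ! firewall trust-side leg (illustrative address)
     service netsvc2 address 192.168.10.2    ! firewall untrust-side leg (illustrative address)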

So what I'm going to do now is go back to my desktop and start simulating a denial of service attack. It's a pretty simple denial of service attack: basically just a rapid ping that goes from the machine, which acts like an infected machine at the remote site, toward 192.168.4.10, which is in your datacenter subnet. So I'm basically trying to do a denial of service attack from the remote site against the datacenter, right? And as you can see, the attack is now happening, right?

So what I'm going to do is go back into the vManage, into Configuration and Policies. We have a policy that we created ahead of time that is going to perform service chaining through both services. One service is in the regional hub one location and the other service is in the regional hub two location. So we're going to force the traffic from the remote site, instead of going straight to the datacenter, to be steered first to regional hub one and be inspected by the firewall.

Then it is received back from the firewall, because the way that we configured it, that firewall allows the traffic. We want to make sure it hits the second service, and if the firewall were blocking this traffic it would never hit the second service. So it's going to be allowed by the firewall and then continue on to the second regional hub location to get inspected by the Snort IDS. So now the denial of service attack is going to be identified, the traffic is going to be dropped, and the [unintelligible 00:39:34] is going to stop.

So we see that the policy has been applied. If I go into the Palo Alto VM and refresh the monitor screen, you can see the traffic from site one to the datacenter, which is really the ICMP traffic being sourced from site one, gets allowed. It goes from 192.168.1 to 192.168.4. This traffic is allowed because, again, the Palo Alto Firewall is not blocking it. So we have passed the first service.

Now let's go and look at the Snort IDS. In our case the Snort IDS has been integrated into the [PS10] security appliance. If I go into the services, then into Snort, and then into alerts, you can see that Snort has identified an ICMP attack in progress between this IP address and the other IP address and, in fact, it has put this IP address on the block list, sort of quarantining that device. So if I go back to my host, as you can see, the ICMP flood has stopped. I have mitigated my denial of service attack against the datacenter by chaining the traffic through a Palo Alto Firewall that did not block the attack; the attack was instead blocked by the IDS appliance, right?

And just to wrap up this use case, as I mentioned at the beginning, we are in fact steering only the traffic of interest. So even though the ICMP traffic was blocked and the denial of service attack was mitigated, I can still go and browse to the datacenter, the same IP address, 192.168.4.10. I can still browse to the datacenter device because this traffic is not steered toward the Palo Alto Firewall or the Snort IDS device, so that traffic is still flowing. So you can see it's a very selective way for you to do Layer 4 to Layer 7 service insertion with regionalization of those services.

So there are a few minutes left. Any questions that we want to take from the attendees and I-

Lloyd: Yeah, please send your questions over. There's one that just came in, and that is, essentially, in your deployments does voice and video run over internet links? The answer to that is yes. In fact, most of our largest deployments are running voice on broadband, and the intelligent routing capability ensures that voice quality is, in fact, many times better than what it is on MPLS. To give you an extreme example, one of the largest backing systems for 911 calls runs on the same kind of Viptela network. So in those cases even critical voice calls across international boundaries are going over broadband. So essentially, the answer to that is yes.

Any more questions?

David: All right, I think we have a few minutes left, so we're going to wrap up the demonstration at this point. Please look out for our subsequent webinars during the month of February, where we'll really talk about the cloud security that we mentioned with Zscaler, talk about hybrid cloud adoption through AWS and Azure, and also talk about external connectivity to business partners, mergers and acquisitions, and suppliers. So, anything that goes outside of your organization, and how you can utilize segmentation, network [unintelligible 00:43:40], extranet services, and service insertion; quite a lot of things go into the extranet connectivity.

So we will address all of those in the subsequent webinars. I hope you enjoyed the demo.

Lloyd: Yes, and please go on the FutureWAN website, where you can see all the [V17] sessions any time that you want, on demand. At this stage we are at the halfway mark, but, you know, all the sessions are available on demand.

Also, please participate in the survey. It would be great to get your insights, and you know you get access to the survey results. Thank you so much.

David: Thank you so much.
