Carrier Networking | Network Function Virtualization (NFV)
12 minute read

Can Cisco's ENCS Platform really be used for Network Function Virtualization in a Multi-Vendor scenario?

If you type "Cisco ENCS Platform" into a Google search today, you will find all kinds of documentation from Cisco going back to 2017, when the Cisco Enterprise Network Compute System (ENCS) platform was launched into its portfolio.

Summary

Fast forward to 2020: we at World Wide Technology wanted to understand whether the Cisco ENCS platform can really be used for Network Function Virtualization (NFV), specifically in a multi-vendor environment.  Our testing was initiated by a Proof of Concept in our Advanced Technology Center (ATC) with one of our customers, who wanted to validate whether they could use the Cisco ENCS platform as a universal CPE hosting multi-vendor Virtual Network Functions (VNFs) in a service-chaining scenario.  This testing could help them decide whether to introduce the solution into their managed services portfolio.


ATC Insight

We were able to prove that Cisco's ENCS platform can in fact be used as a universal CPE (uCPE) to provision Virtual Network Functions (VNFs) in a multi-vendor scenario.  Cisco SD-WAN and Palo Alto firewalls can in fact work together, from a features and functionality perspective, on the code versions we tested.
 

Our testing included several Data Center and Branch scenarios, with single and high-availability environments that we physically and logically built for testing.  The VNFs used within the Cisco ENCS platform covered SD-WAN and firewall roles.

 

The VNFs used were:

  • Cisco Viptela vEdge VNF
  • Cisco ISRv SD-WAN VNF
  • Palo Alto VM-300 VNF

 

With SD-WAN, the Cisco Viptela vEdge VNF and Cisco ISRv SD-WAN VNF were tested separately, as they cannot co-exist in a Cisco SD-WAN environment.  The best practice is to pick one or the other of these VNFs for your SD-WAN needs. For firewall security, Palo Alto's VM-300 was used as the VNF in the testing scenarios performed.

We ran a basic IMIX traffic profile via our IXIA traffic toolset to validate that we could in fact pass traffic properly in the Data Center and Branch scenarios.  If you want deeper details on the IXIA traffic profiles, click into the Test Tools section in the menu on the left.

We also tested high availability failover within the uCPE Branch scenarios.  From a features and functionality perspective, the ENCS and the VNFs passed our tests and worked together at a functional level.  We did not encounter any bugs in the Cisco and Palo Alto solutions that would hinder their features and functions from working together in a service-chaining scenario. There are, however, specific caveats and design considerations that must be taken into account when deploying in this scenario.  Please read further details in the Test Plan/Test Case section in the menu on the left.

Next Steps

As of the writing of this ATC Insight (11-7-2019), the next step, Phase 2 of our testing, is currently in flight. This time we will be doing performance-based testing of the uCPE and VNF solution.  We hope to provide you, the reader (and our customer), with testing results in a more "like-for-like", production-level scenario.  Stay tuned for a future insight on Phase 2 of this testing.


Test Case

Functionality and Feature Testing: A Deeper Look

The testing scenarios we covered were:

Standalone ENCS 5400 Environment or Deployment

  • Single vEdge Cloud VNF
  • Single vEdge Cloud VNF service chained with Single Palo Alto VNF
  • Single ISRv SDWAN VNF
  • Single ISRv SDWAN VNF service chained with Single Palo Alto VNF

High Availability ENCS 5400s Environment or Deployment

  • HA vEdge Cloud VNFs
  • HA vEdge Cloud VNFs service chained with HA Palo Alto VNFs
  • HA ISRv SDWAN VNFs
  • HA ISRv SDWAN VNFs service chained with HA Palo Alto VNFs

 

The testing performed fell into these major areas:
 
  • Features and Functionality Testing around the Cisco ENCS Platform
  • Features and Functionality Testing around the vEdge Cloud VNF
  • Features and Functionality Testing around the ISRv SDWAN VNF
  • Features and Functionality Testing around the Palo Alto VM-300

 

Testing and Observations: Cisco ENCS Platform

 

Cisco Enterprise Network Compute System

We used Cisco ENCS 5412 appliances (part number ENCS5412P/K9, OS version 3.12.2FC2) for our testing, conducted as of 11-2-2019.  We ran a total of 16 different tests on the Cisco ENCS platform, focusing heavily on initial setup, baseline configuration, and the ability to prepare and register Virtual Network Function (VNF) images for use on the platform.

Initial Setup:

One example of an initial setup test was configuring the management interface of the ENCS platform via the Cisco Integrated Management Controller (CIMC).  All initial setup tests passed our inspection in the POC.

Baseline Configuration:

Some examples of baseline configuration tests were configuring a banner and message of the day (MOTD), configuring Network Time Protocol (NTP), and configuring web portal access on the ENCS platform.  All of these tests passed our inspection in the POC.

Prepare Images:

These tests focused on the ENCS platform's ability to prepare vendor-provided images as Virtual Network Functions (VNFs).

When ENCS was used to prepare an image for vEdge Cloud, the test passed, but with some specific comments.  We could not get ENCS to prepare the image with the web UI and had to fall back on an ENCS Python utility, which worked just fine.  At the time of this writing (11-2-2019), we believe there is a bug in the web UI packaging software when utilizing SR-IOV interfaces.  The web UI does not appear to enable a setting that permits SR-IOV interfaces, so when you attempt to deploy and connect to an SR-IOV interface, it fails.

Preparing an ISRv SD-WAN image was out of scope because the ENCS platform already provides a packaged version of this VNF.

Finally, when ENCS was used to prepare an image for the Palo Alto NGFW VM-300, the test also passed, with a few comments.  Again, because SR-IOV interfaces were used on the physical WAN side of the design, we could not get ENCS to prepare the image with the web UI and had to fall back on the ENCS Python utility, which worked just fine.  As above, we believe this is because the web UI in the ENCS did not support SR-IOV interfaces at the time of this writing.
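The SR-IOV packaging behavior above can be illustrated with a small sketch. This is our own illustrative model of the kind of descriptor a VNF packaging utility produces; the field names are assumptions for illustration and do not mirror Cisco's actual NFVIS packaging tool output.

```python
# Hypothetical sketch of a VNF package descriptor. Field names are
# illustrative only; they are not Cisco's actual packaging format.

def build_vnf_package(name, version, wan_driver="sriov", lan_driver="virtio"):
    """Assemble package metadata, flagging SR-IOV support explicitly."""
    descriptor = {
        "name": name,
        "version": version,
        "interfaces": [
            {"role": "wan", "driver": wan_driver},
            {"role": "lan", "driver": lan_driver},
        ],
        # The web UI bug we observed behaved as if a flag like this were
        # never set, so SR-IOV attachments failed at deploy time.
        "sriov_supported": wan_driver == "sriov",
    }
    return descriptor

pkg = build_vnf_package("vedge-cloud", "18.4.1")
print(pkg["sriov_supported"])  # True when the WAN side is packaged for SR-IOV
```

The point of the sketch: whatever tool packages the image must explicitly declare SR-IOV capability, which is why the Python utility succeeded where the web UI did not.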

Register Images:

The Cisco ENCS platform was able to register all three images (vEdge Cloud, ISRv SD-WAN, and Palo Alto VM-300) as VNFs without issue in this POC test.

Testing and Observations: vEdge Cloud VNF

 

vEdge Cloud VNF

We used Cisco's vEdge Cloud image (OS version 18.4.1) for our testing, conducted as of 11-2-2019. We ran roughly 130 tests covering the features and functionality of the vEdge Cloud VNF in the scenarios discussed earlier in the article.  We will not cover each individual test, but we will share insights and reflections based on the testing completed.

Overlay Tunnels in Cisco Viptela SDWAN:

At the time of our testing (11-2-2019), vEdge Cloud only supports establishing overlay tunnels using IPv4 or IPv6, but not both. It prefers IPv4 if both options are available between sites. We recommend using IPv4 for the transports at this time.

IPv6 BGP Peering:

At the time of our testing (11-2-2019), vEdge Cloud does not support IPv6 BGP peering.  If IPv6 is needed on the transport, you must rely on static routing. This feature is expected to be released in 2020. Because the vEdge Cloud does not support IPv6 BGP peering, we were unable to build IPv6 overlays consistently, since we could not advertise the IPv6 TLOC tunnel IP space.

Testing and Observations: ISRv SD-WAN VNF

 

ISRv SD-WAN VNF

We used Cisco's ISRv SD-WAN image (OS version 18.4.1) for our testing, conducted as of 11-2-2019. We ran roughly 130 tests covering the features and functionality of the ISRv SD-WAN VNF.  Again, we will not cover the results of each individual test, but will instead share insights and reflections based on the testing completed.

IPv6 Overlays and IPv6 Tunneling:

Unlike the vEdge Cloud VNF, the ISRv SD-WAN VNF did support both IPv4 and IPv6 overlay tunnels at the time of our testing (11-2-2019).

Multicast Feature:

At the time of our testing (11-2-2019), multicast is not supported on cEdge code for the ISRv SD-WAN VNF; support is on the roadmap for 2020.  If you need to run multicast through your SD-WAN tunnels, this is not an option at this time.

Testing and Observations: Palo Alto VM-300 VNF

 

Palo Alto VM-300 VNF

We used Palo Alto's virtual VM-300 image (OS version 8.1.3) for our testing, conducted as of 11-2-2019. We ran 50 tests related to Palo Alto VNF features and functionality, covering both high-availability and standalone scenarios.  Testing did NOT include any deep Layer 4-7 testing in Phase 1; we simply used an "any any" access rule to make sure we could pass traffic.  Deeper Layer 4-7 firewall testing will occur in Phase 2 during performance-based testing.

Thoughts on Palo Alto FW Configuration in HA:

Whether you are using the Palo Alto VNF in conjunction with the ISRv SD-WAN VNF or the vEdge Cloud VNF, high availability is definitely achievable.  We successfully configured HA settings in the Palo Alto FW VNF.  You need to ensure that preemption is not enabled, OR you need to block path monitoring requests from the primary Palo Alto from going through the secondary Palo Alto. If the LAN interface on the active Palo Alto FW goes down, path monitoring fails and triggers an HA state change, making the passive (secondary) Palo Alto FW active. Path monitoring on the now-passive (primary) Palo Alto FW could then use the ha-service-net to send its health checks through the active (secondary) Palo Alto, which could initiate another HA state change if preemption is enabled. The pair would continue to flip back and forth until preemption loop detection kicks in.
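The flapping condition described above comes down to two settings interacting. Here is a minimal sketch of that logic in a simplified two-node model; it is illustrative only, not PAN-OS code.

```python
# Minimal sketch of the HA flapping scenario, assuming a simplified
# two-node model. Illustrative logic only; not a vendor API.

def failover_flaps(preemption_enabled, path_monitor_via_peer):
    """Return True if the LAN-down failover would oscillate between peers."""
    # After the primary's LAN link fails, the secondary goes active.
    # If the primary's path monitor can still succeed by hair-pinning
    # through the secondary (ha-service-net) AND preemption is enabled,
    # the primary preempts, fails again, and the pair flips repeatedly
    # until preemption loop detection intervenes.
    return preemption_enabled and path_monitor_via_peer

print(failover_flaps(True, True))   # True  -> flapping until loop detection
print(failover_flaps(False, True))  # False -> stable failover
print(failover_flaps(True, False))  # False -> stable failover
```

Either disabling preemption or blocking the hair-pinned health checks is enough to break the loop, which matches the two remediation options noted above.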

Thoughts on Palo Alto FW in Layer 2 Transparent Mode:

Our topology for these scenarios required that the Palo Alto FW be deployed in Layer 2, or virtual wire, mode.  In this case, Palo Alto only supports virtio VNF interfaces in Layer 2 mode; SR-IOV interfaces are only supported on Layer 3 or HA interfaces.  Keep that in mind when you update your Palo Alto FW VNF: in one of our tests we updated the VNF, and it then tried to connect to an SR-IOV interface and could not pass traffic.  You have to make sure the Palo Alto is connected to LAN- and WAN-side interfaces that are compatible with virtio VNF interfaces.
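A pre-deployment check for this constraint could look like the following sketch. The allowed-driver table encodes the behavior we observed in testing; it is our own construction, not a vendor-published compatibility matrix.

```python
# Hedged sketch of a pre-deployment interface check: in Layer 2 /
# virtual wire mode the VM-300 only took virtio data interfaces in our
# testing, while SR-IOV was limited to Layer 3 or HA links. The rules
# below encode our observations, not a vendor API.

ALLOWED_DRIVERS = {
    "layer2": {"virtio"},
    "layer3": {"virtio", "sriov"},
    "ha": {"virtio", "sriov"},
}

def validate_attachment(mode, driver):
    """Raise if an interface driver is invalid for the firewall mode."""
    if driver not in ALLOWED_DRIVERS[mode]:
        raise ValueError(f"{driver} interfaces are not supported in {mode} mode")
    return True

validate_attachment("layer2", "virtio")   # OK
# validate_attachment("layer2", "sriov")  # would raise ValueError
```

Running a check like this before redeploying a VNF would have caught the failed SR-IOV attachment we hit after updating the firewall image.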

Palo Alto FW Traffic Failover When the Primary Becomes Active Again:

We conducted several HA failover tests with the Palo Alto FW VNF.  One of these tests validated whether the primary Palo Alto FW would become active again if the secondary Palo Alto FW (which had previously become active) lost connectivity on its LAN-side interface.  The secondary went into a failed state due to path monitoring, and the primary took the active role. The test passed, but it is worth noting that HTTP traffic took around 4 minutes to fully recover.
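The ~4-minute recovery figure above was derived from repeated traffic probes around the failover event. A simple way to compute such a window from probe results is sketched below; the probe data here is synthetic, not our actual test capture.

```python
# Sketch of measuring a traffic outage window from per-second probe
# results (True = probe succeeded). Synthetic data for illustration.

def recovery_seconds(probes):
    """Length of the longest contiguous run of failed probes, 1s apart."""
    longest = run = 0
    for ok in probes:
        run = 0 if ok else run + 1
        longest = max(longest, run)
    return longest

# Four failed probes in a row -> 4 seconds of downtime in this toy sample.
print(recovery_seconds([True, False, False, False, False, True]))  # 4
```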


Technologies Under Test

The overall technology under test was Network Function Virtualization (NFV).  The deeper, more tactical technologies addressed in our tests were universal CPE (uCPE) and Virtual Network Functions (VNFs).  The vendors attached to these technologies and tested in our Advanced Technology Center were Cisco and Palo Alto.


Test Tools

We used IXIA IxLoad iMix for traffic generation to simulate real network traffic, proving that routing of stateful and stateless traffic works as expected and thus helping to prove that the Cisco ENCS platform can in fact host, manage, and operate multi-vendor VNFs in service-chaining scenarios.

Example IxLoad Screen Capture of Recorded Traffic Generation in the testing environment.

Ixia Traffic Profile:

  • HTTP/HTTPs (objective: throughput / 20Mbps)
    • flow 1:
      • server prefix: 10.1.201.0/24
      • server count: 50
      • client prefix: 10.3.201.0/24
      • client count: 50
    • flow 2:
      • server prefix: 10.1.202.0/24
      • server count: 50
      • client prefix: 10.3.202.0/24
      • client count: 50
  • FTP (objective: throughput / 5Mbps)
    • flow 1:
      • server prefix: 10.1.201.0/24
      • server count: 50
      • client prefix: 10.3.201.0/24
      • client count: 50
    • flow 2:
      • server prefix: 10.1.102.0/24
      • server count: 100
      • client prefix: 10.3.101.0/24
      • client count: 100
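The flow definitions above can be restated as data so the prefixes, counts, and aggregate throughput objective can be sanity-checked. The structure below is our own; it is not an IxLoad configuration format.

```python
# The HTTP/HTTPS and FTP traffic profile restated as data. Structure is
# our own illustration, not IxLoad's configuration format.

profile = {
    "http": {"objective_mbps": 20, "flows": [
        {"server": "10.1.201.0/24", "client": "10.3.201.0/24", "count": 50},
        {"server": "10.1.202.0/24", "client": "10.3.202.0/24", "count": 50},
    ]},
    "ftp": {"objective_mbps": 5, "flows": [
        {"server": "10.1.201.0/24", "client": "10.3.201.0/24", "count": 50},
        {"server": "10.1.102.0/24", "client": "10.3.101.0/24", "count": 100},
    ]},
}

# Aggregate stateful-traffic objective across both applications.
total_mbps = sum(app["objective_mbps"] for app in profile.values())
print(total_mbps)  # 25
```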

Multicast Hammer Profile:

  • Multicast Traffic Generation
    • flow 1:
      • sender: 10.1.201.20
      • rendezvous point: 10.1.1.11
      • join groups: 224.50.50.50-59
      • receiver: 10.3.201.20


Documentation

Here are the physical and logical diagrams depicting the Data Center testing environment and the branch (spoke) site environments that we custom-built for our testing in the Advanced Technology Center.

Example Physical and Logical Testing Environment on Data Center Side used for uCPE and VNF Testing scenarios

 

Example Logical Environment on Branch Side for Single SD WAN VNF and Single NGFW VNF Testing Scenario
 
Example Physical Environment on Branch Side for Single SD WAN VNF and Single NGFW VNF Testing Scenario


 

Example Logical Environment on Branch Side for High Availability SD-WAN and Firewall Testing Scenarios

 

Example Physical Environment on Branch Side for High Availability SD-WAN and Firewall Testing Scenarios