
Network architecture is always evolving, adopting new technologies such as Network Function Virtualization (NFV) in pursuit of greater agility and performance.

NFV systems, while complex, promise faster deployment of network services and greater flexibility for an organization to change course quickly. But as with any technology, realizing a return on investment is critical.

Given NFV's virtual nature, the ROI question is two-fold, spanning both hardware and software. Added to that is the operational cost of the labor required to build, integrate and efficiently operate the environment.

To capture the full value of NFV, deployments must be validated in an end-to-end architecture — from infrastructure to orchestration.

A detailed view of an NFV stack.

Why do we need NFV systems validation?

Running applications on virtualized infrastructure is not new for IT.

Early adopters of NFV paved the way for the industry to understand the challenges of adopting integrated systems and of disaggregating the hardware and software functions into either a data center or a small form factor on-premises deployment model.

Open source platforms and commercial products offer a variety of virtualization options to deploy applications across an organization's infrastructure. Network functions such as Border Gateway Protocol (BGP) route reflection have long been deployed virtually, and virtual routers performing all the functions you would expect from a physical router have been available as open source for years.

Some deployments were more vertically integrated stacks, where the application defines the infrastructure that powers the service. Others were horizontally integrated stacks, in which the infrastructure is the foundation and the applications must conform to it. Many more are a blend of the two, often including hybrid cloud options.

What is new in the NFV world is the scale of bandwidth the applications require. That requirement delayed the adoption of network virtualization until options like the Data Plane Development Kit (DPDK) arrived to help application developers scale network capacity on standard x86 compute platforms.
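To make the DPDK pattern concrete, here is a minimal sketch, closely following the structure of DPDK's own skeleton examples, of the poll-mode receive loop that lets a user-space application pull packet bursts directly from the NIC rather than taking per-packet kernel interrupts. The port ID, ring and pool sizes are illustrative assumptions, not values from this article.

```c
#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define NUM_MBUFS    8191
#define MBUF_CACHE   250
#define RX_RING_SIZE 1024
#define TX_RING_SIZE 1024
#define BURST_SIZE   32

int main(int argc, char *argv[])
{
    /* Initialize the Environment Abstraction Layer (hugepages, PCI, cores) */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

    /* Pool of packet buffers shared with the NIC */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
        NUM_MBUFS, MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    uint16_t port = 0;                   /* assumes one NIC port bound to a DPDK driver */
    struct rte_eth_conf port_conf = {0}; /* default single-queue configuration */

    if (rte_eth_dev_configure(port, 1, 1, &port_conf) != 0 ||
        rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE,
                               rte_eth_dev_socket_id(port), NULL, pool) != 0 ||
        rte_eth_tx_queue_setup(port, 0, TX_RING_SIZE,
                               rte_eth_dev_socket_id(port), NULL) != 0 ||
        rte_eth_dev_start(port) != 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    /* Busy-poll loop: pull bursts of packets from the NIC in user space,
       bypassing kernel interrupts and per-packet system calls */
    struct rte_mbuf *bufs[BURST_SIZE];
    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]); /* a real VNF would process the packet here */
    }
    return 0;
}
```

Drop-free operation at high rates also depends on details the sketch omits, such as hugepage configuration, core pinning and binding the NIC to a DPDK-compatible driver.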

As DPDK adoption normalized the virtualization of these applications, we began tying series of applications together across a virtual network, much as appliances were once tied together with physical cables. This practice gave rise to the term "service chaining" and brought a new level of complexity to the design of disaggregated systems.
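Conceptually, a service chain is an ordered traversal of network functions. The toy sketch below, with hypothetical function names and no real packet I/O, models each VNF as a stage applied to a packet in sequence, mirroring how appliances were once wired in line.

```c
#include <stdio.h>

/* Toy model: each VNF is a function applied to the packet in chain order */
struct pkt { int id; };

typedef void (*vnf_fn)(struct pkt *);

static void firewall(struct pkt *p) { printf("firewall  pkt %d\n", p->id); }
static void nat(struct pkt *p)      { printf("nat       pkt %d\n", p->id); }
static void router(struct pkt *p)   { printf("router    pkt %d\n", p->id); }

int main(void)
{
    vnf_fn chain[] = { firewall, nat, router }; /* the "service chain" */
    struct pkt p = { 1 };

    /* Traffic traverses each virtual function in order */
    for (size_t i = 0; i < sizeof chain / sizeof chain[0]; i++)
        chain[i](&p);
    return 0;
}
```

In a real deployment each stage is a separate VM or container joined by virtual links, which is where the engineering complexity lives.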

Service chaining meant organizations had to engineer multiple virtualized network function (VNF) applications onto a platform with limited resources. This drove the need for an orchestration strategy, as the applications could scale and change dynamically, making manual network management impractical.

The ability to characterize the load of the VNF application became critical because systems engineers had to plan resources to scale the environment. Such planning is inevitably tied to the return on investment for the infrastructure.

Not only do individual VNFs have different characteristics, but each combination of VNFs on a service chain behaves differently, and engineers managing the system need to understand both to properly plan the environment.

These resource limits drive the need to validate the end-to-end system and ensure it meets its performance criteria, whether the VNFs live on data center-scale infrastructure or on a smaller-footprint universal customer premises equipment (uCPE) platform at a customer site.
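As a simplified illustration of that planning step, the sketch below sums the resource profiles of a chain and checks them against an assumed host's capacity. All numbers are hypothetical; real profiles come from benchmarking each VNF under load.

```c
#include <stdio.h>

/* Hypothetical per-VNF resource profile from load characterization */
struct vnf_profile {
    const char *name;
    double cores;  /* vCPUs consumed at target throughput */
    double mem_gb; /* memory footprint in GB */
};

int main(void)
{
    /* Illustrative figures only */
    struct vnf_profile chain[] = {
        { "virtual firewall", 4.0, 8.0 },
        { "virtual router",   6.0, 4.0 },
        { "virtual probe",    2.0, 4.0 },
    };
    double host_cores = 16.0, host_mem_gb = 64.0; /* assumed uCPE capacity */

    double cores = 0, mem = 0;
    for (size_t i = 0; i < sizeof chain / sizeof chain[0]; i++) {
        cores += chain[i].cores;
        mem   += chain[i].mem_gb;
    }

    printf("chain needs %.1f cores, %.1f GB; host offers %.0f cores, %.0f GB\n",
           cores, mem, host_cores, host_mem_gb);
    printf("fits on one host: %s\n",
           (cores <= host_cores && mem <= host_mem_gb) ? "yes" : "no");
    return 0;
}
```

The same back-of-the-envelope check scales up to rack-level planning, but only end-to-end validation confirms that the chain actually performs within those budgets.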

The optimal outcome is a deployment plan in which the projected return on investment lines up with the best achievable solution performance.

How we help

Developing and testing multiple intersecting applications, from the orchestrator down to the performance of the DPDK drivers in the virtual network interfaces, is a daunting task for many organizations.

WWT's Advanced Technology Center (ATC) gives organizations access to a vast ecosystem of leading technology companies spanning multiple industry sectors. The ATC is staffed with experts in each technology domain who assemble the NFV system, from the NFV infrastructure and virtual infrastructure manager up the stack through the VNF manager to the automation and orchestration systems, and tie it all together to operate as intended within an organization's network architecture.

Illustration of NFV adoption path.

The ATC includes labs and video demonstrations to help navigate the complexity of different network options, both physical and virtual. Our existing NFV labs can be leveraged to onboard applications into different environments as our experts help validate NFV systems such as Intel's Next Generation Central Office (NGCO), Red Hat's Virtual Central Office (VCO) and Cisco's Virtualized Infrastructure Manager (CVIM), among others.

We also build custom proofs of concept for environments with unique attributes and offer tailored lab services to help accelerate your organization's build and test cycle.

Once build and test validation is complete in the ATC, WWT works with your organization through our global staging and integration centers to deploy the full multi-OEM hardware and software stack at scale, from universal CPE deployments up to large NFV rack systems.

The goal of any deployment is to ensure the system reaches its destination fully integrated, with minimal labor required on-site. This maximizes ROI and reduces complexity so organizations can scale to meet the needs of customers, products and business.
