VMware vSphere version 7.0 is the latest major release of the immensely popular enterprise server virtualization platform. Part of this new release is an updated version of the vSphere Distributed Switch, vDS 7.0, which introduces the capability to integrate with NSX-T, VMware's data center networking and security solution.

This integration may come as a surprise to many and represents a fundamental shift in how NSX can be consumed by an enterprise. However, it may not be obvious how this shift will affect planned and existing deployments of the NSX-T data center solution. In the sections below, we will explore key points about this integration that are important to understand as well as the benefits it can provide.

How does it work?

Prior to NSX-T version 3.0, the NSX-T Virtual Distributed Switch (N-VDS) served as the primary data plane component of the NSX architecture. Logical segments created in NSX Manager would be built on the N-VDS, as this is where all NSX-based switching would take place on the host. The key point here is that the NSX data plane more or less lived on its own "island," away from the traditional vSphere Standard Switch or vSphere Distributed Switch.

With the introduction of vSphere 7, the NSX-T 3.0 data plane can now optionally be placed directly onto the native vSphere Distributed Switch. NSX logical segments will continue to be created within NSX Manager and will be read-only from vCenter Server. In fact, these logical segments will be represented as port groups in vCenter Server, with a slightly different icon to indicate that they were created in NSX Manager. If an NSX logical segment is created on an N-VDS, it is represented as an opaque network in vCenter Server.

When deploying the NSX data plane, simply point it at the vDS, and NSX Manager will know where to deploy your logical networks. The existing vDS configuration is unchanged, with NSX logical segments coexisting with traditional distributed virtual port groups on the same virtual switch. The vDS is owned by vCenter Server, so it will also own the configuration of switch uplinks and physical interfaces, MTU, and other settings, whereas these settings are owned by NSX Manager when interacting with the N-VDS. 

Be advised that your vDS will need its MTU set to 1600 bytes or greater before you can use NSX with it.
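If you want to verify this ahead of time, the check can be scripted. Below is a minimal pyVmomi sketch that inspects a distributed switch's MTU and raises it if needed; the vCenter hostname, credentials and the switch name "dvs-compute" are placeholder assumptions you would replace with your own values.

```python
# Minimal sketch: verify (and optionally raise) the MTU of a vSphere
# Distributed Switch before preparing it for NSX. The hostname, credentials
# and the switch name "dvs-compute" are placeholder assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

REQUIRED_MTU = 1600  # NSX overlay traffic requires at least 1600 bytes

def find_dvs(content, name):
    """Walk the vCenter inventory and return the named distributed switch."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    try:
        return next(dvs for dvs in view.view if dvs.name == name)
    finally:
        view.Destroy()

def ensure_mtu(dvs, required=REQUIRED_MTU):
    """Raise the switch MTU if it is below the NSX minimum."""
    current = dvs.config.maxMtu
    if current >= required:
        print(f"{dvs.name}: MTU {current} is already sufficient")
        return
    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion
    spec.maxMtu = required
    task = dvs.ReconfigureDvs_Task(spec)
    print(f"{dvs.name}: raising MTU {current} -> {required} (task {task.info.key})")

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab only; use verified certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)
    try:
        ensure_mtu(find_dvs(si.RetrieveContent(), "dvs-compute"))
    finally:
        Disconnect(si)
```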

Network objects as seen in vCenter Server

What about my physical NICs?

Typical enterprise design

Every virtual switch instantiated on a hypervisor host should have at least two physical interfaces assigned to it for redundancy. These physical interfaces are the entry and exit points for the virtual switch as traffic enters and leaves the host. 

A common enterprise design is to use a dedicated virtual switch for vmkernel interfaces, which handle functions such as management, vMotion, and IP storage traffic, and a separate dedicated virtual switch for virtual machine workload data traffic. Employing separate virtual switches aids in resiliency by ensuring that functions critical to operation of the host itself are adequately separated from virtual machine network traffic.

In that design, each virtual switch would have at least two physical interfaces assigned to it, and these physical interfaces would ultimately connect to the northbound physical network. That design would require at least four physical network interfaces on the host, two for each virtual switch. Following that same design workflow with NSX is relatively straightforward. 

With NSX-T version 2.5 or earlier, the N-VDS would be the virtual switch that handles the virtual machine workload traffic, and would have at least two physical network interfaces assigned to it. As a result, NSX would have no ownership or control over the connectivity for the vmkernel interfaces.

vSphere host with vDS for vmkernel traffic & NSX N-VDS for workload traffic

Plot twist: Hyper-converged infrastructure

The adoption of hyper-converged infrastructure (HCI), specifically hyper-converged compute nodes, introduced a potential roadblock to the design detailed above. It is common for these compute nodes to be provisioned with only two physical network interfaces for all traffic. 

In this case, we can only accommodate a single virtual switch on the hypervisor host while still maintaining network redundancy. If we wish to use NSX, then the N-VDS will be the sole virtual switch on the hypervisor and will host the connectivity for both the vmkernel interfaces and the virtual machines. 

vSphere host with two NICs and NSX-T

From a technical standpoint, this setup will work just fine. However, it is not ideal from a design standpoint for two primary reasons. First, we no longer have the level of separation between vmkernel traffic and virtual machine traffic described previously. 

Second, the fact that the host's vmkernel interfaces are now housed on the N-VDS means that the fate of those interfaces is tied to the fate of NSX. A failure of the NSX data plane, however unlikely, could therefore cause these vmkernel interfaces to fail as well. This is an important point to consider, as any data center design aims for maximum resiliency, whether that design be for network, compute or storage infrastructure.

Coexistence: An NSX + vSphere story

Now that NSX objects can be created on the native vSphere Distributed Switch (vDS), we are able to mitigate the situation with HCI nodes described above. The two physical network interfaces owned by the host can simply be assigned to the vDS, and we can install NSX on top of it. 

While it's true that we still have a single virtual switch for the host, the network control for the vmkernel interfaces and the virtual machines is separated. Workloads on the host will be connected to NSX-controlled port groups, and vmkernel interfaces will be connected to port groups controlled by vCenter Server.

vSphere 7.0 host with NSX-T 3.0

Say goodbye to opaque networks

Prior to NSX-T 3.0, NSX logical segments were instantiated on the N-VDS. From a workload vNIC standpoint, these logical segments more or less appeared to be regular port groups. In reality, each of these port groups was actually what is referred to as an "opaque network."

The concept of an opaque network was introduced by VMware several years ago in an effort to decouple virtual switching from vCenter Server, essentially allowing an entity outside of vSphere to create logical networks. While this provides flexibility from a deployment standpoint, an opaque network can interfere with third-party applications or services that an organization may be using. 

Tools and platforms that rely on pulling lists of objects from vCenter Server, such as Ansible, Terraform and Avi Networks, may require modification or rework of automation or orchestration workflows to accommodate these opaque networks. When deployed on vDS 7.0, the NSX logical segments are represented in vCenter as distributed virtual port groups, allowing any third-party applications that pull inventory from vCenter Server to continue operating normally.
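To see why this matters for automation, consider the small pyVmomi sketch below, which walks the vCenter network inventory. Segments backed by an N-VDS surface as vim.OpaqueNetwork objects, while segments created on vDS 7.0 come back as ordinary vim.dvs.DistributedVirtualPortgroup objects, right alongside the port groups vCenter itself owns. This is a simplified illustration rather than a complete tool, and it reuses the placeholder connection details from the earlier example.

```python
# Minimal sketch: enumerate vCenter network objects and show how NSX
# segments surface differently depending on the backing switch.
from pyVmomi import vim

def classify_networks(content):
    """Return (opaque_networks, dv_portgroups, standard_networks) by vCenter object type."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Network], True)
    opaque, dvpg, standard = [], [], []
    try:
        for net in view.view:
            if isinstance(net, vim.OpaqueNetwork):
                opaque.append(net.name)      # N-VDS-backed NSX segment
            elif isinstance(net, vim.dvs.DistributedVirtualPortgroup):
                dvpg.append(net.name)        # vDS port group (NSX- or vCenter-created)
            else:
                standard.append(net.name)    # standard switch port group
    finally:
        view.Destroy()
    return opaque, dvpg, standard

# Usage, given a ServiceInstance `si` from pyVim.connect.SmartConnect:
#   opaque, dvpg, std = classify_networks(si.RetrieveContent())
```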

Streamlined design

Simplified workload segmentation

A popular use case for NSX seen in many organizations is for securing traffic at the workload level. This leads to what is often referred to as a "Distributed Firewall only" design, where the Distributed Firewall (DFW) is the primary feature in use. This allows for an NSX deployment that does not make use of overlay-backed logical segments, logical routing or NSX Edge nodes. In fact, those components could be skipped altogether.

An important fact to remember is that in order for a workload to be secured by the DFW, it must be connected to an NSX logical segment. Recall from earlier in the article that, prior to NSX-T 3.0, logical segments were housed solely on the NSX Virtual Distributed Switch (N-VDS). This means that with NSX-T version 2.5 or earlier, a "DFW-only" design still required use of the N-VDS and creation of VLAN-backed logical segments, and workloads would then need to be connected to those segments. As discussed previously in the article, the required use of the N-VDS can result in design challenges when utilizing hyper-converged infrastructure.

The DFW-only design becomes even more streamlined with NSX-T 3.0 combined with vDS 7.0. A given workload will still need to be connected to an NSX-controlled port group for DFW enforcement to take place. However, because we can now create the NSX port group directly on the native vSphere Distributed Switch instead of the N-VDS, it is possible to build this DFW-only design fairly rapidly, potentially accelerating your organization's segmentation strategy.
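As a simple illustration, connecting a workload to one of these NSX-created port groups uses the same vSphere API calls as any other distributed port group; no NSX-specific tooling is needed on the vSphere side. The sketch below is an assumption-laden example: the VM and segment objects are looked up elsewhere, and any names you pass in (for example a hypothetical "web-01" VM and "nsx-seg-app" segment) are placeholders.

```python
# Minimal sketch: rebind a VM's first vNIC to an NSX-created segment that
# appears in vCenter as a regular distributed port group. The VM and
# port group objects are assumed to have been retrieved already (for
# example via a container view, as in the previous sketch).
from pyVmomi import vim

def connect_vm_to_portgroup(vm, portgroup):
    """Move the VM's first network adapter onto the given distributed port group."""
    nic = next(dev for dev in vm.config.hardware.device
               if isinstance(dev, vim.vm.device.VirtualEthernetCard))
    backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
    backing.port = vim.dvs.PortConnection(
        portgroupKey=portgroup.key,
        switchUuid=portgroup.config.distributedVirtualSwitch.uuid)
    nic.backing = backing
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))

# Usage, with placeholder objects:
#   task = connect_vm_to_portgroup(web01_vm, nsx_seg_app_pg)
```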

Reduced complexity

By now you are likely beginning to understand how vDS 7.0 can improve your current NSX-T environment, but how does it impact a new deployment? Greenfield deployments of NSX-T 3.0 will benefit from vSphere 7 in that you will not be required to deploy the N-VDS in the first place. Instead, you can begin immediately with the vSphere Distributed Switch and simply deploy your NSX logical segments onto it.

If there's no need to provision the N-VDS, then that is one less virtual switch in your environment to manage, maintain and ultimately worry about. Take note that NSX Edge Nodes, KVM hosts and bare-metal servers (with the NSX Agent) will continue to use the N-VDS for data plane forwarding as they are unable to utilize the vDS.

It's easy to see how consolidating NSX and native vSphere objects onto a single virtual switching architecture provides a better experience, especially from a management and operations standpoint. A single platform means upgrades are more streamlined, with faster feature development, security updates and bug fixes.

Conclusion

Software-defined networking continues to play a large role in the quest toward infrastructure modernization, with the data center being no exception. Stay up to date with the latest software-defined networking information by following our Data Center Networking topic.

To learn more about VMware NSX-T, check out our hands-on NSX-T Virtual Lab and our NSX-T Design Workshop.
