
What is NSX Federation?

VMware's NSX-T was released in 2016, and since then, network architects have been hungry for multi-data center support similar to what NSX-V provided. NSX-T could span multiple data centers, but they were treated more like a single availability zone with specific latency and design requirements. With the latest major release, NSX-T 3.1, Federation has become a reality. VMware now offers a Global Manager cluster that centralizes network and security policies across multiple NSX-T Manager domains. Customers can now provision NSX-T policies that are centrally managed but deployed across multiple availability zones.

People familiar with Cisco ACI will immediately try to compare Federation to Multi-Site Orchestrator (MSO). MSO centralizes ACI policy across multiple data centers and fabrics, treating them as separate availability zones, much like the NSX Global Manager (GM). I will address some of the comparisons and contrasts later in the article.

Some people will say Federation was released in 3.0, so why is it a big deal now? The reality is that 3.0 was not production-ready: it lacked redundancy, scale, and features. With the release of 3.1, VMware has addressed all three of those gaps.

Federation use cases

An important point to keep in mind is that Federation is in its infancy and will become more feature-rich over time. Currently, the primary use cases are operational simplicity, consistent policy management, and disaster recovery.

Federation allows you to have a single location to manage NSX configuration and policy. This simplifies operations considerably across multiple NSX Manager clusters or sites. Adding to the simplicity is the ability to provide consistent configuration and policy across the sites. A network team can now deploy multiple segments, T0/T1 gateways, security groups, and firewall rules across multiple sites quickly and efficiently. Imagine moving a workload from one location to another without worrying about firewall rule sets or the IP addresses associated with that workload. 

Figure 1. Simplified operations and consistent configuration/policy

The last primary use case is disaster recovery. Federation allows us to stretch segments and gateways across multiple sites, giving an organization Layer 2 (L2) adjacency across locations in the event of a disaster. In this design, there would also be a redundant cluster of Global Managers. When the primary site fails, all segments, routing, and policy management can fail over to the alternate location.

Figure 2. Simplified DR

A very important takeaway on this use case is that Federation is not intended to provide L2 stretch in an Active/Active data center design. Federation does not advertise /32 host routes to provide the granular ingress/egress routing found in EVPN-style L2-stretched fabrics.

How does Federation work?

We will keep this at a very high level, as we could spend hours getting into each scenario and how every design option works.

Let's start with the management plane. There is an active Global Manager cluster of three virtual machines and a standby cluster, and the active GM keeps the standby GM synchronized at all times. The active GM cluster pushes policy out to each of the local NSX Manager clusters, sending only the configuration that is pertinent to that location. The local NSX Managers also communicate with each other, keeping group (NSGroup) membership synchronized. An important note: objects created by GM can only be modified by GM; the local NSX Manager cannot modify them, and GM does not see objects created by the local NSX Manager.
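To make the centralized policy model a bit more concrete, here is a minimal sketch of defining a global security group through the Global Manager's Policy REST API. Treat it as an illustration only: the hostname and credentials are placeholders, and details such as the /global-infra path and the tag-expression format are assumptions that should be verified against the NSX-T API documentation for your version.

```python
import requests

# Hypothetical lab values -- replace with your own Global Manager and credentials.
GM_HOST = "gm.corp.example"
AUTH = ("admin", "REPLACE_ME")

# Assumption: the Global Manager exposes the Policy API under /global-infra
# (local managers use /infra). The group below is meant to match VMs tagged "web";
# check the exact tag-expression format ("scope|tag") for your NSX version.
group = {
    "display_name": "global-web-servers",
    "expression": [
        {
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "Tag",
            "operator": "EQUALS",
            "value": "app|web",
        }
    ],
}

# PATCH creates or updates the object on GM, which then pushes it to each
# location where it is relevant. Local NSX Managers cannot modify this
# GM-owned object.
resp = requests.patch(
    f"https://{GM_HOST}/policy/api/v1/global-infra/domains/default/groups/global-web-servers",
    json=group,
    auth=AUTH,
    verify=False,  # lab only; use proper certificate validation in production
)
resp.raise_for_status()
print("Global group pushed, status:", resp.status_code)
```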

GM allows an organization to stretch T0 and T1 gateways (routers) across multiple locations; this must be done from GM. There are two options for deploying a stretched T0. The first is Primary/Secondary, in which one site is the primary forwarding gateway for most traffic, and a secondary site becomes primary if the original primary gateway fails (Figure 3). The second option is All Primaries (Figure 4). In this design, all locations are used for egress, but most ingress traffic still comes in through one location because of how each site's NSX Edge advertises BGP routes with AS-path prepending. Each site can still use either Active/Active or Active/Standby for its Edge clusters, except that Active/Standby is not supported in an All Primaries configuration.

Figure 3. Primary/Secondary
Figure 4. All primaries
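To keep the two options straight, the small sketch below simply encodes the combinations described above in Python. It is not NSX configuration, just an illustration that every pairing of stretched-T0 mode and Edge cluster HA mode is valid except Active/Standby Edges with an All Primaries T0.

```python
from enum import Enum, auto


class T0StretchMode(Enum):
    PRIMARY_SECONDARY = auto()  # one site forwards most traffic; a secondary takes over on failure
    ALL_PRIMARIES = auto()      # every site is used for egress; ingress still favors one site


class EdgeClusterHA(Enum):
    ACTIVE_ACTIVE = auto()
    ACTIVE_STANDBY = auto()


def is_supported(stretch: T0StretchMode, edge_ha: EdgeClusterHA) -> bool:
    """Per the design options above, every combination is valid except
    Active/Standby Edge clusters with an All Primaries stretched T0."""
    return not (stretch is T0StretchMode.ALL_PRIMARIES
                and edge_ha is EdgeClusterHA.ACTIVE_STANDBY)


if __name__ == "__main__":
    for stretch in T0StretchMode:
        for edge_ha in EdgeClusterHA:
            status = "supported" if is_supported(stretch, edge_ha) else "not supported"
            print(f"{stretch.name} + {edge_ha.name}: {status}")
```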

Let's look at a Primary/Secondary example. In Figure 5, we have three locations with a T0 and a T1 gateway stretched across all sites, and the 10.1.1.0/24 and 10.2.1.0/24 segments are stretched as well. All sites use active/active Edge clusters for BGP peering to their upstream routers. The stretched T0 is configured Primary/Secondary, with the location on the left (Loc1) as primary and the other two locations as secondaries. As shown below, each site learns the default route (0.0.0.0/0) and a single site-specific route (3x.0.0.0/24). Each location has local route entries for the 10.1.1.0 and 10.2.1.0 segments, plus a local entry for the 3x.0.0.0/24 route it learned itself; to reach the other 3x.0.0.0 routes, traffic is forwarded to the specific site that learned that route. Where the routing differs is the default route: even though each Edge has learned it, it is only usable through Loc1 on the left. All ingress/egress traffic using the default route goes through Loc1 until Loc1 fails. Note that the other two locations perform AS-path prepending (the + and ++ in the figure) on the 10.1.1.0 and 10.2.1.0 routes, so those two sites are not the preferred paths.

Figure 5. Primary/Secondary routing
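If it helps to see the route preference worked out, here is a toy model of the Figure 5 behavior. It is not NSX or BGP code; the AS numbers and prepend counts are made up purely to show why the upstream network prefers Loc1 for the stretched prefixes while the secondaries sit behind their prepends.

```python
# Toy model of the Primary/Secondary example in Figure 5.
# Loc1 is primary; Loc2 and Loc3 prepend their AS when advertising the
# stretched prefixes, so external routers prefer the Loc1 path.

advertisements = {
    # prefix: {site: AS path advertised to the upstream routers} (illustrative values)
    "10.1.1.0/24": {"Loc1": [65001], "Loc2": [65002, 65002], "Loc3": [65003, 65003, 65003]},
    "10.2.1.0/24": {"Loc1": [65001], "Loc2": [65002, 65002], "Loc3": [65003, 65003, 65003]},
}


def preferred_ingress(paths: dict) -> str:
    """Pick the advertising site with the shortest AS path
    (a simplified stand-in for BGP best-path selection)."""
    return min(paths, key=lambda site: len(paths[site]))


for prefix, paths in advertisements.items():
    print(f"{prefix}: ingress preferred via {preferred_ingress(paths)}")

# Egress: the default route is only usable through Loc1 (the primary), so
# Loc2 and Loc3 forward internet-bound traffic across the inter-site links
# to Loc1 until Loc1 fails.
```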

Now, let's compare that to an All Primaries configuration that is otherwise very similar. In Figure 6, we have the same layout as above: the T0 and T1 gateways are stretched, as are the 10.1.1.0 and 10.2.1.0 segments, across all three sites, and the Edge clusters are active/active for the T0 gateway. Much of the routing is the same as in the first example for the 3x.0.0.0, 10.1.1.0, and 10.2.1.0 routes. The key difference is how the default route is handled: each location uses its local egress for the default route, but return traffic still comes back through Loc1 because of the AS-path prepending at Loc2 and Loc3 for 10.1.1.0 and 10.2.1.0. This creates asymmetric routing, which brings a whole different set of challenges.

Figure 6. All primaries routing
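Extending the same toy model shows where the asymmetry comes from: each site egresses locally, but the return path still lands on Loc1 because of the prepends at Loc2 and Loc3. Again, the values below are illustrative, not NSX configuration.

```python
# Toy model of the All Primaries example in Figure 6.
# Every site uses its local Edge for egress, but ingress still prefers Loc1
# because Loc2 and Loc3 prepend their AS on the stretched prefixes.

sites = ["Loc1", "Loc2", "Loc3"]
# AS-path length each site advertises for the stretched segments
# (1 = no prepend, larger = prepended); values are illustrative.
advertised_path_len = {"Loc1": 1, "Loc2": 2, "Loc3": 2}

# Upstream routers pick the shortest advertised path for return traffic.
ingress_site = min(advertised_path_len, key=advertised_path_len.get)

for site in sites:
    egress_site = site  # All Primaries: the default route egresses locally
    flow = "symmetric" if egress_site == ingress_site else "asymmetric"
    print(f"{site}: egress via {egress_site}, return traffic via {ingress_site} -> {flow}")
```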

Again, this is a very high-level overview. WWT provides workshops and briefings to help you understand NSX Federation in depth. As you can see, there are many design options, and they translate into many different traffic flows; covering them all would make this article very long. If you are interested in a deep dive on this part of Federation, please reach out to your WWT account team to schedule a workshop on the topic. There are many nuances and cautions to be aware of with Federation, and a workshop can help address them.

So what is missing from Federation?

As mentioned earlier, Federation is in its infancy as a feature and has some limitations. We touched on the routing not matching EVPN, which provides localized ingress/egress routing with /32 host routes. That becomes a more significant issue if you are looking for L2 stretch with active/active workloads in multiple sites. Scale is another current limitation. Today (as of version 3.1.1), Federation supports four locations and 650 hypervisor nodes across those four locations. These numbers are changing quickly; 3.1 supported only 256 hypervisors. Keep in mind that NSX supports up to 1,024 hosts at a single location without Federation. Hopefully, these numbers keep moving up as they have been. Below is a link to the configuration maximums for the NSX 3.1.1 Federation feature.

NSX Configuration Maximums 

Some features also lack support. The key features not supported are multicast routing, VRF instances, and native cloud. The last support challenge to keep in mind is that Federation support will always lag for VMware Cloud Foundation (VCF) and VMware Cloud (VMC) on AWS or Azure. Currently, Federation is not supported with VMC on AWS or Azure, while VCF support arrived only the week this article was written, with VCF version 4.2.

What's next?

Federation is now supported for production. The key is understanding the use cases, how it works, and its limitations. Designed properly for the right use case, it is a great solution. It is imperative to understand Federation in depth before moving forward. Contact your WWT account team to schedule an NSX Design Workshop covering this topic in depth.
