Cisco ACI Multi-Site vs. Multi-Pod
Understanding these two powerful yet distinct architectures is paramount to deriving maximum value from your data center network powered by ACI.
Since its introduction in 2014, Cisco's Application Centric Infrastructure (ACI) has evolved to become the heavyweight contender in the data center SDN space. Through the ACI Anywhere vision, ACI now provides flexible capabilities to leverage the ACI policy model in a single data center, multiple data centers or in the public cloud.
Two salient options within that vision are ACI Multi-Site and ACI Multi-Pod. Organizations can utilize these two capabilities to expand, secure and interconnect data centers located all over the world.
Much of the confusion pertaining to these two architectures stems from a misunderstanding of the terminology involved, so let's first define some common terms that will make these capabilities much easier to understand.
First, we should define the term "Fabric." From the standpoint of ACI, a Fabric is a spine-leaf topology of Nexus 9000 series switches with a single cluster of Application Policy Infrastructure Controllers (APIC). Recall that the APIC cluster is our single point of management for the ACI fabric.
We also have the term "Pod." A Pod is a set of interconnected ACI leaf and spine switches under the control of a specific APIC cluster. As a result, an ACI Fabric can consist of multiple pods, all of which are considered part of the same Fabric because they are under the control of the same APIC cluster.
What if you have multiple fabrics, each with their own APIC cluster? These independent ACI fabrics would correctly be considered "Sites." A "Site" in the world of ACI is a single fabric that may or may not have multiple pods.
What is ACI Multi-Pod?
The ACI Multi-Pod architecture is essentially the capability to expand a pre-existing ACI fabric without needing to deploy a brand new fabric from scratch. An ACI Multi-Pod Fabric consists of two to 12 ACI pods that are connected via an Inter-Pod Network and managed by a single APIC cluster.
Multi-Pod is an evolution of an earlier ACI design termed "Stretched Fabric." In that design, separate spine-leaf topologies were interconnected via a "transit leaf." However, the Stretched Fabric design has significant scalability limitations and does not offer the resiliency benefits of Multi-Pod.
Architecture, features and benefits
Multi-Pod provides organizations full resiliency at the network level across pods while remaining functionally a single ACI fabric. Multi-Pod is an easy method for extending a data center network with minimal administrative overhead.
Connectivity and control
From a data-plane standpoint, all the pods within the topology are interconnected using an IP-routed Inter-Pod Network (IPN). The IPN is not managed by the APIC; instead, it is configured separately by the user. Each pod connects to the IPN through its spine nodes, although there is no requirement to connect every spine to the IPN. All inter-pod traffic is encapsulated with VXLAN. Multi-destination traffic is distributed to the pods via multicast, so the IPN must support PIM Bidirectional (PIM bidir) multicast.
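To make the division of responsibility concrete, here is a rough NX-OS-style sketch of what an IPN-facing configuration might look like on an IPN router (again, configured outside of the APIC). Interface names, addresses, the OSPF process name and the RP address are placeholders; consult Cisco's Multi-Pod white paper for validated settings for your platform:

```
interface Ethernet1/1.4
  description Link to Pod 1 spine
  encapsulation dot1q 4            ! ACI spines send infra traffic tagged with VLAN 4
  mtu 9150                         ! accommodate VXLAN-encapsulated jumbo frames
  ip address 192.168.1.1/30
  ip router ospf IPN area 0.0.0.0  ! underlay reachability between pods
  ip pim sparse-mode
  ip dhcp relay address 10.0.0.1   ! relay to an APIC so new pods can be auto-provisioned
!
ip pim rp-address 192.168.100.1 group-list 225.0.0.0/15 bidir  ! PIM bidir RP for multi-destination traffic
```

The key points the sketch illustrates are the jumbo MTU, the routed underlay, DHCP relay toward the APIC and the PIM bidir requirement called out above.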
The control-plane between the pods leverages MP-BGP EVPN. This is how endpoint information is advertised between the pods so that communication from an endpoint in one pod to an endpoint in another pod will be seamless.
Ease of administration
Recall that a Multi-Pod topology is considered a single ACI fabric, so all of the leaf and spine switches deployed across the pods are under the control of a single APIC cluster. This means the fabric and its associated pods form a single administrative domain. The individual controllers themselves may be dispersed among the pods.
A perfect example would be if you were to deploy an Endpoint Group (EPG) on your fabric. That EPG would be available across all pods, giving you flexibility of where to deploy virtual machines, containers, and bare-metal servers. It is also possible to create change domains, limiting the scope of your configuration if you have a requirement to do so.
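As a sketch of what this looks like in practice, the snippet below builds the REST payload for an EPG using the APIC object model classes (fvAEPg for the EPG, fvRsBd for its bridge domain association). The tenant, application profile and object names are illustrative placeholders; the point is that in a Multi-Pod fabric, a single POST of this payload to the APIC makes the EPG available in every pod.

```python
import json

def build_epg_payload(epg_name: str, bridge_domain: str) -> dict:
    """Build the APIC REST payload for an Endpoint Group (EPG).

    Object names here are illustrative. Because a Multi-Pod topology is one
    APIC-managed fabric, a single POST of this payload provisions the EPG
    across all pods at once.
    """
    return {
        "fvAEPg": {
            "attributes": {"name": epg_name},
            "children": [
                # Associate the EPG with its bridge domain
                {"fvRsBd": {"attributes": {"tnFvBDName": bridge_domain}}}
            ],
        }
    }

payload = build_epg_payload("web-epg", "web-bd")
# This payload would be POSTed to the APIC at a URL such as:
#   https://<apic>/api/mo/uni/tn-prod/ap-web-app.json
print(json.dumps(payload, indent=2))
```

The same object model is what you manipulate through the APIC GUI; the REST API simply exposes it directly.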
While a Multi-Pod fabric should be considered a single availability zone, each pod does run its own instance of several control-plane protocols, including COOP, IS-IS, and BGP. This provides resiliency in that a control plane failure in one pod will not affect the operation of other pods. Design considerations still apply here as it relates to placement of the APIC nodes and maintaining the state of the cluster.
One primary use case for an ACI Multi-Pod deployment is for enhanced scalability of a large data center footprint. A fabric with a single pod can only scale so far, so it might make sense to add a second pod, even if the entirety of the data center network is under one roof.
Campus data center deployments are also quite common, and they are another solid fit for the Multi-Pod architecture. Colocation facilities such as Equinix often have multiple buildings that comprise a single logical data center, so your network may span several areas. Deploy an ACI pod in each building and you will have a single logical fabric with excellent resiliency.
Some organizations are homed in a single metropolitan area, so it's quite possible they have a primary data center as well as a disaster recovery site within the same geographic region. With the correct inter-data center connectivity in place, a Multi-Pod deployment could make managing the disaster recovery site much easier as all pods are under the same administrative domain, thus avoiding configuration inconsistencies impacting Recovery Time Objective (RTO) and Recovery Point Objective (RPO) estimates.
What is ACI Multi-Site?
ACI Multi-Site is two or more ACI fabrics, each managed by its own APIC cluster, that are managed as a unit using the ACI Multi-Site Orchestrator. Each ACI site consists of a single APIC cluster managing a spine-leaf fabric.
Cisco ACI Multi-Site arose from a requirement to provide complete isolation, both at the network and tenant change domain levels, across ACI networks. While similar to Multi-Pod, ACI Multi-Site represents a different architecture with its own use cases.
Architecture, features and benefits
In a Multi-Site topology, each fabric can be considered a separate availability zone. These availability zones are managed cohesively by the Multi-Site Orchestrator. The nature of the architecture ensures that network-level failures or configuration mistakes in one availability zone will not impact the other availability zones, providing a strong foundation for business continuity.
Connectivity and control
Like Multi-Pod, ACI Multi-Site utilizes VXLAN for data-plane communication between the sites and MP-BGP EVPN as the inter-site control plane. One difference, however, is how Multi-Site handles multi-destination traffic. While Multi-Pod relies on multicast in the Inter-Pod Network, Multi-Site uses head-end replication, so multicast support is not required in the upstream Inter-Site Network (ISN). Connectivity to the ISN is made from the spine nodes to an upstream set of routers, and it is typical to choose a subset of spine nodes for the physical connections.
Data center interconnect made easy
Modern applications are pervasive and can be deployed anywhere. The primary goal of a data center network is to provide fast and reliable communication paths for these applications. Many organizations have applications with specific connectivity requirements, such as Layer-2 adjacency between application tiers. While this type of requirement is not uncommon, the prospect of stretching a Layer-2 domain across data centers is a daunting one.
ACI Multi-Site makes extending a Layer-2 domain across data center boundaries simple, without incurring the risks of doing so. Different organizations have differing needs, so this Layer-2 extension can be created with or without flooding of multi-destination traffic. Some organizations may only require simple Layer-3 connectivity between their data centers, and this too is easy to configure with ACI Multi-Site.
With Multi-Site, a new component called the Multi-Site Orchestrator (MSO) enters the mix. The MSO is a cluster of virtual machines running a Docker Swarm, and it serves as a centralized point to monitor the health of your ACI sites and apply configurations to multiple sites at once. Through a robust API, the MSO communicates with the individual APIC clusters at your sites, including any presence you may have in AWS or Azure. That's right - we can deploy ACI policy into the public cloud.
The MSO is used to deploy objects and policies to the independent ACI fabrics using a combination of templates and schemas. This makes it easy to complete tasks such as creating the same tenant at multiple sites or ensuring the same contracts are in place across sites to secure your applications. The MSO also brings the benefit of reducing complexity by automating the configuration of the connectivity between sites, including the EVPN control plane.
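The schema/template relationship can be sketched as a simple data structure. To be clear, this is a conceptual illustration, not the actual MSO API payload format, and all names (sites, tenants, objects) are made up: a schema holds templates, each template holds policy objects, and each template is associated with the sites it should be deployed to.

```python
# Conceptual sketch of MSO configuration layout (illustrative, not the real API).
schema = {
    "name": "prod-schema",
    "templates": [
        {
            "name": "shared-services",
            "tenant": "prod",
            "objects": ["web-epg", "app-epg", "web-to-app-contract"],
            "sites": ["dc-east", "dc-west"],  # stretched: pushed to both sites
        },
        {
            "name": "east-only",
            "tenant": "prod",
            "objects": ["legacy-epg"],
            "sites": ["dc-east"],             # site-local template
        },
    ],
}

def deployment_plan(schema: dict) -> dict:
    """Expand the schema into a per-site list of objects to deploy.

    Deploying a schema pushes each template's objects to the APIC cluster
    of every site the template is associated with.
    """
    plan = {}
    for tmpl in schema["templates"]:
        for site in tmpl["sites"]:
            plan.setdefault(site, []).extend(tmpl["objects"])
    return plan

plan = deployment_plan(schema)
print(plan)
```

The takeaway: stretched objects live in templates mapped to multiple sites, while site-specific objects live in templates mapped to a single site, all within one schema.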
The most prevalent application of the ACI Multi-Site architecture is to interconnect multiple independent, geographically-dispersed data centers. Through the MSO, provisioning multi-tenant Layer-2 and Layer-3 connectivity across sites is made easy.
Through the connectivity described above, Multi-Site also supports disaster recovery scenarios where IP mobility across sites is required. This entails configuring the same IP subnets in multiple sites without Layer-2 flooding across sites.
A third use case is to use Multi-Site to create a highly scalable active-active data center by using stretched Endpoint Groups (EPGs) and Bridge Domains. Layer-2 flooding is enabled between sites, allowing for live virtual machine migration. When designed properly, an endpoint could truly live anywhere with this use case.
Which is right for my data center?
Determining whether to deploy Multi-Pod or Multi-Site comes down primarily to your use case and the nature of the connectivity between the locations in question. For Multi-Pod specifically, the pods must have no more than 50 ms of round-trip latency between them. Within a single building or on a campus, this should not be an issue. If your locations are farther apart and the latency approaches or exceeds 50 ms, however, Multi-Site would be a better design choice.
Scalability is also a concern worth noting. An ACI Multi-Pod deployment can scale up to 12 pods (as of ACI 4.1) with a max of 400 leaf switches in a single deployment (200 leaf switches maximum in a single pod). If you are running close to these numbers, then deploying a separate ACI site would allow you to scale beyond these figures.
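Put together, the latency and scale criteria above can be expressed as a toy decision helper. This uses only the figures cited in this article (ACI 4.1 era: 50 ms latency budget, 12 pods and 400 leaf switches per Multi-Pod fabric); real designs weigh many more factors, such as fault isolation and change domains.

```python
def suggest_architecture(rtt_ms: float, pods: int, total_leaves: int) -> str:
    """Toy helper applying the Multi-Pod limits discussed above."""
    if rtt_ms > 50:
        return "Multi-Site"   # exceeds the Multi-Pod latency budget
    if pods > 12 or total_leaves > 400:
        return "Multi-Site"   # exceeds Multi-Pod scale limits (as of ACI 4.1)
    return "Multi-Pod"

print(suggest_architecture(10, 3, 120))   # campus data centers -> "Multi-Pod"
print(suggest_architecture(80, 2, 100))   # intercontinental -> "Multi-Site"
```

Remember that these outcomes are not mutually exclusive; as noted below, Multi-Pod fabrics can themselves be sites within a Multi-Site design.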
If data protection is a requirement, you might consider ACI Multi-Site as the inter-site VXLAN traffic can be optionally encrypted using Cisco's CloudSec technology.
It is a common misconception that Multi-Site somehow supersedes Multi-Pod or that the Multi-Pod architecture is no longer relevant. In reality, they are two separate technologies with differing use cases. What's more, Multi-Pod and Multi-Site topologies can work together! What this means is that you could have data centers all over the world, each with their own ACI Multi-Pod fabric and tied together through the Multi-Site Orchestrator. These two architectures are built to work harmoniously, so you are no longer faced with an either/or decision and will ultimately have a high degree of deployment flexibility.
Cisco ACI has been a transformative solution that has brought many data centers into the world of software-defined networking. Properly scaling and extending a data center network is a challenge in many enterprises, and ACI aims to make that process much more straightforward. The Cisco ACI Multi-Pod and Multi-Site architectures are just two of the methods an organization can use to deploy ACI and extend policy in a flexible way in a multitude of locations. Learn more about Cisco's ACI Anywhere vision.
If you'd like to know more about Multi-Pod, Multi-Site or ACI in general, check out our ACI Design Workshop or contact your WWT account manager for a demo of these technologies.