In this ATC Insight

Summary

Our Infrastructure Services (IS) organization at World Wide Technology helps customers deploy new technologies and solutions into production. In this engagement, the team helped a major enterprise customer refresh and consolidate its data centers, developing a phased approach that enabled a reliable, seamless migration against an extremely tight deadline with no allowance for downtime.

To keep downtime as close to zero as possible, our IS organization used the Advanced Technology Center (ATC) to test the data center migration steps before attempting them in the approved maintenance windows.

This ATC Insight covers several aspects of the data center consolidation, including:

  • Cisco ACI Multi-Pod framework (Initial Phase)
  • Workload Migration Framework
  • F5 Virtual IP Migration
  • Gateway Migration
  • Circuit Cut

If you have a data center consolidation or migration coming up and want to gather more ideas on what to expect, please keep reading.

ATC Insight

Background on Customer Challenges

Our customer's business units included more than 15 leading wholesale and retail brands. You can imagine the application and workload sprawl in the data centers supporting these business units. The idea was to organize workloads and applications around five core areas:

  • Auto Auctions
  • Financial Services
  • Media
  • Software
  • International

Three different migration patterns needed to be accounted for during this journey:

  • single workload
  • local subnet
  • portable subnet

In the single workload pattern, application dependencies could sustain a moderate amount of latency, and the team needed to be able to re-configure and re-deploy their workloads in the new data center. The applications made up of these workloads leveraged DNS for all internal and external traffic. During this migration, latency could be observed because of hairpinning when not all of the workloads were grouped together; the requirement was therefore to keep these workloads grouped together wherever possible to avoid that latency.

In the local subnet pattern, the customer wanted to cut over all the workloads for a particular subnet from Data Center A to Data Center B at once. The challenge was getting change management approval to move a huge number of workloads at the same time.

In the portable subnet pattern, the subnets in question were the "public facing" subnets advertised to the internet. These required a specific migration strategy during the circuit cut that had to be clearly defined and tested beforehand to eliminate any outage time.

Cisco's ACI Multi-Pod as a framework
Figure 1

The above diagram (Figure 1) shows a high-level view of the solution, in which Cisco's ACI Multi-Pod was leveraged as a framework to lay the foundation for the customer's data center consolidation goals.

The teams in the Advanced Technology Center (ATC), along with the Global Service Provider Team and Infrastructure Team, mimicked the customer environment described above in order to make the testing a reality. All of the physical and logical gear was designed, built, and executed on within the ATC for this testing effort. Even the customer's existing tools, like vMotion, were made available to help demonstrate the process and gain knowledge before the real maintenance windows occurred.

Data Center A West Coast and Data Center B Middle State
Figure 2

Data Center A (DC A), shown in (Figure 2), is where the servers needed to migrate from. The client's intention was to decommission or "sunset" this data center shortly after the consolidation effort was completed. Data Center B (DC B) is where all active servers would sit after the consolidation. This is where Cisco ACI Multi-Pod played a big part: it interconnected the two data centers and offered the freedom to migrate the various application components across separate Pods without worrying about building a traditional layer 2 extension between the data centers.

Cisco ACI Multi-Pod framework (Initial Phase)

The entire network (including both Data Center A and Data Center B) runs as a single large fabric from an operational perspective; however, ACI Multi-Pod introduces specific enhancements to isolate the failure domains between Pods, contributing to increased overall design resiliency. This is achieved by running separate instances of the fabric control planes (IS-IS, COOP, MP-BGP) in each Pod.

At the same time, the tenant change domain is common for all the Pods (DC A and DC B), since a configuration or policy definition applied to any of the APIC nodes would be propagated to all the Pods managed by the single APIC cluster. This behavior is what greatly simplifies the operational aspects of the solution.

ACI Multi-Pod is a migration tool that can be used so that workloads migrate seamlessly from DC A to DC B. The Inter-Pod Network (IPN) represents an extension of the ACI infrastructure network, ensuring VXLAN tunnels can be established across Pods to allow endpoint communication.

Inside each ACI Pod (DC A and DC B), IS-IS is the infrastructure routing protocol used by the leaf and spine nodes to peer with each other and exchange IP information for locally defined loopback interfaces (usually referred to as VTEP addresses).  

During the auto-provisioning process for the nodes belonging to a Pod, the APIC assigns one (or more) IP addresses to the loopback interfaces of the leaf and spine nodes that are part of the Pod. All of those IP addresses come from an IP pool that is specified during the boot-up process of the first APIC node and is known as the 'TEP pool'.

The spines in each Pod establish OSPF peering with the directly connected IPN devices in order to advertise the TEP pool prefix for the local Pod. As a consequence, the IPN devices install equal-cost routes for the TEP pools of the different Pods in their routing tables.

At the same time, the TEP pool prefixes for remote Pods received by the spines via OSPF are redistributed into the IS-IS process of each Pod so that the leaf nodes can install them in their routing tables (those routes are part of the 'overlay-1' VRF representing the infrastructure VRF).
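
Because the IPN sits outside the APIC's control, its devices have to be configured by hand or by script. Below is a minimal, hedged sketch of pushing representative IPN-facing configuration to one Nexus IPN switch with Netmiko; the hostname, interface, VLAN tag, addressing, and OSPF process name are hypothetical, and a real IPN also needs items not shown here (DHCP relay toward the APICs and a PIM bidir rendezvous point for BUM traffic).

```python
# Hedged sketch only: hostnames, interfaces, VLAN tag, addressing, and the
# OSPF process name are hypothetical and must come from the actual LLD.
from netmiko import ConnectHandler

ipn_switch = {
    "device_type": "cisco_nxos",
    "host": "ipn-dca-01.example.net",   # hypothetical IPN node at DC A
    "username": "admin",
    "password": "REPLACE_ME",
}

# Spine-facing sub-interface: dot1q tag 4, jumbo MTU on the parent link,
# point-to-point OSPF toward the spine, and PIM for the BUM multicast tree.
config_lines = [
    "feature ospf",
    "feature pim",
    "router ospf IPN",
    "interface Ethernet1/1",
    "  mtu 9216",
    "  no shutdown",
    "interface Ethernet1/1.4",
    "  encapsulation dot1q 4",
    "  ip address 172.16.1.1/30",
    "  ip ospf network point-to-point",
    "  ip router ospf IPN area 0.0.0.0",
    "  ip pim sparse-mode",
    "  no shutdown",
]

conn = ConnectHandler(**ipn_switch)
print(conn.send_config_set(config_lines))
# Quick check that the OSPF adjacency toward the spine comes up
print(conn.send_command("show ip ospf neighbors"))
conn.disconnect()
```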

(Figure 2) depicts the initial integration between DC A and DC B. The Nexus 7Ks at DC A are physically connected to the ACI leaf switches, and this connectivity serves the purpose of both the Layer 2 Out and the Layer 3 Out.

In the initial stage, the default gateway of the servers resided on the Nexus 7K, while inbound communication was still maintained via DC A, coming through the DC A circuits, the firewall, and eventually the Nexus 7K.

There was no change to the location of the VIPs during the initial phase. The ultimate purpose of the integration phase was to verify the ACI Multi-Pod integration and the connectivity between the Nexus 7K at DC A and the Cisco ACI leaf switches.

Additionally, there were no workloads to move in the initial phase. The customer was advised to verify the full functionality of this integration, including, but not limited to, the following checks (a verification sketch follows the list):

  • Verify Configuration
  • Verify fabric membership and topology
  • Verify the IPN
  • Verify the external TEP interfaces on the spine switch
  • Verify spine MP-BGP EVPN


Data Center A West Coast and Data Center B Middle State
Figure 3

Workload Migration Framework 

The deployment of an ACI Multi-Pod fabric made it possible to extend Bridge Domain (BD) connectivity across separate Pods, providing flexibility in where endpoints belonging to a given BD could be connected.

At the same time, the ACI Multi-Pod fabric supported live mobility for endpoints between leaf nodes of the same Pod or even across separate Pods. 

The step-by-step process required to minimize the traffic outage during a live workload migration event is depicted in (Figure 3) and described below in ten steps:

  1. The VM [WEB-01] migrates from DC A to DC B. Remember that DC A is the data center that needs to be decommissioned at end state.
  2. Once the migration is completed, the leaf node in DC B discovers WEB-01 as locally connected and sends a COOP update message to the local spines.
  3. The spine node that receives the COOP message updates WEB-01's info in the COOP database, replicates the information to the other local spines, and sends an MP-BGP EVPN update to the spines in the remote Pod (DC A).
  4. The spines in DC A receive the EVPN update and add to their local COOP database the information that WEB-01 is now reachable via the Proxy VTEP address identifying the spines in DC B.
  5. The spines in DC A send a control plane message to the leaf in DC A that was the old known location for WEB-01. As a consequence, that leaf installs a bounce entry for WEB-01 pointing to the local spines' Proxy VTEP address.
  6. Any workload still in DC A that has not moved yet keeps sending traffic destined to WEB-01 to the old location (the leaf at DC A).
  7. The DC A leaf has the bounce entry; hence it encapsulates the received traffic destined to WEB-01 toward the local spines.
  8. The local spine receiving the packet decapsulates it and performs a lookup in the COOP database. It now has updated information about WEB-01, so it re-encapsulates the traffic toward the remote spine nodes.
  9. The receiving spine in DC B also has updated information about WEB-01's location, so it encapsulates the traffic toward the leaf at DC B.
  10. Once the leaf at DC B receives the packet, it learns the location of the still-existing workload at DC A that sourced the traffic.

The above 10 steps are what actually happened when moving WEB-01 from DC A to DC B. Remember that up to this point we had not moved a gateway; the gateway for WEB-01 still existed on the Nexus 7K residing in DC A.
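
After a move like this, it is useful to confirm from the APIC where the fabric is now learning the endpoint. The sketch below is one hedged way to do that via the fvCEp (client endpoint) class of the APIC REST API; the APIC address, credentials, and WEB-01's MAC address are placeholders for the values used in the actual test plan.

```python
# Hedged sketch: the APIC address, credentials, and WEB-01's MAC address
# are placeholders for the values used in the actual test plan.
import requests

APIC = "https://apic.example.net"
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "REPLACE_ME"}}}
WEB01_MAC = "00:50:56:AA:BB:CC"   # hypothetical MAC of the WEB-01 VM

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# Look WEB-01 up in the fabric endpoint table (fvCEp); the DN shows which
# tenant/EPG it belongs to and where it is currently learned.
resp = session.get(
    f"{APIC}/api/node/class/fvCEp.json",
    params={"query-target-filter": f'eq(fvCEp.mac,"{WEB01_MAC}")'},
)
resp.raise_for_status()

for item in resp.json()["imdata"]:
    ep = item["fvCEp"]["attributes"]
    print("Endpoint DN:", ep["dn"])
    print("IP address :", ep.get("ip"))
    print("Encap      :", ep.get("encap"))
```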

Outcome of the Workload Migration Test

*Please review the full test document, which is attached in the documentation section, for a deeper walk-through.


F5 Virtual IP Migration


A VIP is a configuration object on the F5 BIG-IP that ties together a destination IP:port combination and processes traffic for that combination. Once traffic enters the F5 device, it is then distributed to the members of the pool. As you have seen earlier, WEB-01 is considered to be one of these members. In the ATC LAB, we deployed a few of these servers as virtual instances.
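
For readers less familiar with BIG-IP objects, the sketch below shows one hedged way a VIP and its pool might be defined through the iControl REST API. The BIG-IP hostname, credentials, object names, and addresses are placeholder examples rather than the customer's actual configuration.

```python
# Hedged sketch: the BIG-IP address, credentials, object names, member IPs,
# and the VIP address are invented examples.
import requests

BIGIP = "https://bigip-dcb.example.net"

session = requests.Session()
session.auth = ("admin", "REPLACE_ME")
session.verify = False   # lab only

# Pool whose members are the workloads behind the VIP (WEB-01 among them)
pool = {
    "name": "web_pool",
    "monitor": "http",
    "members": [
        {"name": "10.10.10.11:80"},
        {"name": "10.10.10.12:80"},
    ],
}
session.post(f"{BIGIP}/mgmt/tm/ltm/pool", json=pool).raise_for_status()

# The VIP itself: the destination IP:port that external users actually hit
virtual = {
    "name": "web_vip",
    "destination": "192.0.2.100:80",
    "ipProtocol": "tcp",
    "pool": "web_pool",
    "sourceAddressTranslation": {"type": "automap"},
}
session.post(f"{BIGIP}/mgmt/tm/ltm/virtual", json=virtual).raise_for_status()
```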

Moving the virtual instance to Data Center B using SRM was one thing; however, the front line for external users communicating with this server would be the VIP representing it.

Migrating the VIP is one of the most essential steps in this process. Testing the VIP migration added a lot of value for our customer because it can be a major pain point in a data center consolidation or migration effort if not done correctly.

The VIP was considered to be a new endpoint that would be learned by the ACI leaf at DC B. The inbound communication would still come via the DC A circuit; however, when the traffic reached the Nexus 7K, the Nexus 7K would ARP for the VIP as an endpoint. The ARP would travel over the IPN, as mentioned earlier, by leveraging the multicast tree. See (Figure 4) below:

Data Center A West Coast and Data Center B Middle State
Figure 4


Gateway Migration

As pointed out earlier, this migration consisted of interconnecting the existing brownfield network in DC A (based on STP and vPC technology) to a newly developed ACI POD with the end goal of migrating workloads from DC A to DC B.

In order to accomplish this application migration task, we had to map traditional networking concepts (VLANs, IP subnets, VRFs, etc.) to new ACI constructs like endpoint groups (EPGs), Bridge Domains, and Private Networks. The diagram below shows the ACI network-centric migration methodology, which highlights the major steps required for migrating workloads from the DC A network to the DC B network:

Deployment, Integration, Migration


The first step was the deployment of the new ACI Pod (the Greenfield Pod), as described in the initial phase.

The second step was the integration between the existing DC network infrastructure in DC A (usually called the "Brownfield" network) and the new ACI POD. L2 and L3 connectivity between the two networks was required to allow successful workload migration across the two network infrastructures.

The final step consisted of the migration of workloads between the Brownfield and Greenfield networks. This workload migration process was likely to take several months to complete, so communication between the Greenfield and Brownfield networks via the L2 and L3 connections previously mentioned was key to set up during this phase.

*As a reminder, the ultimate purpose of ACI Multi-Pod in DC A was simply to be leveraged as a modern VXLAN implementation to help extend the VLANs and avoid implementing alternative technologies such as OTV.
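
To make the VLAN-to-ACI mapping more concrete, here is a hedged sketch of what a network-centric mapping for a single brownfield VLAN could look like when pushed to the APIC REST API: a hypothetical VLAN 10 becomes a Bridge Domain and EPG under an example tenant. The names, addressing, and credentials are invented for illustration, the BD is left L2-only because the gateway still lives on the Nexus 7K at this stage, and the domain/VLAN bindings a real EPG also needs are omitted for brevity.

```python
# Hedged sketch: tenant, VRF, BD, and EPG names are invented; the BD stays
# L2-only (unicastRoute "no") because the gateway is still on the Nexus 7K,
# and the physical-domain/VLAN bindings an EPG also needs are omitted.
import requests

APIC = "https://apic.example.net"
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "REPLACE_ME"}}}

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# One POST builds the tenant, VRF (Private Network), Bridge Domain, app
# profile, and EPG, and binds the EPG to the BD (VLAN 10's new home).
payload = {
    "fvTenant": {
        "attributes": {"name": "MIGRATION"},
        "children": [
            {"fvCtx": {"attributes": {"name": "PROD_VRF"}}},
            {"fvBD": {
                "attributes": {
                    "name": "VLAN10_BD",
                    "unicastRoute": "no",   # gateway still on the 7K
                    "arpFlood": "yes",      # behave like the legacy VLAN
                },
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "PROD_VRF"}}},
                ],
            }},
            {"fvAp": {
                "attributes": {"name": "LEGACY_APP"},
                "children": [
                    {"fvAEPg": {
                        "attributes": {"name": "VLAN10_EPG"},
                        "children": [
                            {"fvRsBd": {"attributes": {"tnFvBDName": "VLAN10_BD"}}},
                        ],
                    }},
                ],
            }},
        ],
    }
}
session.post(f"{APIC}/api/mo/uni.json", json=payload).raise_for_status()
```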

Once all (or the majority) of the workloads belonging to an IP subnet were migrated into the ACI fabric at DC B, it was then possible to migrate the default gateway into the ACI domain. This migration was done by turning on ACI routing in the Bridge Domain and de-configuring the default gateway function on the Nexus 7K at DC A by shutting down the SVI interface.

The good news was that ACI allowed the administrator to statically configure the MAC address associated with the default gateway defined for a specific bridge domain. Therefore, it was possible to use the same MAC address previously used for the default gateway on the Nexus 7K, so the gateway move was completely seamless for the workloads connected to the ACI fabric (that is, there was no need to refresh their ARP cache entries). This is exactly what we did in the ATC LAB. Please refer to (Figure 5) below:

Data Center A West Coast and Data Center B Middle State
Figure 5
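
The sketch below shows one hedged way this gateway cutover could be scripted: enable unicast routing on the bridge domain, define the pervasive gateway subnet, reuse the legacy gateway MAC, and then shut the SVI on the DC A Nexus 7K. The tenant and BD names, MAC address, subnet, device hostnames, and credentials are all placeholders carried over from the earlier sketches.

```python
# Hedged sketch: tenant/BD names, the gateway MAC and subnet, and the Nexus
# 7K details are placeholders carried over from the earlier sketches.
import requests
from netmiko import ConnectHandler

APIC = "https://apic.example.net"
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "REPLACE_ME"}}}

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# Step 1: turn the BD into the default gateway, reusing the legacy MAC so
# connected workloads never have to refresh their ARP entries.
bd_update = {
    "fvBD": {
        "attributes": {
            "name": "VLAN10_BD",
            "unicastRoute": "yes",
            "mac": "00:23:04:EE:BE:01",   # hypothetical: copied from the 7K gateway
        },
        "children": [
            {"fvSubnet": {"attributes": {"ip": "10.10.10.1/24"}}},
        ],
    }
}
session.post(
    f"{APIC}/api/mo/uni/tn-MIGRATION/BD-VLAN10_BD.json", json=bd_update
).raise_for_status()

# Step 2: de-configure the old gateway by shutting the SVI on the DC A 7K
n7k = {
    "device_type": "cisco_nxos",
    "host": "n7k-dca-01.example.net",
    "username": "admin",
    "password": "REPLACE_ME",
}
conn = ConnectHandler(**n7k)
print(conn.send_config_set(["interface Vlan10", "  shutdown"]))
conn.disconnect()
```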


Circuit Cut

The circuit cut was the final phase of this journey. The ultimate goal of this phase was to cut the DC A circuit and make the DC B circuit the functional one. Obviously, this entailed withdrawing the advertisement of the public subnets from DC A and advertising the public subnets out of DC B. This had to be achieved in coordination with the ISPs to make sure the new ISP accepted the newly advertised public subnets. These public subnets are what our customer considered to be the portable subnets.

In the ATC LAB, we shut down the BGP SVI. This way, DC A would not receive or advertise any routes towards the Internet.

Also note there were other BGP configuration activities that were carried out at the border routers. However, these changes were not complicated at all, as they were simply "network statements" under the BGP routing process.
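
A hedged sketch of those two changes, pushed with Netmiko, is shown below: shutting the BGP-facing SVI at DC A and originating the portable (public) prefixes with network statements at the DC B border router. Hostnames, VLAN and interface numbers, the ASN, and the prefixes are hypothetical examples of the real change.

```python
# Hedged sketch: hostnames, the BGP-facing VLAN, the ASN, and the portable
# prefixes are hypothetical examples of the real change.
from netmiko import ConnectHandler

dca_border = {
    "device_type": "cisco_nxos",
    "host": "border-dca-01.example.net",
    "username": "admin",
    "password": "REPLACE_ME",
}
dcb_border = {
    "device_type": "cisco_nxos",
    "host": "border-dcb-01.example.net",
    "username": "admin",
    "password": "REPLACE_ME",
}

# DC A: shut the SVI carrying the eBGP session so DC A neither receives
# nor advertises any routes toward the internet.
conn = ConnectHandler(**dca_border)
print(conn.send_config_set(["interface Vlan100", "  shutdown"]))
conn.disconnect()

# DC B: originate the portable (public) subnets toward the new ISP with
# plain network statements under the BGP process.
portable_subnets = ["198.51.100.0/24", "203.0.113.0/24"]   # example prefixes
bgp_config = ["router bgp 65001", "  address-family ipv4 unicast"]
bgp_config += [f"    network {prefix}" for prefix in portable_subnets]

conn = ConnectHandler(**dcb_border)
print(conn.send_config_set(bgp_config))
print(conn.send_command("show ip bgp summary"))
conn.disconnect()
```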

Another note from this testing in the ATC: after the circuit cut, the Cisco Nexus 7K at DC A was no longer receiving the gateway advertisement. The reason behind this is that Cisco ACI does not, by default, leak routes from one L3Out to another L3Out. This is not a bad thing; we just wanted to make sure our customer understood this behavior by showing it in the lab.

Then we turned on BGP and OSPF at the DC B Firewall. When this happened, the default route was propagated towards the Cisco ACI Pod, which received it from the DC B Firewall.

We also noticed that the DC B Core was receiving the migrated subnets from the DC B Firewall. This was valuable information, since it ensured the firewalls in the other domains learned how to reach the workloads migrated from DC A to DC B.
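
As a final check in the ATC LAB, those two observations could be verified from the DC B Core with a short script like the hedged sketch below; the hostname, credentials, sample prefix, and the assumption that the Core peers OSPF with the DC B Firewall are all placeholders to adapt to the actual LLD/ATP.

```python
# Hedged sketch: the hostname, credentials, and sample prefix are
# placeholders, and the OSPF check assumes the Core peers OSPF with the
# DC B Firewall as described above.
from netmiko import ConnectHandler

dcb_core = {
    "device_type": "cisco_nxos",
    "host": "core-dcb-01.example.net",
    "username": "admin",
    "password": "REPLACE_ME",
}

checks = [
    "show ip route 10.10.10.0/24",   # an example migrated subnet
    "show ip route 0.0.0.0/0",       # default route toward the internet
    "show ip ospf neighbors",        # adjacency toward the DC B Firewall
]

conn = ConnectHandler(**dcb_core)
for command in checks:
    print(f"### {command}")
    print(conn.send_command(command))
conn.disconnect()
```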

Conclusion

The Advanced Technology Center (ATC) proved to be very valuable to this effort and to our customer.  The testing platform in the ATC helped confirm most aspects of our Low Level Design (LLD) and our Acceptance Test Procedures (ATP) for the data center consolidation project.  Ultimately, we helped our customer become more familiar with the consolidation and migration process which helped them and us be more successful in the project.
