Cisco ACI: Design to Automate
We all understand the power of SDN automation, but to fully leverage it, we should consider automation as part of the design, not an afterthought.
In This Article
The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.
– Bill Gates
Automation is one of the primary reasons many of us choose to deploy a next-gen SDN solution such as Cisco Application Centric Infrastructure (ACI). Decoupling network software from hardware helps businesses reduce OpEx, increase operational efficiency and application agility, and reduce human errors.
Bill Gates pointed out the importance of an efficient operation. We can take it further to stress the importance of an efficient and optimized design. If we apply automation to an inefficient design, it can potentially magnify the underlying inefficiency as well.
This paper provides some insights into the importance of ACI logical design, covering the following:
- Some pitfalls in ACI logical designs.
- Design patterns in ACI.
- Applying design patterns in ACI automation.
Why “design to automate”?
“Design to automate” emphasizes the importance of simplified logical design, repeatable patterns and the flexibility to optimize automation. Automation is a critical component of Day-2 operations; optimizing it helps ensure long-term efficiency and realize the real value of ACI.
Instead of thinking about automation or even operation after ACI deployment, we should consider both factors as part of the design. “Optimized design for ease of operation and automation” should be a design requirement for every software-defined solution. It will make building automation solutions much easier later on.
Before diving into the core concepts, let’s first review a few common design pitfalls in ACI.
Common ACI design pitfalls
One of the most common design pitfalls is overlooking naming convention standards at the fabric level. Yet this is one of the simplest and most effective methods to ensure consistency.
For example, because ACI names are case sensitive, “WEB_APP,” “Web_App” and “web_app” refer to different objects. We want to make sure the naming convention is consistent throughout the fabric, with few exceptions. In this example, both “WEB_APP” and “web_app” are good choices; “Web_App” is harder to enforce as a standard because mixed case introduces more variance.
Another pitfall is failing to consider object reusability during design, because it requires more planning and a better understanding of the systems connected to ACI. For example, vPC Interface Policy Groups (IPGs) are typically created as unique objects referencing the end host’s name, which has a few drawbacks:
- If the end host is repurposed or renamed, a new vPC IPG must be created, even if the settings are unchanged.
- The same IPG cannot be used by any other Leaf ports. The overall number of policies scales linearly with the growth of Leaf ports.
- Naming conventions are more difficult to follow because end hosts’ names are usually inconsistent.
To leverage the full potential of ACI, it’s worth spending some extra effort to optimize our design for automation and operation purposes. We will look at a few examples to explain the benefits from ACI’s perspective, though the concept itself applies to any software-defined solution.
Pattern-driven design
A pattern-driven design involves identifying and designing repeatable, simplified and scalable patterns while meeting technical requirements for traffic forwarding.
The first step of ACI pattern design is to standardize naming conventions at both the global and configuration levels. In ACI, policy name consistency is essential because policies are immutable (they cannot be renamed once created). To better explain design patterns, we’ll use different vPC IPG examples, and here is a quick refresher of vPC IPGs:
An IPG represents a group of configurations to be applied to an interface. In ACI, we can reuse IPGs as templates.
We'll compare the following three vPC IPG examples in detail:
- VPC_INT01_UCS_FI: Reusable for any interface 1/1 connected to UCS FIs.
- VPC_201-202_INT01_UCS_FI: Mirrors legacy interface configuration style.
- VPC_MY_UCS01_FI_A: Uniquely defined based on the name of the end system directly connected to a Leaf port.
We are assuming that at the global level, the following naming conventions have been appropriately defined:
- All letters are capitalized.
- Word blocks are concatenated with an underscore (“_”).
- Ranges are represented with a hyphen (“-”).
Design pattern #1: Interface and infrastructure based names
Pattern #1 is one of the most optimal IPG designs for 2 x vPC connections with the following characteristics:
- Repeatable for interface 1/1 connecting to UCS FIs (regardless of A or B side).
- A lower degree of variance in the inputs required from the end user, which allows for simpler automation. (We only need to know the interface number and whether the interface connects to UCS FIs; we can then select a specific template for automation.)
- The naming convention includes the specific interface number and the attached infrastructure (UCS_FI in this case), which makes operation and troubleshooting more straightforward.
- “UCS_FI” implies a set of interface policies (a template) to be applied, such as port speed, CDP/LLDP/LACP settings and associated VLANs, which allows for a more templatized design.
Design pattern #2: Node ID and interface based names
Design pattern #2 mirrors how a traditional switch is configured (interface by interface). It is a variation of design pattern #1, except that we added leaf node IDs, which means the number of IPGs = (number of leaf pairs) x (port count per leaf).
The downside is that it is less repeatable and scalable than pattern #1, but it is still adequate for automation.
Design pattern #3: Unique IPGs
Pattern #3 defines each IPG uniquely based on the systems connected to the Leaf port. Though it offers excellent granular control and some ease of troubleshooting, it is challenging to automate, if not impractical.
For operation, this design pattern often mandates a “Naming Standard” document so that standards can be followed consistently despite the high degree of variance in the naming convention.
Now that we’ve discussed three different vPC IPG design patterns, we’ll look at how they are applied to automation and compare the differences. We’ll only compare pattern #1 and pattern #3, since pattern #2 is just a variation of #1.
Automation with templates
Below is the Ansible module to create vPC IPGs:
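A minimal sketch of such a task, using the cisco.aci.aci_interface_policy_leaf_policy_group module from the Cisco ACI collection (lag_type: node creates a vPC policy group); the APIC connection variables, interface policy names and AEP name shown here are placeholders:

```yaml
- name: Create a vPC Interface Policy Group
  cisco.aci.aci_interface_policy_leaf_policy_group:
    host: "{{ apic_host }}"
    username: "{{ apic_user }}"
    password: "{{ apic_password }}"
    lag_type: node                      # "node" = vPC policy group
    policy_group: VPC_MY_UCS01_FI_A
    link_level_policy: 10G_AUTO         # pre-created interface policies
    cdp_policy: CDP_ENABLED
    lldp_policy: LLDP_ENABLED
    port_channel_policy: LACP_ACTIVE
    aep: UCS_AAEP
    state: present
```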
Imagine that we have to create 48 of them for one pair of Leaf switches. We would run this task at least 48 times.
Using Design Pattern #1, our Ansible playbook would look like below (partial):
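A partial sketch of what such a playbook might look like, again assuming the cisco.aci collection; the name is templatized on the interface number, and the shared UCS_FI_* policy and AEP names are placeholders for pre-defined templates:

```yaml
- name: Create reusable UCS FI vPC IPGs, one per interface number
  cisco.aci.aci_interface_policy_leaf_policy_group:
    host: "{{ apic_host }}"
    username: "{{ apic_user }}"
    password: "{{ apic_password }}"
    lag_type: node
    # item=1 yields VPC_INT01_UCS_FI, reusable on any Leaf pair
    policy_group: "VPC_INT{{ '%02d' | format(item) }}_UCS_FI"
    link_level_policy: UCS_FI_LINK      # one shared policy template
    cdp_policy: UCS_FI_CDP
    lldp_policy: UCS_FI_LLDP
    port_channel_policy: UCS_FI_LACP
    aep: UCS_FI_AAEP
    state: present
  loop: "{{ range(1, 49) | list }}"     # interfaces 1/1 through 1/48
```

A single loop covers all 48 ports, and the resulting IPGs are reusable across every Leaf pair in the fabric.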
To create UCS FI policies, we only require the user to specify the interface number. At most, we need 48 vPC policy groups to cover all Leafs, because IPG policies are reusable across different Leafs. This pattern is ideal for greenfield compute deployment.
Note: This design pattern is not limited to UCS_FI, but rather any 2 x vPC connected devices. The other common design patterns include ESXi, HCI or even bare-metal servers. Again, including the types of infrastructure in the naming convention helps align with domain and VLAN pool allocation.
Using Design Pattern #3, our Ansible template would look like below (partial):
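A partial sketch of the same task under pattern #3, assuming the cisco.aci collection; every variable here (ipg_name, link_policy and so on) is a hypothetical per-run user input rather than a value drawn from a template:

```yaml
- name: Create a unique vPC IPG per connected end system
  cisco.aci.aci_interface_policy_leaf_policy_group:
    host: "{{ apic_host }}"
    username: "{{ apic_user }}"
    password: "{{ apic_password }}"
    lag_type: node
    policy_group: "{{ ipg_name }}"            # e.g. VPC_MY_UCS01_FI_A, typed by the user
    link_level_policy: "{{ link_policy }}"    # every policy supplied case by case
    cdp_policy: "{{ cdp_policy }}"
    lldp_policy: "{{ lldp_policy }}"
    port_channel_policy: "{{ pc_policy }}"
    aep: "{{ aep_name }}"
    state: present
```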
Note that we are no longer using a template variable because the IPGs will be created case by case. The user must enter all inputs and make sure that the naming convention standards are strictly followed. Because of the high degree of manual effort, mistakes are more likely to appear.
As shown above, pattern #1 allows for a more templatized, repeatable and scalable design than pattern #3. Both patterns can be adjusted to fit different scenarios, but design pattern #1 is still the better option in most cases.
Lastly, design patterns apply to all areas of ACI logical design. The vPC IPG access policy is just one of many ACI objects with a high degree of design flexibility.
Final thoughts on design and automation
Before we start deploying more automation in the data center, let’s not overlook the opportunity to unleash the full potential of software-defined solutions by implementing an automation-friendly design first. Don’t let automation and operation become an afterthought.
Software design patterns may be a new concept to network engineers, but an important one to understand to maximize operational efficiency. In traditional networking, we would put together many standard documents, configuration templates and guides in Word and Excel. With software-defined solutions, standards and templates should become part of the design itself. They should be maintained and updated via a centralized repository through team collaboration; this will be an essential step towards Infrastructure as Code (IaC).
Lastly, it is important to keep in mind that automating a complex configuration does not make the configuration any less complex. Instead, it hides the complexity by introducing complex logic into automation. A well-designed automation solution should be easy to maintain both at the code level and the configuration level.