Building Network Automation into Your Organization
Managing the proliferation of applications deployed throughout an organization has become one of the largest challenges IT will face in the coming years.
Applications are everywhere you look and are driving business at a rapid pace.
Different approaches to application management are usually discussed based on employee skill sets and the maturity of the network infrastructure. Regardless of the existing environment, however, work sessions intended to define a process workflow and develop a solution tend to converge on the same two high-level strategic outcomes: migrating business-critical applications to the public cloud or developing an automation-first approach.
Both require supported application services to align and scale with the application, whether they exist on-prem or in a public cloud, meaning every organization should look to build automation into their infrastructure.
When selecting components to create an automation framework, you first need to identify several high-level process objectives. Examples include enabling developers to easily access and consume application services, reducing friction when deploying changes into a hybrid network infrastructure; enabling network operations teams to define, build and schedule those services; or providing insight and enabling action through data analytics.
Clearly identifying and agreeing upon your objectives will allow you to better adopt a unified focus when building automation into your organization.
There are three key areas within every enterprise automation strategy:
- The automation framework
- Monitoring and actionable analytics
- Corporate governance (security) and driving compliance
For this article, we will focus on the first aspect: the building blocks of a solid automation framework.
Steps to creating an automation framework
An enterprise-wide automation strategy must benefit the individual first. But how can this be accomplished within a large organization that is fragmented by function?
First, identify two or three network operation tasks that are time-consuming or repetitive, adding complexity and cost to the business. To be successful when automating operational tasks, IT leaders must develop greater transparency into the utilization and operation costs for specific network services. This involves a fundamental shift from managing technology to becoming a steward of business technology. One approach to this type of change is creating a value stream map of the organization.
Value stream mapping (VSM) is a helpful tool in identifying waste, offering ways to improve in areas critical to the business. It has emerged as a preferred way to support and implement Lean IT, which is an approach for creating cost-effective, agile and flexible process management.
After identifying automation opportunities using a method like VSM, the next step is building an extensible platform. A good framework will allow you to easily automate each task in support of the individual. Once implemented, the operations team should be able to develop and extend functionality on top of the platform by leveraging a scale-out data model for storing and accessing data (a potential candidate for microservices if the framework is distributed across a hybrid cloud).
This is similar to the approach taken by most object-oriented languages, where reusability is achieved through an inheritance model. A task used as part of a module or role is analogous to a function defined as part of a class (the program-code template that defines what those functions can and cannot do).
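To make the analogy concrete, here is a minimal sketch of how a reusable Ansible role might be invoked from a playbook. The role name, variables and addresses below are hypothetical, not from the article; overriding a role's default variables per invocation plays the part of specializing a class:

```yaml
# site.yml -- hypothetical playbook reusing a shared role.
# Role name, variables and addresses are illustrative only.
- name: Provision a load balancer for an application
  hosts: lb_servers
  roles:
    - role: app_service          # reusable, "class-like" unit of tasks
      vars:
        service_name: web-frontend
        vip_address: 10.1.20.15  # per-call values override role defaults,
        pool_members:            # much like subclass overrides in OOP
          - 10.1.30.11
          - 10.1.30.12
```

The same `app_service` role could then be reused for any other application simply by passing different variables, which is exactly the reuse the inheritance analogy describes.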
Lastly, a successful framework will need to incorporate monitoring capabilities for greater visibility into application performance with respect to business need. This drives compliance to align with key objectives and creates the foundation for an enterprise strategy that supports transparency between developers, network and security operation teams.
As automation and orchestration (A&O) tools evolve, a holistic, open automation toolset, such as Ansible by Red Hat, can be a crucial component within an organization's framework. Here, different vendors collaborate on and contribute to libraries that provide key building blocks for crafting a successful automation strategy. Benefits include ease of extensibility with multi-vendor support, the flexibility to change with the networked ecosystem and the development of integrations for provisioning core services such as IPAM, DNS or DHCP.
At WWT, we envision a framework that supports modularity by offering an abstraction over the underlying libraries at the foundation of the Ansible toolset. For reference, see this video created by Joel King, Principal Architect with WWT, which describes an example automation journey.
Bridging the gap between developers and operations
As part of an A&O platform, imagine an environment where network and security operations folks are software savvy enough to promote YAML configuration to source control, such as GitHub or Perforce. Each configuration file defines a data model that can be archived in a central datastore, such as MongoDB, as part of a *CI/CD pipeline that pushes and stores these configuration elements.
Some examples include multiple environments based on data center region (test-boston-dc, dev-boston-dc or test-aws-east, prod-aws-east); global data variables that define server properties for ESX or AWS (Amazon Web Services) resources in a virtual private cloud; service levels for license offerings when considering a software enterprise agreement for virtual machines; or even a list of web application firewall policies for different attack signatures.
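As a sketch of what one such environment file might look like, consider the following. Every key and value here is hypothetical, invented to illustrate the data-model idea rather than taken from any real deployment:

```yaml
# environments/test-boston-dc.yml -- hypothetical environment definition
# promoted to source control by the operations team.
environment: test-boston-dc
region: boston
platform: esx
vm_defaults:
  cpu: 4
  memory_gb: 16
  license_tier: standard        # maps to an enterprise-agreement service level
waf_policies:                   # firewall policies available in this environment
  - owasp-top10-baseline
  - sql-injection-strict
```

Because each file is plain YAML under version control, changes are reviewable, auditable and easy for a pipeline to parse into the central datastore.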
Now let's extend this to application developers, enabling them to fill out a simple web page or promote a work-order YAML to access the configuration resources developed by the operations folks. The outcome is the execution of an automated task (an Ansible playbook).
In an example workflow, once a developer has completed coding, they might select an environment to deploy to, identify the license level for a clustered pair of virtual machines and select a security policy to protect the application's HTTP/S traffic against vulnerabilities. All of this happens within the CI/CD pipeline, which chains these tasks for execution.
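A work order like the one described could be as small as the following YAML fragment. All field names and values are hypothetical, chosen only to mirror the three selections in the workflow above:

```yaml
# work-order.yml -- hypothetical request a developer promotes to source control.
application: billing-api
environment: prod-aws-east        # chosen from operations-defined environments
license_level: best               # license tier for the clustered pair
cluster_size: 2                   # clustered pair of virtual machines
waf_policy: owasp-top10-baseline  # policy protecting HTTP/S traffic
```

The pipeline can then pass this file to the playbook run, so the developer never has to know how the underlying services are actually provisioned.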
Increase efficiency and collaboration
If we can enable developers to deploy application services in a simple manner and empower the network and security teams to develop the software that builds those services, the result is greater efficiency and a more aligned process between developers and operations. Think of it as providing tools in a toolbox for developers.
Building on this, further integrations for core services give the operations teams greater functionality and visibility when deploying or managing those services. Some examples include IPAM to assign the next available IP address from an address pool for available subnets via DHCP, creating and updating DNS records, or periodically scheduling health checks for the application and its supporting services (possibly leveraging Splunk).
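An IPAM-plus-DNS integration might look like the sketch below. The module names are placeholders for whatever IPAM and DNS modules your environment actually provides (for example, vendor-specific Ansible collections), and the hostnames and networks are invented:

```yaml
# register-service.yml -- illustrative only; my_ipam_next_ip and my_dns_record
# are hypothetical module names standing in for your real IPAM/DNS modules.
- name: Register a new service address
  hosts: localhost
  tasks:
    - name: Reserve the next available IP from the pool
      my_ipam_next_ip:                  # hypothetical IPAM module
        network: 10.1.30.0/24
      register: reserved

    - name: Create an A record for the new service
      my_dns_record:                    # hypothetical DNS module
        name: billing-api.example.com
        type: A
        value: "{{ reserved.address }}"
```

Chaining address assignment and DNS registration in one play is what keeps these core services aligned with the application as it moves between environments.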
With all of the above in mind, a defined enterprise strategy around automation will accomplish the goal of aligning applications with services, thus enabling developers to manage resources defined by the network and security operations teams. Then automation takes care of the rest.
Continue the conversation by posting your automation challenges or successes in the comment section below.
To learn how to create an actionable analytics solution, check out the second article in this series, Creating Your Actionable Analytics Solution.
*CI/CD, or continuous integration/continuous delivery, is a process in which developers merge working copies of software into a shared repository and ensure that delivery and deployment are invoked in an automated manner. Automation servers monitor for changes and chain tasks together using a concept called a pipeline. One example of an automation server is Jenkins.
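A pipeline of the kind this footnote describes can itself be expressed as YAML. The sketch below uses a GitLab CI-style definition purely for illustration (stage names, file names and the manual gate are all assumptions, and the same chaining could be built in Jenkins or any other automation server):

```yaml
# .gitlab-ci.yml -- hypothetical pipeline chaining validation and deployment.
stages:
  - validate
  - deploy

validate_config:
  stage: validate
  script:
    - ansible-playbook --syntax-check site.yml   # fail fast on broken playbooks

deploy_services:
  stage: deploy
  script:
    - ansible-playbook site.yml -e @work-order.yml  # extra vars from a YAML file
  when: manual   # gate deployment behind an explicit approval
```

Each commit to the repository triggers the pipeline, which is exactly the monitor-and-chain behavior described above.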