
In the wonderful world of investing in applications and business services, the more things change, the more they remain the same. Despite the surge toward multicloud, containers and microservices, we still need to know who is consuming what, how much, how often and how important it is to each consumer's bottom line.

Without this sort of information, at best we cannot make informed decisions about what to upgrade or move first, and at worst we are spending far too much on applications.

An application is more than the sum of its parts

As Figure 1 below illustrates, a lot goes into application design. From aesthetics, cosmetics and ease of use to management and integration with APIs, many aspects should be considered up front. Usually the first consideration is the business function or purpose, and perhaps the most important one falls in the last mile of delivery to consumers. Regardless of where applications live, we must still concern ourselves with a multitude of considerations for each app we present to end users.

Figure 1: Factors influencing end-user design

As we shift from capital expense (CAPEX) to operating expense (OPEX), and perhaps even move applications between on-premises and various cloud platforms, we need to know as much as we can to make informed decisions and avoid driving cost or experience outside acceptable ranges.

The types of information we need to know include, but are not limited to:

  • applications that are consumers of other applications;
  • number of users;
  • frequency of communications;
  • amount and type of data passing between systems;
  • types of systems supporting them; and
  • direction of communication.

This information then becomes an indispensable asset for:

  • assessing risk;
  • investing in security;
  • implementing segmentation without breaking apps;
  • high availability;
  • business continuity;
  • application development justification;
  • application performance/availability/testing/monitoring;
  • business impact analysis; and
  • potential change and impact avoidance, among others.

All of the above is needed just to manage or modify the current application estate. Add to that the decision-making required to evolve to next-generation data centers, or to get out of the data center ownership business altogether, and you will no doubt find you are missing many important data points.
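To make those data points concrete, here is a minimal sketch in Python of how an application dependency record and its business context might be captured. The field names are illustrative assumptions, not any particular discovery tool's schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DependencyRecord:
    """One observed relationship between a consumer and a provider application."""
    consumer: str              # application (or user group) doing the consuming
    provider: str              # application or service being consumed
    direction: str             # "inbound", "outbound" or "bidirectional"
    daily_requests: int        # frequency of communication
    avg_payload_kb: float      # amount of data passing between systems
    data_type: str             # e.g. "PII", "transactional", "telemetry"

@dataclass
class ApplicationProfile:
    """Discoverable facts plus the business context only people can supply."""
    name: str
    user_count: int
    supporting_systems: List[str]                   # servers, databases, middleware
    dependencies: List[DependencyRecord] = field(default_factory=list)
    # Non-discoverable, human-supplied context:
    business_criticality: str = "unknown"           # e.g. "revenue-critical"
    recovery_time_objective_hours: Optional[float] = None
    sla_penalty_per_hour: Optional[float] = None
    sunset_planned: bool = False
```

The point of the split is that the last four fields cannot come from packet captures or agents; they have to be entered and maintained by people who understand the business.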

Getting from here to there

So how do you ensure the visibility required to handle your tactical and strategic objectives while minimizing risk to the reputation of the organization? Moreover, how do you do so with limited internal resources and expertise to take on the new challenges? 

If you are thinking, "use a discovery tool, they've been around for decades!", you will still miss large portions of important data (the kind that is not auto-discoverable), potentially at great cost to the business. What sort of things are non-discoverable? Think of the primary purpose of an application: the human consumers. We can of course discover endpoints that interface with multi-tier apps, but how do we easily map groups within an organization, identify the applications that are critical to a line of business, or map SLAs and their associated penalties?

These things must be documented by those who know what to look for. They must be accurately mapped and tracked by skilled resources who have partnered with businesses doing the same for years. Luckily, we have precisely these resources.

A problem

By way of example, a 160-year-old financial institution in the central U.S. just completed a study, assisted by WWT, to determine the optimal disposition of hundreds of applications and thousands of supporting systems. The objective was to exit the data center business in order to reduce running costs and focus on their core business.

The team knew that discovery tools, as good as they are, still fail to uncover things that cannot be programmatically discerned from network traffic. Recovery time objectives, the importance of applications and data to internal and external customers, and plans to sunset applications and hardware are invisible to even the best tools. In addition, people, not tools, are best equipped to handle conflicting records between data sources; extracting decision-making data from multiple sources of record takes several steps.
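To illustrate why people end up resolving these conflicts, here is a minimal sketch (with hypothetical field names and sources, not the customer's actual tooling) that merges attributes from several sources of record and flags disagreements for subject matter expert review instead of silently picking a winner:

```python
from collections import defaultdict

def reconcile(records: list[dict]) -> tuple[dict, dict]:
    """Merge per-source records for one application.

    records: e.g. [{"source": "cmdb", "owner": "Payments", "rto_hours": 4}, ...]
    Returns (merged_values, conflicts); conflicts are left for SME review.
    """
    values = defaultdict(dict)              # attribute -> {source: value}
    for rec in records:
        source = rec["source"]
        for attr, value in rec.items():
            if attr != "source" and value is not None:
                values[attr][source] = value

    merged, conflicts = {}, {}
    for attr, by_source in values.items():
        if len(set(map(str, by_source.values()))) == 1:
            merged[attr] = next(iter(by_source.values()))
        else:
            conflicts[attr] = by_source     # e.g. CMDB says RTO 4h, runbook says 24h
    return merged, conflicts

merged, conflicts = reconcile([
    {"source": "cmdb",      "owner": "Payments", "rto_hours": 4},
    {"source": "runbook",   "owner": "Payments", "rto_hours": 24},
    {"source": "interview", "owner": "Payments", "sunset_planned": False},
])
print(conflicts)   # {'rto_hours': {'cmdb': 4, 'runbook': 24}}
```

The design choice is deliberate: the code only surfaces the disagreement; deciding which value is right stays with the people who know the application.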

The solution

WWT deployed experienced resources to take advantage of the customer's incumbent tools. An aggregation appliance was used to pull in multiple sources of record such as CMDB (configuration management database), telemetry (network traffic) data, run books and other databases used by the customer, as well as subject matter expert interviews to document institutional knowledge. 

This solution is quick to implement and has little to no impact on the environment; it can be thought of as a passive assessment that captures non-discoverable data about the IT landscape. Second, an active discovery tool was deployed to determine system utilization in order to identify the best cloud placement and, more importantly, to provide an accurate estimate of the level of effort and expected cost of moving to the most appropriate cloud.
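As a simplified illustration of that second step, the sketch below (with a made-up instance catalog and prices, not any cloud provider's actual price list) maps observed peak utilization to a target instance size and a rough monthly run cost:

```python
# Hypothetical instance catalog: (name, vCPUs, RAM in GB, USD per month)
CATALOG = [
    ("small",  2,  8,  70.0),
    ("medium", 4, 16, 140.0),
    ("large",  8, 32, 280.0),
]

def right_size(peak_cpu_cores: float, peak_ram_gb: float, headroom: float = 1.2):
    """Pick the smallest catalog entry that covers observed peak utilization
    plus a headroom factor. Returns (instance_name, monthly_cost) or None."""
    need_cpu = peak_cpu_cores * headroom
    need_ram = peak_ram_gb * headroom
    for name, vcpus, ram_gb, cost in CATALOG:
        if vcpus >= need_cpu and ram_gb >= need_ram:
            return name, cost
    return None  # larger than anything in the catalog; needs manual review

# Example: active discovery observed a server peaking at 3 cores / 10 GB RAM.
print(right_size(3, 10))   # ('medium', 140.0)
```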

The result

In addition to updated metadata that reconciles all sources of record, the customer now has a cost comparison between running their current environment in a colocation facility and running it in one or more clouds, along with clearly identified areas of work required to get there. Bottom line: they are in a much better position to make the right decisions for the business within the required timelines and to accomplish their objectives.

Better still, WWT can now effectively and authoritatively partner with them moving forward to assist in their cutover efforts. So, while how we deliver apps to internal and external customers may change, our focus should remain the same: staying vigilant in ensuring the overall experience remains positive.

Learn more about our application expertise.