Why Lift-and-Shift is a Bad Strategy for Some Companies
One of the biggest misunderstandings I see among customers migrating workloads to the public cloud is the idea that optimizing those workloads once they land will be quick or easy. The prospect of getting workloads moved and letting users start learning and taking advantage of everything the public cloud has to offer is almost too good to resist.
While there are certainly scenarios that dictate an expedient move to the cloud, like an impending licensing renewal or a looming hardware refresh, it often pays greater dividends to take a more methodical approach.
Most companies start by first moving "low-impact" workloads to the cloud, like dev/test, backup or business-continuity capabilities. The thinking is that these workloads expose the company to relatively little risk and are a great way to build momentum and experience.
While I certainly can't disagree with that logic, it often comes with some additional unintended consequences. Two of the most common (and very much interrelated) results are unpredictable costs and sprawl.
In many lift-and-shift scenarios, companies use their current on-premises instances as a gauge to determine the size and type of resources they will need in their new cloud environment. This is a somewhat flawed approach, as most on-premises environments are architected to account for "peak seasonality" of an application and to accommodate forecasted future growth.
This method of sizing results in unnecessary costs and discounts one of the greatest advantages of cloud computing, the ability to scale up or out at a moment's notice when demand increases.
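To make the sizing argument concrete, here is a minimal sketch comparing the two approaches. All numbers are invented for illustration: a hypothetical hourly instance rate and a toy 24-hour demand curve.

```python
# Hypothetical illustration: the cost of provisioning for peak all day
# (on-premises-style sizing) vs. paying only for what each hour needs.
# The rate and demand curve below are invented for the sketch.

HOURLY_RATE_PER_INSTANCE = 0.10  # assumed price of one instance-hour

# Toy 24-hour demand curve: instances actually needed each hour.
demand = [2, 2, 2, 2, 3, 4, 6, 9, 12, 12, 11, 10,
          10, 11, 12, 12, 10, 8, 6, 4, 3, 2, 2, 2]

# Lift-and-shift sizing: run the peak count around the clock.
peak_sized_cost = max(demand) * len(demand) * HOURLY_RATE_PER_INSTANCE

# Cloud-native sizing: scale to match each hour's demand.
autoscaled_cost = sum(demand) * HOURLY_RATE_PER_INSTANCE

print(f"Peak-sized: ${peak_sized_cost:.2f}/day")
print(f"Autoscaled: ${autoscaled_cost:.2f}/day")
print(f"Savings:    {100 * (1 - autoscaled_cost / peak_sized_cost):.0f}%")
```

Even with this toy curve, sizing for peak costs nearly twice as much per day, and real seasonal peaks are usually far spikier than this.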
The second most-often realized unintended consequence, cloud sprawl, occurs when proper governance and controls are not applied to the new platform. Once organizations unleash the power and availability of the cloud, users will take liberties. Suddenly environments are far larger and costlier than anticipated.
The old adage, "build it and they will come" doesn't always come to fruition, but when it comes to the cloud, rest assured users will line up.
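One of the simplest guardrails against sprawl is enforcing a tagging policy so every resource has an accountable owner and cost center. The sketch below is illustrative: the resource records and required tag keys are hypothetical, not any particular cloud provider's API.

```python
# Minimal governance guardrail sketch: flag resources missing required
# tags. Resource records and tag keys here are hypothetical examples.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def untagged(resources):
    """Return the resources missing any required tag key."""
    return [r for r in resources
            if not REQUIRED_TAGS <= set(r.get("tags", {}))]

inventory = [
    {"id": "vm-001", "tags": {"owner": "app-team", "cost-center": "4411",
                              "environment": "dev"}},
    {"id": "vm-002", "tags": {"owner": "unknown"}},  # sprawl candidate
]

for r in untagged(inventory):
    print(f"{r['id']} is missing required tags")
```

Run on a schedule against the real inventory, a check like this turns "who owns this instance?" from an archaeology project into a report.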
Another common trait often present in lift-and-shift scenarios is the perpetuation of the typical IT silos that exist in most on-premises environments. Companies move workloads out to the cloud in an "as-is" scenario and let existing teams, still intact, operate the new environment. You will see scenarios where the individual teams (networking, security, app dev, ops) continue in a vacuum and fail to see the opportunity to greatly improve operations.
Cross-pollination and understanding how each piece mentioned above influences and informs the other is critical to an optimized cloud environment. When managing on-premises, the network team can operate in a silo, as they know all the data and applications live within their four walls. From there, it is essentially just determining if a "site-to-site" or "hub and spoke" model is most appropriate.
Security teams need to understand what constitutes the perimeter and then develop an appropriate posture. The operations team looks at the above pieces as either frameworks or constraints and develops an operating model accordingly; the application teams either live within their given confines or find "creative workarounds."
In the cloud, everything becomes more complex. Suddenly network teams need to collaborate with the application and data teams, as they need to understand where things will live, who the users are and where those users work.
Security can no longer just look at the perimeter because where is the perimeter when cloud comes into play? Security teams now need to develop plans to secure endpoints, strengthen identity and access measures and potentially consider driving toward a Zero Trust model.
Lastly, the flexibility of cloud and the multitude of options from multiple cloud service providers and ISVs present newfound freedoms to develop operational frameworks focused on how the company should operate as opposed to how it "has" to operate. Done correctly, you can now give application teams the freedom, within reason, to operate as they see fit and move as quickly as they can sustain.
This often leads organizations to "rethink" the way they organize traditional IT teams. Now you can potentially consider moving to a model that looks more like solution teams (sometimes referred to as centers of excellence) that have all the necessary components to build, deploy and run their applications. Achieving a true "own what you create" operating model can be a huge benefit to an organization by removing overhead, reducing R&D cycles and improving time to revenue.
Without some of the traditional constraints of private data centers, processes should be focused on enabling agility and allowing teams to focus on customer needs and innovations. So how does a company "take the reins off" while still ensuring compliance, security and control?
This is where automation plays such an important role in multicloud architecture: fostering a culture of continuous delivery and innovation, while driving towards centralized repositories and shared communities. By developing microservices that are loosely coupled and easily reusable, you can both improve agility and establish an acceptable level of control. Furthermore, you can then start to develop business logic to determine fit for purpose across platforms, automate some of the decision engines and enable automated workflows to handle project intake.
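A "decision engine" for fit-for-purpose placement can start as nothing more than a few explicit business rules. The sketch below is a simplified illustration: the rule set, attribute names and platform labels are all assumptions, not a definitive methodology.

```python
# Toy "fit for purpose" decision engine: simple business rules map a
# workload's attributes to a target platform. Rules, attribute names
# and platform labels are illustrative assumptions only.

def place_workload(w):
    """Return a target platform for a workload described as a dict."""
    if w.get("data_residency") == "on-prem-only":
        return "private-datacenter"        # compliance outweighs elasticity
    if w.get("stateless") and w.get("bursty"):
        return "public-cloud-containers"   # scale out with demand
    if w.get("licensed_per_core"):
        return "dedicated-cloud-hosts"     # contain per-core licensing costs
    return "public-cloud-vms"              # sensible default

print(place_workload({"stateless": True, "bursty": True}))
```

Codifying the rules this way makes them reviewable and reusable; project intake can call the same function instead of relitigating placement in every meeting.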
The analogy I often use with customers is that of a football field or a soccer pitch. It is critical to establish the sidelines and the rules of the game, but why not let your teams run the plays they choose, as long as they stay within the playing field? You can certainly play referee and make sure everyone is adhering to the rules of the game — that is the role of monitoring. Applying too much constraint or control on the cloud takes away one of the biggest and maybe most important aspects of multicloud, and that is flexibility.
Once you have established your "sidelines," you can now think about further optimizing your business by establishing decentralized fixed budgets, executing near real-time impact analysis and managing via KPIs/OKRs. The need for large capital campaigns, laborious business cases and less-than-perfect forecasting is significantly reduced.
Now that we've established the significant advantages and challenges of an optimized multicloud environment, how does a company get there if not lift and shift? It starts by gathering an understanding of your existing environment. You will need to identify several critical factors as part of this process including:
- Business intent and the resulting impact of an application;
- Current platform and composition;
- Unique boundaries of a given application;
- Dependencies across assets contained within that application;
- Shared dependencies across multiple applications;
- The capacity demands on each asset;
- Versioning detail;
- Licensing detail;
- Application users and their access needs/locations; and
- Acceptable latency thresholds.
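One way to make the factors above actionable is to capture them as a structured record per application, so they can be queried and grouped later. The field names below are illustrative, mapping one-to-one to the list above rather than to any specific discovery tool's schema.

```python
# Illustrative schema for the discovery factors listed above. Field
# names and the example values are assumptions for the sketch.
from dataclasses import dataclass, field

@dataclass
class AppProfile:
    name: str
    business_impact: str                                 # business intent/impact
    platform: str                                        # current platform/composition
    dependencies: list = field(default_factory=list)     # assets within the app
    shared_dependencies: list = field(default_factory=list)  # cross-app assets
    peak_capacity: dict = field(default_factory=dict)    # per-asset demand
    versions: dict = field(default_factory=dict)         # versioning detail
    licenses: list = field(default_factory=list)         # licensing detail
    user_locations: list = field(default_factory=list)   # users and where they work
    max_latency_ms: int = 100                            # acceptable latency threshold

crm = AppProfile(name="crm", business_impact="revenue-critical",
                 platform="vmware", max_latency_ms=50)
print(crm.name, crm.max_latency_ms)
```

However the data is gathered, landing it in one consistent shape is what makes the later grouping exercise tractable.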
There are several ways a company might gather this information. Some organizations may already have tools in their environment capturing much of the telemetry above. Perhaps their CMDB is highly accurate and most of the original application developers are still with the organization and accessible.
Others may need to bring in a partner and/or automated discovery tool to help gather the needed detail. Regardless, it is also critical to spend some time understanding the business impact and implications of these applications beyond what a tool could possibly discover. This is accomplished by "hands-on" interviews with application owners and line of business constituents.
Once all the above detail is accumulated and assembled in a logical order, it is time to determine what changes should occur as part of your migration. This will allow you to logically group applications into organized move groups based on their next steps.
While some may still fall into a re-host or "lift/shift" group, I suspect the majority will move to other, higher-value groups like re-platform, refactor, retire/repurchase, etc. This process will also help you determine which workloads are not good candidates for the cloud and should remain on-premises or move to a co-location or "near cloud" scenario.
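The grouping exercise can also be sketched in code: a few ordered rules that sort each discovered application into a move group. The rules and attribute names below are simplified examples of the kind of logic a real assessment would encode, not a definitive methodology.

```python
# Hedged sketch: sort discovered applications into move groups
# (re-host, re-platform, refactor, retire, repurchase, retain).
# The decision rules and attribute names are illustrative only.

def move_group(app):
    """Classify one application dict into a move group."""
    if app.get("end_of_life"):
        return "retire"
    if app.get("saas_equivalent"):
        return "repurchase"
    if app.get("latency_sensitive_to_onprem_data"):
        return "retain"          # stay on-premises or "near cloud"
    if app.get("actively_developed"):
        return "refactor"
    if app.get("os_outdated"):
        return "replatform"
    return "rehost"              # plain lift-and-shift as the fallback

apps = [
    {"name": "legacy-fax", "end_of_life": True},
    {"name": "hr-portal", "saas_equivalent": True},
    {"name": "order-api", "actively_developed": True},
    {"name": "file-share"},
]
groups = {a["name"]: move_group(a) for a in apps}
print(groups)
```

Note that "rehost" is the fallback here, not the goal: lift-and-shift becomes the group of last resort once higher-value dispositions have been ruled out.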
In summary, while lift-and-shift is sometimes the right answer based on certain circumstances, you could also be exposing yourself to additional and sometimes worse headaches. You might just be moving your problems, limitations and bad behaviors to a new environment. To mitigate this risk, it pays dividends to spend a bit of time considering a holistic approach around how your people, process and tools can and should evolve as you modernize.
Establishing an optimized cloud operating process from day zero can save a company significant time, money and pain. If you would like to learn more about how to create a multicloud strategy please consider scheduling a briefing, or if you would like help with developing a strategy around your people, process and/or platforms, consider one of our focused workshops.