3 Ways to Operate Public Cloud Segmentation
If your organization is like most, you're planning the migration or expansion of application environments to the public cloud. Hopefully, you're also planning a concurrent security architecture strategy for each migration.
Enterprise segmentation is critical for any security strategy and is a must on the list of requirements necessary for cloud migration. If segmentation isn't on your radar, it should be. Very few customers want to become an easy target for potential threats. Ignoring segmentation is one of the quickest ways to do so.
While it's one thing to initially segment an app environment, it's another entirely to operate a segmented environment. This article is intended to clarify the segmentation methods available to you as you assess the appropriate security architecture for your cloud initiative.
We'll cover three common approaches to setting up segmented public cloud application environments — manual, host-based and workflow-automation models — and touch on the implications for the day-to-day management of each.
1. Set to manual
If the environment is simple enough, you might be able to use built-in or integrated controls to manage segmentation in a somewhat manual manner. Examples include security groups in AWS, network security groups in Microsoft Azure and Compute Engine firewall rules in Google Cloud.
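Conceptually, these built-in controls amount to a default-deny rule table that you maintain by hand. Here's a minimal Python sketch of that model — the group names, CIDRs and rule fields are hypothetical, not any provider's actual API:

```python
# Hypothetical manual rule table in the style of cloud security groups.
# Traffic is denied unless an explicit rule matches (default-deny).
RULES = [
    {"group": "web", "allow_from": "0.0.0.0/0",   "port": 443},
    {"group": "db",  "allow_from": "10.0.1.0/24", "port": 5432},
]

def is_allowed(group: str, source_cidr: str, port: int) -> bool:
    """Return True only if an explicit rule permits the flow."""
    return any(
        r["group"] == group and r["allow_from"] == source_cidr and r["port"] == port
        for r in RULES
    )

print(is_allowed("db", "10.0.1.0/24", 5432))  # True: explicit rule exists
print(is_allowed("db", "0.0.0.0/0", 5432))    # False: no matching rule
```

The catch, as discussed below, is that every new workload or flow means another hand-edited entry in a table like this.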
Alternatively, if you have an integrated third-party next-gen firewall (NGFW) solution, your mode of operation will likely entail channeling traffic through the firewall using an OEM UI to manage policy.
Manual approaches like these can be effective and simple, especially when combined with a proper labeling structure. But they often fall short in the long run when it comes to scalability. As environments grow over time and trend toward zero trust or whitelist security models, manually scaling your segmentation approach to match that growth becomes nearly impossible.
2. Host-based segmentation
Host-based segmentation, a rapidly emerging category of segmentation solutions, is proving to be one of the most effective ways to segment — not only in the public cloud, but in any environment. Illumio and Cisco Tetration are good examples of a host-based approach.
Host-based solutions are formidable because they typically rely on an agent within the workloads themselves and pair complete visibility with enforcement. Essentially, host-based solutions are independent platforms that provide a common, consolidated policy managed across multiple clouds and on-prem environments — a huge advantage.
These solutions, which feature great visualizations, are usually intuitive as well. Even if you only use a host-based approach for operation and visibility, you'll realize significant value from this segmentation model of operation.
3. Workflow automation
Finally, the most effective approach for setting up segmented public cloud app environments (though sadly, not the easiest) is to embed segmentation into your day-to-day workflows. This means automating segmentation into the entire lifecycle of your applications, including workload provisioning, decommissioning, connectivity requests and whatever else you might need.
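To make the lifecycle idea concrete, here is a minimal Python sketch in which provisioning a workload assigns its labels and derives its segmentation intent in the same step, and decommissioning removes the policy along with the workload. All names, roles and rule shapes are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    labels: dict
    rules: list = field(default_factory=list)

def provision(name: str, env: str, role: str) -> Workload:
    """Provisioning assigns labels and derives segmentation rules from the role."""
    w = Workload(name, {"env": env, "role": role})
    if role == "web":
        w.rules.append({"allow": "any", "port": 443})
    if role == "db":
        # Only same-environment web tier may reach the database.
        w.rules.append({"allow": {"env": env, "role": "web"}, "port": 5432})
    return w

def decommission(inventory: dict, name: str) -> None:
    """Removing the workload removes its policy with it -- no orphaned rules."""
    inventory.pop(name, None)

inventory = {}
w = provision("db-01", "prod", "db")
inventory[w.name] = w
print(w.rules)
decommission(inventory, "db-01")
print(inventory)  # {}
```

The point is that segmentation is never a separate, after-the-fact task: the workflow that creates or destroys a workload also creates or destroys its policy.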
While a great deal goes into automating workflows, the two most important strategic elements to consider are a proper labeling strategy and proactive security training.
Labeling strategy
Having a labeling strategy is a must. Proper labeling will help with whichever segmentation approach you ultimately adopt, and it can make the process of workflow automation much simpler.
Labeling, or what is sometimes called "tagging," abstracts away traditional limiters such as static IP addresses. Once applied, you can much more easily automate policy such as "dev cannot talk to prod." In this example, some workloads are labeled "dev" and some are labeled "prod." If your segmentation rules support those labels as objects, then controlling workload traffic becomes quite flexible and intuitive.
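The "dev cannot talk to prod" example above can be sketched in a few lines of Python. The workload names, labels and rule format are purely illustrative — the point is that policy is evaluated over labels, not IP addresses:

```python
# Hypothetical workload inventory keyed by label rather than IP address.
WORKLOADS = {
    "app-01": {"env": "dev"},
    "app-02": {"env": "prod"},
    "db-01":  {"env": "prod"},
}

# Deny rules expressed as (source-selector, destination-selector) label pairs.
DENY = [({"env": "dev"}, {"env": "prod"})]

def matches(labels: dict, selector: dict) -> bool:
    """A selector matches when all of its label values are present."""
    return all(labels.get(k) == v for k, v in selector.items())

def can_talk(src: str, dst: str) -> bool:
    """Allow the flow unless some deny rule matches both endpoints."""
    src_l, dst_l = WORKLOADS[src], WORKLOADS[dst]
    return not any(matches(src_l, s) and matches(dst_l, d) for s, d in DENY)

print(can_talk("app-01", "db-01"))  # False: dev -> prod is denied
print(can_talk("app-02", "db-01"))  # True: prod -> prod is allowed
```

Notice that re-addressing a workload changes nothing here; only its labels matter, which is exactly what makes label-driven policy easy to automate.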
One example of a powerful labeling capability comes from Palo Alto Networks, whose dynamic address groups essentially consume the labeling structure from your cloud and assign policy based on that structure.
Proactive security training
To automate workflows, you'll also need a significant amount of proactive training with your security and risk teams. The only way to build segmentation policy into your workflows is to understand security nearly as well as your security and risk teams do. While this requires constant security communication, education and validation, it'll save you from a great deal of operational headaches later on.
Although the main goal of workflow automation is to operate less and automate more, keep in mind that it does not provide monitoring capabilities. You'll still need the ability to discover broken segmentation, detect anomalies and pre-test segmentation changes. And let's not forget human error. Workflow automation won't eliminate the need for those security mechanisms, but it can certainly reduce it.
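One common form of that monitoring is drift detection: comparing the rules you intended to deploy against the rules actually in place. A minimal Python sketch, with hypothetical rule tuples of (group, source CIDR, port):

```python
def find_drift(intended: set, actual: set) -> dict:
    """Diff intended vs. deployed segmentation rules."""
    return {
        "unexpected": actual - intended,  # rules someone added out of band
        "missing": intended - actual,     # rules that were lost or broken
    }

intended = {("web", "0.0.0.0/0", 443), ("db", "10.0.1.0/24", 5432)}
actual   = {("web", "0.0.0.0/0", 443), ("db", "0.0.0.0/0", 5432)}

drift = find_drift(intended, actual)
print(drift["unexpected"])  # the db rule was widened to 0.0.0.0/0
print(drift["missing"])     # the intended, narrower db rule is gone
```

Running a check like this on a schedule, or before and after each automated change, is one way to catch both human error and broken automation early.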
There you have it: three common approaches to operating segmentation in the public cloud.
In reality, most organizations end up using a hybrid of these three approaches. For example, you might automate 80 percent of your micro-level segmentation and leave the remaining 20 percent to manual mode, while using host-based mechanisms for maximum visibility. Or the balance could tilt more heavily toward host-based if your automation maturity is limited.
In any case, it's undeniable that segmentation architecture is needed for compliance, service assurance and data protection on any platform. And it's inevitable that you'll be forced to weigh and choose the proper balance for operating that architecture. For most of you, this decision-making process is likely already happening.
Hopefully this article has given you a feel for some of the more prominent methods and approaches you can take to operate segmentation in the public cloud.