This article was written by Chaim Mazel at Gigamon.

Over the last few years, organizations have been hearing about Zero Trust architecture (ZTA) as an emerging focus area. Fast-forward to today, and organizations are at the point where they must start implementing core ZTA building blocks. However, the various ZTA documents, guidance, and roadmaps are not one-size-fits-all, as each organization's mission, environment, staffing, and needs vary. That said, each organization should not view ZTA guidance solely from a compliance perspective but should also identify how it will operationalize ZTA in its environment.

It is key to start with core building blocks and ensure they will fit into your organization's operating model today and into the future as your ZTA matures. You'll need to lay a solid foundation that will enable adaptability, data normalization, and visibility, no matter the shape or composition of your environment.

  • Adaptability – Information technology environments will adapt and change as business, mission, and environmental requirements evolve. You'll need to have constant and consistent end-to-end visibility into your environment as compute evolves and shifts between on-premises physical and virtual compute resources and multiple cloud service providers. The dynamic nature of software-defined networks (SDN) also requires that the visibility fabric be easily adaptable.
  • Data normalization – Data normalization is a core component of building robust, accurate, and broad-based analytics across various data sources for on-premises networks, containers, and multiple cloud providers. Artificial intelligence/machine learning-based (AI/ML) detection is only as good as the data used to train the classifiers. Wide variation in data formats and sources will make detection classifiers unreliable and inconsistent across an organization's environment. It is crucial to standardize and normalize data sources (such as logs) across all components of the environment so AI/ML-based detection engines can be used to help drive policy-based decisions on user and system behaviors. The proprietary nature of cloud service providers (CSPs) will continue to make data normalization a challenge for organizations that want interoperability.
  • Visibility – End-to-end visibility is another core component of ZTA that should be consistent and unified across the enterprise. Here are critical areas where visibility is necessary:
    • Cloud – Most organizations use, or will use, multiple cloud providers, and each may offer its own native, unique, and mutable log generation tools. Being able to standardize network and application visibility across on-premises and cloud networks will allow unified monitoring.
    • Containers – The rapid adoption and flexibility of containers create gaps in visibility for security teams and gaps in an organization's ZTA. The ability to monitor and extract communication from containers will help prevent them from becoming a haven for cyber threat actors in your environment.
    • Hybrid – Mixed on-premises and cloud compute environments make it challenging to gain single-pane visibility that is standardized across various and disparate environments.
    • Endpoints – Visibility at the endpoint level offers a wealth of data and information but is potentially mutable if a device is compromised. It is good to cross-reference other data sources to better identify advanced persistent threats.
    • Uncovered endpoints – Endpoints that can't be covered by monitoring software, such as printers, IoT devices, appliances, and other operational technology (OT) devices, create blind spots unless a deep observability solution is in place.
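
To make the normalization idea concrete, here is a minimal sketch of mapping records from two different log formats onto one shared schema. The source names, field mappings, and the `normalize` helper are illustrative assumptions, not a Gigamon or CSP API; real pipelines would handle many more formats and edge cases.

```python
from datetime import datetime, timezone

# Hypothetical field mappings: native field name -> common schema name.
# In practice each cloud provider and on-prem tool has its own format.
FIELD_MAPS = {
    "cloud_flow_log": {"srcaddr": "src_ip", "dstaddr": "dst_ip", "start": "timestamp"},
    "on_prem_fw":     {"source":  "src_ip", "dest":    "dst_ip", "time":  "timestamp"},
}

def normalize(record: dict, source: str) -> dict:
    """Map a source-specific log record onto the shared schema."""
    mapping = FIELD_MAPS[source]
    out = {common: record[native] for native, common in mapping.items()}
    # Standardize timestamps to ISO 8601 UTC so records from every
    # source can be correlated by a single detection pipeline.
    out["timestamp"] = datetime.fromtimestamp(
        int(out["timestamp"]), tz=timezone.utc
    ).isoformat()
    out["source"] = source
    return out
```

Once every record shares the same field names and timestamp format, a single set of detection classifiers can be trained and applied across on-premises, container, and cloud telemetry instead of one per source.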

Gigamon's Deep Observability Platform offers capabilities critical to making your ZTA implementation successful. The Gigamon Deep Observability Platform aligns with NIST, DoD, and DHS CISA's Zero Trust architecture initiatives and will enable your team to achieve a faster and more effective implementation of Zero Trust principles.