Modern on-premises networks generally comprise a mix of hardware and virtualized platforms. These include servers, desktops, virtual machines, mobile devices and containers, accessed by both internal and remote users.
As companies continue to grow and workforces get more mobile, it’s becoming increasingly difficult to define where the network edge lies. Add public, private and hybrid cloud to the mix and an organization’s inventory list of network and service assets can quickly change, often in seconds.
Given the complexity of a modern enterprise architecture, it’s easy to see why companies have such a hard time answering a few simple questions:
- What’s on my network?
- What’s it doing?
- Should it be doing that?
The ability to answer these questions forms the basis of a solid cloud asset management strategy. Let’s explore how to get there.
What’s on my network?
This first question takes a broad view that encompasses people, data, software, hardware and traffic flow. It’s meant to capture what an organization owns and/or controls.
Answering this question also helps form the foundation of an overall management strategy. Since it’s difficult, if not impossible, to build a management strategy around the unknown, the first step toward knowing what’s on your network involves user and asset discovery.
In Amazon Web Services (AWS), for example, user and asset discovery is partially addressed through AWS Identity and Access Management (IAM) for defined users and roles with policies, and through AWS Config for cloud asset inventory. These information sources give us an overview of who is allowed access to what existing resources.
As cloud deployments become more complex, we recommend using multiple methods of inventory discovery because single tools can miss certain inventory types.
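To make the idea of cross-checking discovery sources concrete, here is a minimal sketch. The resource IDs and source names are hypothetical placeholders, not real AWS output; the point is simply that taking the union of several inventories exposes what each individual tool missed.

```python
# Sketch: reconcile inventories from two hypothetical discovery sources.
# Resource IDs and source names below are illustrative, not real AWS data.

def reconcile(inventories):
    """Return the union of all discovered resources plus, per source,
    the resources that source missed."""
    union = set().union(*inventories.values())
    gaps = {src: union - found for src, found in inventories.items()}
    return {"all": union, **gaps}

config_scan = {"i-0aaa", "i-0bbb", "vol-0ccc"}   # e.g. a config-based scan
agent_scan = {"i-0aaa", "i-0ddd"}                # e.g. an agent-based tool

result = reconcile({"config": config_scan, "agent": agent_scan})
print(result["config"])  # what the config scan missed → {'i-0ddd'}
```

Running multiple sources through a reconciliation step like this also surfaces blind spots in each tool, which is useful input when deciding where discovery coverage needs to improve.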
Ideally, asset management in the cloud can accommodate all types of possible resources, including hosted and ISV instances, while dynamically scanning for new inventory to keep the asset list as current as possible.
Using tags (or labels) in the cloud is also extremely helpful for resource tracking. Something as simple as tagging resources with “Environment” set to “Dev,” “Test” or “Prod” is a great way to quickly track down resources managed and billed to certain business units, or to review extraneous costs for certain sets of tagged services.
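The tag-based lookup described above can be sketched in a few lines. The inventory shape here is a hypothetical simplification (a list of dicts with a "tags" mapping), not the structure any particular cloud API returns:

```python
# Sketch: filter a resource inventory by tag. The inventory shape is a
# hypothetical simplification, not a real cloud provider API response.

def by_tag(resources, key, value):
    """Return resources whose tag `key` equals `value`."""
    return [r for r in resources if r.get("tags", {}).get(key) == value]

inventory = [
    {"id": "i-0aaa", "tags": {"Environment": "Prod", "Owner": "payments"}},
    {"id": "i-0bbb", "tags": {"Environment": "Dev"}},
    {"id": "vol-0ccc", "tags": {}},  # untagged resources are easy to spot too
]

prod = by_tag(inventory, "Environment", "Prod")
print([r["id"] for r in prod])  # → ['i-0aaa']
```

The same filter inverted (resources with no "Environment" tag at all) is a quick way to find assets that have slipped outside the tagging policy.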
What’s it doing?
Once assets and user profiles are discovered, a baseline compliance framework should be built outlining how users, services and instances are intended to interact with each other.
Answering this question generally requires tools that poll and track assets to understand current asset health, what an asset is talking to, what it’s running and who is using it. This is mainly an operational question about how well your assets are monitored through SNMP, agent-based process discovery, API monitoring, network flow information and log correlation.
In AWS, the baseline compliance framework can include CloudWatch (metrics and alarms), VPC Flow Logs (NetFlow-style traffic statistics), CloudTrail (records of API service access), CloudWatch Logs (agent-based OS logging) and AWS Config (resource configuration tracking against golden templates).
The main points of focus here include learning which services are running, what data is being accessed, who is accessing that data and whether assets are being patched properly. Active information gathering requires a fine balance between collecting detailed information about assets and limiting the impact on deployed resources.
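A simple way to picture the baseline comparison is a check of an asset's observed state against a golden template. The baseline values, process names and patch-level scheme below are hypothetical examples, not a real compliance standard:

```python
# Sketch: compare an asset's observed state against a golden baseline.
# The baseline contents and patch-level scheme are hypothetical examples.

BASELINE = {
    "allowed_processes": {"sshd", "nginx", "cloudwatch-agent"},
    "min_patch_level": 12,
}

def check_asset(observed_processes, patch_level):
    """Return a list of findings where the asset deviates from baseline."""
    findings = []
    unexpected = set(observed_processes) - BASELINE["allowed_processes"]
    if unexpected:
        findings.append(f"unexpected processes: {sorted(unexpected)}")
    if patch_level < BASELINE["min_patch_level"]:
        findings.append(
            f"patch level {patch_level} below required "
            f"{BASELINE['min_patch_level']}"
        )
    return findings

print(check_asset({"sshd", "nginx", "cryptominer"}, patch_level=9))
```

In practice the observed state would come from the monitoring sources listed above (agents, flow logs, API records) rather than being passed in by hand, and findings would feed an alerting or remediation pipeline.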
Should it be doing that?
This question is always the toughest to answer. It relies heavily on being able to fully answer the previous two questions — what’s on my network and what’s it doing — as well as having a comprehensive understanding of how systems and procedures shape your network, assets and daily user interactions.
A big assumption related to this question is knowing what normal looks like. We can partially answer this using the baseline framework we used to figure out what assets are doing on the network.
“Normal” is very context sensitive. For instance, normal asset use by a development team may look completely different from normal use by internal HR. Tools that use machine learning and other advanced techniques can offer automated baselining and alerting on anomalous behaviors. For example, Amazon Macie analyzes data access patterns in Amazon S3 and can alert when anomalous access requests are being made, a potential indicator of compromise (IoC).
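As a toy stand-in for the ML-based baselining such tools perform, a simple z-score over a series of daily access counts already illustrates the idea of "normal versus anomalous." The request counts below are invented for illustration:

```python
# Sketch: flag anomalous daily access counts with a simple z-score,
# a toy stand-in for the ML-based baselining the article mentions.
from statistics import mean, stdev

def anomalies(history, threshold=3.0):
    """Return indices of observations more than `threshold` standard
    deviations from the mean of the series."""
    mu, sigma = mean(history), stdev(history)
    return [i for i, x in enumerate(history)
            if sigma and abs(x - mu) / sigma > threshold]

# 13 ordinary days, then a burst of access requests on day 14 (index 13).
daily_requests = [101, 98, 103, 97, 99, 102, 100,
                  96, 104, 99, 101, 98, 100, 450]
print(anomalies(daily_requests))  # → [13]
```

Real baselining systems are far more sophisticated (per-user, per-resource, seasonal), but the principle is the same: learn what normal looks like, then alert on significant deviations.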
If we have a good handle on which assets exist in the cloud and what they’re doing day in and day out, we can make a more informed decision about whether they should be doing a given action. As cloud networks expand and become more complex, automated solutions are the preferred way to provide an always-on, data-driven approach to compliance and governance.
This question also ties in with corporate governance and compliance. Data and resources should only be accessed in specific ways by specific people or services depending on business requirements while adhering to industry compliance standards like PCI and HIPAA.
To maintain and monitor cloud networks, we first need to know what’s out there, build a solid baseline and then monitor against that baseline for anomalous behavior. There are many native and marketplace tools that handle automatic discovery, compliance and remediation for complex cloud initiatives.
WWT can demonstrate cloud asset management in our Advanced Technology Center (ATC) and offers a Cloud Security Workshop that explores asset management alongside many other topics. We can also deliver executive briefings on cloud security and asset management, including data governance and compliance. Reach out to your local WWT representative to learn more and to schedule an ATC demonstration.