In this ATC Insight

Summary

Several of our global financial clients have asked us to evaluate CrowdStrike against other EDR solutions on the market today. This insight outlines how WWT's ATC Lab Services team tests EDR solutions for clients based on the use cases that are important to each.

CrowdStrike was tested for three global financial clients in separate proofs of concept (POCs) in the Advanced Technology Center (ATC).

ATC Insight

Several of our global financial clients asked us to evaluate endpoint detection and response (EDR) solutions with them in the ATC. EDR is one of the most active technology spaces in security, and many competing products are on the market for clients to evaluate.

Why do clients rely on WWT, the ATC and our Lab Services team for their security EDR testing efforts?

One of the biggest reasons is that we invested in a Malware Lab to test solutions all the way back in 2018. Since then, our Malware Lab environment has matured and features a robust set of testing tools to help clients evaluate security solutions before they are implemented into production.

ATC Lab Services: Malware Lab

The Malware Lab is a controlled environment that offers secure access to our clients, partners and WWT engineering teams. We are able to evaluate EDR solutions from any vendor using an agent-based tooling approach that is completely vendor neutral. 

In the lab, we can evaluate how well endpoint devices manage and block malicious attacks as well as variants of scripted attacks. The technology behind this type of testing is called Breach and Attack Simulation (BAS).

A methodical approach based on phases

In our Malware Lab, our methodical approach to security testing is predicated on phases. We normally focus first on Scripted and CLI Variant testing, which lets us execute pre-made scripts. This type of testing allows us to understand how such scripts attempt to change endpoints, in the same way malware attempts to infect a machine or alter its parameters to enable exploits. A minimal sketch of this kind of scripted check appears below.
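
As an illustration only (not WWT's actual test scripts), the Python sketch below attempts a benign Windows Run-key persistence change, a behavior EDR policies commonly watch, and records whether the endpoint allowed or blocked it. The value name and executable path are hypothetical placeholders.

```python
# Minimal illustration only -- not WWT's production tooling.
# Attempts a benign registry "persistence" change that EDR policies
# commonly watch, then records whether the change was allowed.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
VALUE_NAME = "edr_poc_marker"   # hypothetical, harmless marker value

def attempt_run_key_write() -> str:
    """Try to add a Run-key value; report 'allowed' or 'blocked'."""
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_SZ,
                              r"C:\temp\benign_marker.exe")
            winreg.DeleteValue(key, VALUE_NAME)   # clean up immediately
        return "allowed"
    except OSError:
        # EDR tamper/behavior protection typically surfaces as an access error
        return "blocked"

if __name__ == "__main__":
    print(f"run-key persistence attempt: {attempt_run_key_write()}")
```

Real scripted tests vary the technique and the execution method; the point is that each script produces a recordable allowed/blocked outcome per endpoint.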

Mandiant's Protected Theater

We also use a product called Protected Theater from our partner Mandiant. This tool gives us the ability to safely perform potentially dangerous and destructive tests with real, live malware against our clients' endpoint defenses to determine which threats their endpoint controls will and will not block.

The scorecard

Once we start getting measurements and results back on how an EDR solution responds to our Scripted, CLI Variant and Destructive Live Malware tests, we record this information in a scorecard system. This allows us to measure, weight and visually depict the actions the solution took against malicious events; a sketch of the weighting math follows.
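
For illustration, here is a minimal sketch of the weighting behind such a scorecard. The category names, counts and weights are hypothetical, not WWT's actual scoring model.

```python
# Illustrative scorecard math only; categories and weights are
# hypothetical, not WWT's actual scoring model.
RESULTS = {
    # category: (blocked, total, weight)
    "scripted":     (27, 30, 0.30),
    "cli_variant":  (18, 20, 0.30),
    "live_malware": (45, 50, 0.40),
}

def weighted_score(results: dict) -> float:
    """Combine per-category block rates into one weighted percentage."""
    return 100 * sum(w * blocked / total
                     for blocked, total, w in results.values())

for name, (blocked, total, w) in RESULTS.items():
    print(f"{name:>12}: {blocked}/{total} blocked (weight {w})")
print(f"overall weighted score: {weighted_score(RESULTS):.1f}%")
```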

CrowdStrike Falcon EDR focus features

CrowdStrike has a number of standout features:

  • Ease of deployment
  • Support and recommendations for testing
  • Ability to meet security requirements
  • Endpoint policy configuration
  • Usability of Falcon cloud tenant threat detection
  • Data exportation from the solution
  • Overall options available in CrowdStrike's Falcon Endpoint Protection Platform

Each financial institution had different needs and wants for their EDR solution, and CrowdStrike was able to deliver with the features discussed. 

Similarities in client testing

There were a number of similarities between each client's needs and wants. Our ATC Lab Services team performed a POC for each client to assess the capabilities and performance of the CrowdStrike Falcon EDR solution. Again, the testing occurred in our Malware Lab, located in St. Louis.

We utilized the relevant parts of a seven-step framework, the Lockheed Martin Cyber Kill Chain, for testing the CrowdStrike EDR use cases. This framework focuses on the Advanced Persistent Threat (APT). The seven steps of the Cyber Kill Chain are as follows: (1) Reconnaissance, (2) Weaponization, (3) Delivery, (4) Exploitation, (5) Installation, (6) Command and Control, and (7) Actions on Objectives.

  • Reconnaissance examples include harvesting information like email addresses and conference information.
  • Weaponization examples include coupling an exploit with a backdoor into a deliverable payload.
  • Delivery consists of delivering the weaponized bundle to the victim.
  • Exploitation consists of using a vulnerability to execute code on a victim's system.
  • Installation is when the malware is installed on the victim's machine or asset.
  • Command and control (C2) is the stage at which the attacker establishes a command channel to remotely manipulate the victim.
  • Actions on objectives is when the attackers accomplish their original goal and have access to the network, assets, etc.

We used Mandiant's Protected Theater and Host Command Line Interface (Host CLI) for all testing. Protected Theater was used for the Windows systems under test; Host CLI was used for the Linux and macOS systems under test.

Differences in client testing

There were a number of differences between each financial institution's desires for testing, all defined by the work detailed in the phases below. As the breakouts show, client needs for lab testing can vary considerably.

The ATC Lab Services team is able to customize testing in the Malware Lab to fit each client's needs. APT groups, as classified by FireEye, are threat actors that receive direction and support from an established nation-state. FireEye also notes that APT groups try to steal data, disrupt operations or destroy infrastructure, adapt to cyber defenses and frequently re-target the same victim. Both Clients 1 and 3 utilized APT malware signatures in their respective testing.

Below is a breakout of how each client test lab was adapted to specific use cases. Expanded detail on each phase of testing can be found within the Test Plan/Test Case section later in this report.

Client 1: Three phases of testing 

Client 1 was looking to replace their current EDR solution:

  1. Client 1, Phase 1: WWT subjected the CrowdStrike security solution to a battery of seven tests to assess blocking efficacy.
  2. Client 1, Phase 2: WWT subjected the CrowdStrike security solution to a battery of three tests, each based on a scripted attack developed and documented by Client 1's personnel. 
  3. Client 1, Phase 3: Separate testing in the Malware Lab performed by Client 1.

Client 2: Two phases of testing

Client 2 was unique compared to the other customers because they performed antivirus (AV) testing on top of the EDR testing.

  1. Client 2, Phase 1: Block and detection efficacy using an endpoint security validation tool (Mandiant's Protected Theater).
  2. Client 2, Phase 2: Client 2 chose to utilize a self-evaluation with their own tailored tools and techniques.

Client 3: Three phases of testing

Like Client 1, Client 3 focused exclusively on testing EDR solutions:

  1. Client 3, Phase 1: Automated Efficacy Testing (using Cyber Kill Chain taxonomy).
  2. Client 3, Phase 2: Automated Efficacy Testing (using APT-focused taxonomy).
  3. Client 3, Phase 3: Client-led Manual Testing within the Malware Lab in the ATC.

Final impressions and summary

CrowdStrike performed very well in each of the tests administered. Here is a brief overview: 

Support for platforms

Testing found that CrowdStrike Falcon EDR supported the wide variety of system platforms required for testing: Windows, macOS and Linux, with the added benefit of supporting older Linux distributions.

Ease of use

CrowdStrike Falcon EDR exhibited no compatibility issues in any WWT lab test implementation, which made deployment and configuration easy.

Endpoint policy configuration

A common cloud tenant for EDR capabilities and endpoint policy configuration contributed to the observable ease of operationalizing the tool in our lab. Falcon EDR's performance was due to multiple features, detailed below.

Features

  • Ease of deployment, which is important when implementing Falcon EDR into client production environments.
  • CrowdStrike Falcon EDR made it simple for each client to meet their security policy requirements.
  • The EDR cloud tenant UI is easy to navigate and understand.
  • Throughout testing, CrowdStrike provided support as required, though we encountered no technical errors.
  • Policy configuration setup was based on Falcon EDR best practices, and little to no adjustment was needed. However, in certain cases, WWT testers believed the security policy was overly permissive.
  • The Endpoint Protection Platform prevented most of the attacks it faced.
  • Falcon EDR provided a method for exporting and collecting cloud tenant threat detection and alert events for assessment. This data was collected within the Falcon cloud tenant (a parsing sketch follows this list).
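
As a hypothetical example of how exported detection data can be assessed, the sketch below tallies events by severity from a JSON export. The file layout and field names are assumptions; actual Falcon export schemas depend on the tenant configuration and export method.

```python
# Hypothetical post-processing of an exported detection-event file.
# The "severity" field and file name are illustrative assumptions.
import json
from collections import Counter

def summarize_detections(path: str) -> Counter:
    """Tally exported detection events by severity for the scorecard."""
    with open(path) as fh:
        events = json.load(fh)
    return Counter(event.get("severity", "unknown") for event in events)

if __name__ == "__main__":
    for severity, count in summarize_detections("detections_export.json").items():
        print(f"{severity}: {count}")
```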

Overall, CrowdStrike Falcon is an excellent EDR solution. WWT looks forward to building and developing the relationship in the future. 

Test plan

Each of the financial institutions testing CrowdStrike EDR tools had different methods of testing. 

Client 1 overview

WWT performed two main phases of testing, with a third performed by Client 1.

Client 1, Phase 1

We subjected each security solution to a battery of seven tests to assess blocking efficacy:

  1. Signature-based malware: Daily list of identified malware samples dropped on disk, without execution (Win/Lin/Mac, Approx. 3000 total samples). A safe stand-in for this drop-on-disk check is sketched after this list.
  2. Behavior-based ransomware: Collection of pre-compiled ransomware behavior-based simulations executed using different methods. (Win only, Approx. 560 scenarios).
  3. Behavior-based trojans: Collection of pre-compiled trojan behavior-based simulations executed using different methods (Win only, Approx. 200 scenarios).
  4. Rootkits: Collection of pre-compiled rootkit execution behavior-based simulations (Win only, Approx. 75 scenarios).
  5. Behavior-based worms: Collection of pre-compiled worm behavior-based simulations executed using different methods (Win only, Approx. 40 scenarios).
  6. DLL side loading: Collection of pre-compiled DLL side loading execution behavior-based simulations (Win only, Approx. 20 scenarios).
  7. Client-selected APT attacks: Collection of up to 30 APT groups and malware templates, as selected by Client 1 from a list of available options (Win only, 30 scenarios).
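
As a safe stand-in for the drop-on-disk test in item 1, the sketch below writes the industry-standard EICAR test string to disk and checks whether the agent removes it. It illustrates the mechanics only; the real battery used the identified malware samples described above, and the file name and wait time here are arbitrary choices.

```python
# Safe illustration of a "dropped on disk, without execution" check,
# using the EICAR test string rather than real malware.
import os
import time

EICAR = (r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
         r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

def drop_and_check(path: str = "eicar_drop_test.com") -> str:
    """Write the EICAR file, then see whether the agent removed it."""
    try:
        with open(path, "w") as fh:
            fh.write(EICAR)
    except OSError:
        return "blocked on write"      # on-write protection intervened
    time.sleep(5)                      # give on-access scanning a moment
    return "removed/quarantined" if not os.path.exists(path) else "still on disk"

if __name__ == "__main__":
    print(f"EICAR drop result: {drop_and_check()}")
```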

Each security solution experienced the seven tests in two different configurations:

  1. "Offline" Mode (Scenario 1): Endpoints have no outbound Internet access, except for a set of whitelisted destinations that allow for connectivity to WWT cloud-based resources used for conducting the assessment.
  2. Online Mode (Scenario 2): Endpoints have full Internet connectivity.
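
One simple way to validate the offline scenario's egress constraints is to probe a few destinations and compare the outcome against the allowlist. The hostnames below are placeholders, not the actual allowlisted endpoints.

```python
# Connectivity probe for the "offline" scenario; hostnames are placeholders.
import socket

ALLOWED = ["assessment.wwt.example"]              # hypothetical allowlisted host
BLOCKED_SAMPLES = ["www.google.com", "github.com"]

def can_connect(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ALLOWED + BLOCKED_SAMPLES:
    expected = "reachable" if host in ALLOWED else "unreachable"
    actual = "reachable" if can_connect(host) else "unreachable"
    print(f"{host}: expected {expected}, got {actual}")
```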

Client 1, Phase 2

WWT subjected each security solution to a battery of three tests, each based on a scripted attack developed and documented by Client 1's personnel. The tests were limited by the following constraints:

  • Client 1 provided full step-by-step documentation for staging and launching each attack.
  • Client 1 defined scoring criteria for each test and stipulated artifacts to collect.
  • Tests leveraged existing Malware Lab infrastructure.
  • Tests were conducted on one OS platform per test.

Client 1, Phase 3

WWT gave Client 1 remote access to the lab environment for a period of four weeks following completion of Phase 1 and Phase 2. Client 1 also had access to the vCenter infrastructure to allow for the taking and restoring of snapshots (one way to script that restore step is sketched below).

This phase was set aside to allow Client 1 to perform their own battery of tests and review the functionality of each security solution. WWT ATC personnel were available to provide basic lab support.
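
As one possible way to script the snapshot restore step (the lab's actual workflow may differ), this sketch uses the open-source pyVmomi library to revert a named VM to its current snapshot. The vCenter host, credentials and VM name are placeholders.

```python
# Sketch of a scripted snapshot restore with pyVmomi; host, credentials
# and VM name are placeholders, and error handling is kept minimal.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def revert_vm(si, name: str) -> None:
    """Find a VM by name and revert it to its current snapshot."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        for vm in view.view:
            if vm.name == name:
                WaitForTask(vm.RevertToCurrentSnapshot_Task())
                return
    finally:
        view.Destroy()

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()   # lab-only; not for production
    si = SmartConnect(host="vcenter.lab.example", user="tester",
                      pwd="***", sslContext=ctx)
    try:
        revert_vm(si, "win10-edr-01")
    finally:
        Disconnect(si)
```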

_________________________

Client 2 overview

WWT performed two testing phases, with Phase 1 performed by WWT and Phase 2 by the client.

Client 2, Phase 1

This phase assessed block and detection efficacy using Mandiant's Protected Theater. 

All virtual machines in the OEM enclaves traversed a Bluecoat explicit proxy appliance for HTTP/HTTPS connectivity. The WWT team leveraged a combination of VM templates and snapshots to keep all evaluated images consistent and resilient through simulated attacks. 

*WWT also leveraged a security validation tool to take attacker actions on endpoints under test and record the observed results. A portion of Phase 1 test cases also included an element labeled "Protected Theater by Mandiant." These were a product of the security validation tool leveraged in the evaluation and signify potentially destructive attacker actions taken on endpoints under test.

Client 2, Phase 2

The client chose to utilize a self-evaluation with tailored tools and techniques. Their evaluation included 14 OS platforms:

  1. Windows 7  (Build 7601)
  2. Windows 10 (Build 1703)
  3. Windows 10 (Build 1709)
  4. Windows 10 (Build 1809)
  5. Windows Server 2008 R2 SP1 (Build 7601)
  6. Windows Server 2012 R2 (Build 9600)
  7. Windows Server 2016 (Build 1607)
  8. Windows Server 2019 (Build 1809)
  9. macOS Big Sur
  10. macOS Catalina
  11. Red Hat Enterprise Linux Server release 6.10 (Santiago)
  12. Red Hat Enterprise Linux Server release 7.9 (Maipo)
  13. Oracle Linux Server release 6.10
  14. CentOS Linux release 7.5.1804 (Core)

Testing taxonomy categories for Client 2

Testing categories from Cyber Kill Chain included:

  • T1: Reconnaissance
  • T2: Delivery
  • T3: Exploitation
  • T4: Execution

_________________________

Client 3 overview

WWT performed two phases of testing, with a third performed by Client 3.

Client 3, Phase 1

Tested automated efficacy using the Cyber Kill Chain taxonomy and behavioral actions.

Client 3, Phase 2

Tested automated efficacy using an APT-focused taxonomy via Protected Theater by Mandiant.

Client 3, Phase 3

This phase entailed client-led manual testing, which was conducted after completion of Phases 1 and 2.

The operating systems assessed for Client 3 included:

  • Windows 10 (Build 1809)
  • Red Hat Enterprise Linux (RHEL 7.9)

Testing taxonomy categories for Client 3

Testing categories from Cyber Kill Chain included:

  • T1: Reconnaissance
  • T1-A: WWT ATC Reconnaissance (Windows): 20 tests
  • T1-B: WWT ATC Reconnaissance (Linux): 2 tests
  • T2: Delivery
  • T2-A: WWT ATC Delivery (Windows): 4 tests
  • T3: Exploitation
  • T3-A: WWT ATC Exploitation (Windows): 8 tests
  • T4: Execution
  • T4-A: WWT ATC Execution (Windows): 316 tests
  • T4-B: WWT ATC Execution (Linux): 7 tests
  • T5: Command and Control
  • T5-A: WWT ATC Command and Control (Windows): 6 tests
  • T5-B: WWT ATC Command and Control (Linux): 1 test
  • T6: Action on Target
  • T6-A: WWT ATC Action on Target (Windows): 235 tests
  • T6-B: WWT ATC Action on Target (Linux): 36 tests
  • T7: APT Specific
  • T7-41: APT41 (China)

FireEye describes APT41 as a prolific cyber threat group that carries out Chinese state-sponsored espionage activity in addition to financially motivated activity potentially outside of state control. 

Deliverables 

The logs and results of all three phases of testing were given to each client via Excel spreadsheets, documentation, executive summaries, raw results exported from the solution, and Statements of Work (a minimal sketch of the Excel roll-up step follows). Documented results were also used internally by WWT for interpretation and scoring (see the sample scorecard below). This process was conducted for each financial institution.
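
As a minimal sketch of that Excel roll-up step (the column, sheet and file names are illustrative, not the actual deliverable format):

```python
# Hypothetical deliverable step: roll raw per-test results into the Excel
# workbook handed to the client. Column and file names are illustrative.
import pandas as pd

raw = pd.read_csv("raw_results.csv")          # e.g., test_id, phase, os, outcome
summary = (raw.groupby(["phase", "outcome"])
              .size()
              .unstack(fill_value=0))

with pd.ExcelWriter("client_deliverable.xlsx") as writer:
    raw.to_excel(writer, sheet_name="Raw Results", index=False)
    summary.to_excel(writer, sheet_name="Summary")
```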

Scorecard example with sample data

Technologies

CrowdStrike Falcon is considered to be one of the top EDR solutions on the market today. All three financial institutions examined CrowdStrike Falcon among other solutions.

  • Client 1: EDR Solution
  • Client 2: EDR Solution and AV Endpoint
  • Client 3: EDR Solution
