
Overview 

I've dedicated a lot of page space here on WWT's platform to discussing how WWT performs object storage performance testing, covering our processes, our methodology and how to interpret results; for example, here.

Now we are going to focus on the feature and functionality side of the equation: how WWT performs object storage feature and functionality testing, and how we use our customized Kepner-Tregoe analysis methodology to help our customers make sound, unbiased decisions about which object storage products will work best for them. This is exclusively about testing "private cloud" object storage (object storage in your own data center space) and is not intended as guidance for selecting public storage service providers.

Step one: What's the "lay of the land"?

Generally speaking, our first meeting with a customer is used to discuss their environment: what solutions they have, what is and isn't working well, and what functionality they are trying to achieve. In short, we seek to uncover what problems they are trying to solve within their existing storage environment.

It's not uncommon for us to also go over the current object storage landscape, provide WWT's viewpoint on which OEMs are leading the pack and why, and discuss any new products worth keeping an eye on. Once everyone is on the same page regarding issues, goals and potential solutions, the next step is to narrow the field.

Step two: Narrowing the field

Customers do not always engage us at the same point in their decision-making process when deciding which products to evaluate and ultimately buy. In some cases, they aren't even aware of the different choices and want to know more about which OEMs to consider. In other cases, they may have already done some academic research and know that there are several products that seem viable. At other times, they have already narrowed the field to two or three OEMs and are simply ready to jump right into testing. Some already have a good idea of the test plan they want to perform; others would like help with how and what to test. It all varies!

As the name of this section implies, if the customer hasn't already narrowed the choices down to two or three products, then that's our goal. We get there through a deep-dive discussion of each product's pros and cons. And when a customer comes to us with more than a couple of solutions in mind, we can jump right in with our Kepner-Tregoe analysis tool.

What's a Kepner-Tregoe?

It's actually a who and not a what. Based on the work of Charles Kepner and Benjamin Tregoe in the 1950s, Kepner-Tregoe is a method for reaching a conclusion by gathering, organizing and then evaluating key decision-making information. Both men were consultants who studied how managers and engineers solved problems, and together they came up with this repeatable method to eliminate bias when problem-solving.

WWT has created a highly customized object storage Kepner-Tregoe analysis spreadsheet, which, when properly filled out, will clarify which solution(s) will be best for your particular needs. 

The tool can be used in a variety of ways without any strict requirements. For instance, you may want to use it to help narrow product choices for initial consideration, or you might not start the analysis process until after you've narrowed things down to a select set of OEMs/products and are ready to perform proof of concept testing on features and functionality.

More on WWT's object storage Kepner-Tregoe analysis tool

Since there may be a desire to start engaging with the KT analysis tool early in the process, let's talk about what this tool looks like and how it's used. We will continue with the actual feature and functionality testing process a bit further down (feel free to jump ahead), but this analysis tool is something we recommend you use, if for no other reason than to create a "testing scorecard," so you may as well read about it now. Besides, the tool creates some great artifacts that you can use to justify your decisions, both on which products you wish to test and on which product is right for your organization.

Figure 1. below shows an example of a high-level portion of the analysis tool. We created this version to demonstrate how to use the tool for a fictional customer company called "ACME Coyote Supply," and we have pretended that WWT has already held an object storage workshop with them and that together we have completed this analysis tool.

Part of that process is to define ACS's (ACME Coyote Supply's) objectives, as well as home in on their "care abouts" (selection criteria) for the product they need to deploy.

Figure 1. High-level view of a portion of the Kepner-Tregoe analysis spreadsheet.

WWT has created a set of pre-defined requirements that cover the majority of possible selection criteria, based on our experiences across different business types, including very large commercial enterprises, service providers and the federal government. Given that, chances are that some of the criteria listed in the starting draft version of the tool won't interest you much, which is fine!

During a mini-workshop, we will go over the tool and explain how it works, then go through each of the selection criteria "end-points" to determine whether it's something your organization cares about. If not, we simply remove it. If it's something you are curious about but not really anything you want to use as an actual selection criterion, we can leave it in but assign it a "weight of 0," which means it won't be part of the various OEMs' scoring outcomes.

Figure 2. below shows the first sub-header category of "deployment model" and several potential selection criteria. 

Figure 2. Selection criteria listed for sub-header of deployment.

There are several other categories besides "deployment model" shown here. As it stands now, there are nine different categories:

  • Deployment model
  • Data protection and redundancy
  • Security
  • Performance
  • Scalability
  • Data management features and policy
  • OEM product support
  • API protocol support
  • Total cost of ownership

Each category has several selection criteria "end-points" that can be graded/scored for each OEM. "Deployment model" above is actually one of the smaller categories. "Security," for example, is one of the larger ones. See the "Security" selection criteria in Figure 3. below.

Figure 3. Security Selection criteria.

Some "security related" items may fall under multiple categories, so while they're not listed above, they may appear under other categories. For instance, you may want to call out immutable data, compliance capabilities or even versioning as something that could loosely be considered "security." Don't worry, they are covered elsewhere in the tool.

If you do run across a selection criterion that isn't in the tool, it's very easy to add it, just as it's easy to delete any selection criteria rows that are not applicable. As we go through these criteria during the workshop (there are roughly 100 different selection criteria), the customer assigns a "weight" for the importance of each. You can see an example of the customer-supplied weights in the green column in Figure 3. above. For instance, the first criterion listed under the security category is "AD/LDAP Integration." That's very important to this customer, and they gave it a weight of "10." The third row down is "dual factor authentication support for user access," which was considered a nice-to-have for future capabilities but not really important now, and so was given a weight of "3." Down toward the last row of this category is the criterion of being able to do secure DELETEs or wipes (often a federal requirement); while this isn't a requirement for ACME, they were interested in knowing the answers, so it was kept in the tool but given a weight of "0," which won't affect the scoring of any OEM that may or may not support that functionality. We'll show how those weights work next!

Kepner-Tregoe analysis tool: OEM scoring, raw and weighted outcomes (graphs)

Now that all the selection criteria have been defined and each has been given an appropriate weight reflecting how important that functionality is to the customer's business decision, we move on to the actual product scoring. As mentioned earlier, this could simply be part of the academic exercise, based on meetings with WWT's architects and/or deep-dive meetings with the OEMs, or it could be the output of actual OEM feature and functionality testing performed on the product within WWT's ATC (Advanced Technology Center). Either way, once the products are scored, the results look the same.

Figure 4. below shows an example of the analysis spreadsheet with the first two selection criteria categories and two (of the four) OEMs being evaluated, strategically named OEM1 and OEM2 (for demo purposes).

Figure 4. Kepner-Tregoe analysis scoring and weighting.
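To make the mechanics concrete, here is a minimal sketch of how a raw score and a customer weight combine, using the three security criteria discussed above. The scores and weights below are illustrative only, not values from the actual ACS analysis.

```python
# A minimal sketch of how the tool's weighting works, using the three security
# criteria discussed above. Scores and weights here are illustrative only.
criteria = [
    # (criterion, customer weight 0-10, OEM raw score 0-10)
    ("AD/LDAP integration",           10, 8),
    ("Dual factor authentication",     3, 6),
    ("Secure DELETE / wipe",           0, 9),  # weight 0: tracked, but excluded from the outcome
]

raw_total      = sum(score for _, _, score in criteria)
weighted_total = sum(weight * score for _, weight, score in criteria)

print(f"Raw total:      {raw_total}")        # 23 -- every criterion counts equally
print(f"Weighted total: {weighted_total}")   # 98 -- secure wipe contributes nothing
```

A weight of 0 keeps the answer visible in the worksheet without letting it influence the weighted totals, which is exactly how the secure DELETE/wipe row behaves for ACME.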

Once the four OEMs have each been scored (this can easily take several hours per OEM) and the scores entered into the tool, two tabs of the spreadsheet let you see visually how each OEM scored per category, as well as a total overall average score (see Figure 5.). The first tab of the workbook is the "Decision Analysis Worksheet," which is the one we have been working with so far to verify the criteria, the criteria weights and the OEM scores.

Figure 5. Tabs of the Analysis Tool

If we select the second tab, "Raw Scoring Graph," after completing all of the OEM scoring, we get a nice graphical view of how the four OEMs compared to each other on the raw (non-weighted) scoring of each of the categories and selection criteria. Two graphs are generated to make sense of the data. The first is a "snowflake" or "spider" graph that shows how each OEM compares to the others in the raw scoring (see Figure 6.).

Figure 6. Raw scoring "spider" graph showing how the four OEMs compared in pure raw scoring.
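If you ever want to reproduce this kind of view outside the spreadsheet, a spider (radar) graph is easy to generate from the per-category raw averages. Below is a minimal sketch using Python and matplotlib; the category names mirror the tool, but the scores are made up for illustration.

```python
# A minimal sketch of a spider/radar graph built from per-category raw averages.
# Scores are hypothetical and only illustrate the plotting technique.
import numpy as np
import matplotlib.pyplot as plt

categories = ["Deployment model", "Data protection", "Security", "Performance",
              "Scalability", "Data management", "OEM support", "API support", "TCO"]

raw_scores = {                      # hypothetical 0-10 raw averages for two of the four OEMs
    "OEM1": [7, 8, 9, 6, 8, 7, 8, 9, 6],
    "OEM2": [8, 7, 7, 8, 7, 8, 6, 8, 7],
}

angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]                # repeat the first angle to close each polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for oem, scores in raw_scores.items():
    values = scores + scores[:1]
    ax.plot(angles, values, label=oem)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories, fontsize=8)
ax.set_ylim(0, 10)
ax.legend(loc="upper right")
plt.show()
```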

The second graph on this page is a more traditional bar graph that shows (top row) the average raw scoring for the four OEMs across all the categories, as well as a display of how each OEM did per category (see Figure 7.). Based on the raw scoring average, OEM1 performed the best.

Figure 7. Raw scoring bar graphs including a total average score.

Just because OEM1 performed the best in the raw scoring doesn't necessarily mean it is the best choice. The next tab of the spreadsheet, "Weighted Scoring Graph," also takes your selection criteria weighting into account. The graph views are very similar, so I'll only show the weighted scoring bar graph for comparison. See Figure 8.

Figure 8. Weighted scoring bar graph.

In this case, the weighted results pretty much match the raw. I loosely used real OEMs when I created this demo analysis. I probably should have fudged the pretend data a bit to make the weighted order come out differently than the raw. The point is that it could be different, and it is all based on what is important to your organization. 

It's important to note that the total average scoring may be a little misleading, because the nine sections may not carry equal percentages of how you want to make your decision. For instance, the "security" criteria may be much, much more important to you than the "deployment" criteria. Or the TCO (cost) may ultimately dictate which product you select, if the other criteria scores are close enough together that you are convinced the product will fulfill your mission. Regardless, at the end of the day, the process will allow you to make a better decision and capture the output to easily justify that decision.
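As a rough illustration of that point, here is a minimal sketch, with made-up numbers, of how the same per-category averages can tell a different story depending on how much emphasis each category gets in the rollup.

```python
# A rough sketch of why an equal-weight overall average can mislead: two rollups
# of the same per-category averages. Scores and category emphasis are made up.
category_scores = {                 # per-category weighted averages (0-10) for one OEM
    "Deployment model": 8.5,
    "Security": 6.0,
    "Total cost of ownership": 9.0,
}

# Equal rollup: every category counts the same.
equal_avg = sum(category_scores.values()) / len(category_scores)

# Emphasized rollup: security matters far more to this organization.
emphasis = {"Deployment model": 1, "Security": 5, "Total cost of ownership": 2}
weighted_avg = (sum(category_scores[c] * emphasis[c] for c in category_scores)
                / sum(emphasis.values()))

print(f"Equal-weight average:   {equal_avg:.2f}")     # 7.83
print(f"Security-heavy average: {weighted_avg:.2f}")  # 7.06 -- a noticeably different picture
```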

Step three: Creating a test plan

Test plans can be very detailed and take months to write, let alone execute. For our testing, we have simplified this approach by creating a "test plan template" rather than an actual detailed test plan. For instance, instead of listing a step-by-step description of how to create a user and validate access, we simply add a sub-section called "Create a User."
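To give a feel for what one of those sub-sections turns into on test day, here is a minimal sketch of the kind of scripted check that could back a step like "Create a User," assuming an S3-compatible endpoint and the boto3 SDK. The endpoint URL, credentials and bucket name are placeholders; the user-creation step itself is OEM-specific and would be done through that OEM's management interface.

```python
# A minimal sketch of validating access for a newly created user, assuming an
# S3-compatible endpoint and boto3. Endpoint, credentials and bucket name are
# placeholders for whatever the OEM under test provides.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.lab.example",   # hypothetical lab endpoint
    aws_access_key_id="NEW_USER_ACCESS_KEY",           # credentials issued to the new user
    aws_secret_access_key="NEW_USER_SECRET_KEY",
)

bucket = "poc-user-validation"
s3.create_bucket(Bucket=bucket)                        # can the new user create a bucket?
s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"poc test object")
obj = s3.get_object(Bucket=bucket, Key="hello.txt")
assert obj["Body"].read() == b"poc test object"        # round trip succeeded
print("User access validated: bucket create, PUT and GET all succeeded")
```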

During WWT's testing process, we work with you to capture your organization's care-abouts and what you would like to see and validate during testing. We already have a couple of fairly detailed templates created, so it's usually just a brief exercise to go through an existing template and modify it to better fit your organization's needs.

The template is broken up into five major sections and roughly includes all the details for the features and functionalities that are used to fill out the Kepner-Tregoe analysis worksheet. Those five sections are:

  • Overview
  • Setup
  • Functionality
  • Operations and maintenance
  • Resiliency testing

Figure 9. below shows an excerpt of a real test plan template for one of WWT's recent customer POCs. This is just a couple of pages to give an example of the template format; this particular test plan was ten pages long and also covered the selection criteria created in the Kepner-Tregoe analysis.

Figure 9. Excerpt of WWT's object storage feature and functionality test plan.

This test plan also incorporates other onsite testing, such as generating a test load on an actual deployment in the test lab and then performing various resiliency actions like pulling drives, taking nodes offline, downing sites or, in this case, even downing an entire emulated region.
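As an example of what "generating a test load" can look like in practice, here is a rough sketch of a simple load generator, again assuming an S3-compatible endpoint and boto3, that keeps writing objects and logs any failures while drives are pulled or nodes are taken offline. The endpoint, bucket name and object size are assumptions.

```python
# A rough sketch of a load generator to run while resiliency actions are
# performed, assuming an S3-compatible endpoint and boto3. Credentials are
# resolved from the environment (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY).
import time
import uuid
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

s3 = boto3.client("s3", endpoint_url="https://objectstore.lab.example")  # hypothetical endpoint
bucket = "resiliency-poc"
payload = b"x" * 1024 * 1024        # 1 MiB test object

errors = 0
for _ in range(10_000):
    key = f"load/{uuid.uuid4()}"
    try:
        s3.put_object(Bucket=bucket, Key=key, Body=payload)
    except (ClientError, EndpointConnectionError) as exc:
        errors += 1
        # Timestamped failures make it easy to correlate errors with the
        # moment a drive was pulled or a node was downed.
        print(f"{time.strftime('%H:%M:%S')} PUT {key} failed: {exc}")

print(f"Completed with {errors} failed PUTs out of 10000")
```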

Another important difference in how we perform this functionality testing is that we let each OEM actually perform the plan while you watch and grade the results!

It works like this… After working together with WWT to create the testing template, you get to relax and let WWT work with the OEMs to do all the heavy lifting to prepare for the actual testing. We validate how and what you want to test based on our earlier workshops, creating the test plan together. We then work with the OEMs to bring in the equipment and configure it based on the level of testing you need to feel comfortable making your final decision. This might mean multiple emulated sites and/or regions, it might include WAN emulation to replicate the exact WAN latency between your organizational sites, or it might be a single site that is a fair representation of what you actually want to deploy, with enough performance and capacity to also do performance testing. It's not unusual for these POCs to be made up of millions of dollars of gear.
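However the lab implements it, here is a minimal sketch illustrating the concept of WAN latency emulation, using Linux netem driven from Python. The interface name and delay values are assumptions, and a full-scale POC may well use dedicated WAN emulation gear instead.

```python
# A minimal sketch of adding WAN-like latency with Linux tc/netem.
# Interface name and delay/jitter values are assumptions; run as root.
import subprocess

def add_wan_latency(interface: str = "eth0", delay_ms: int = 35, jitter_ms: int = 5) -> None:
    """Add artificial delay and jitter to all traffic leaving `interface`."""
    subprocess.run(
        ["tc", "qdisc", "add", "dev", interface, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms"],
        check=True,
    )

def clear_wan_latency(interface: str = "eth0") -> None:
    """Remove the netem qdisc, restoring normal latency."""
    subprocess.run(["tc", "qdisc", "del", "dev", interface, "root", "netem"], check=True)

if __name__ == "__main__":
    add_wan_latency("eth0", delay_ms=35, jitter_ms=5)  # ~35 ms one-way delay with 5 ms jitter
```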

Once WWT has the test kit installed and validated by the OEMs, we work with the OEMs to fully understand, and even practice performing, the test plan with them. We do this for a couple of important reasons: (1) we want each OEM to perform the same consistent test plan in the same way and order; and (2) when the OEM actually performs the test plan with us "for real" (on what I like to call "game day"), we try to fit all the testing for a single OEM into a single day!

Normally it works like this: we all meet at our headquarters in STL, and we spend all day with OEM1 executing (and later scoring) the test plan. The next day, we repeat the process with OEM2, and so on until all the OEMs are complete. That way, within the space of a week you have enough information to make an educated decision on which product to move forward with.

Doing it this way greatly shortens the time it takes to evaluate, test and select a new solution. I've had customers (some of them Fortune 100 companies) tell me WWT was able to accomplish POC activity in a couple of months that they would not have been able to do on their own within a year, if at all. I've also had a Fortune 5 lead architect tell me that it was the best lab environment they have ever worked in.

For more detailed information, or to learn how to work with us to perform your next storage evaluation, feel free to contact us here or go through your local WWT sales account team.
