Listen to this TEC17 podcast and hear WWT experts Rich Harper and James Weiser discuss software-defined and object storage, the value proposition around object storage, and what customers need to consider when addressing storage challenges.

Until recently, object storage solutions have been associated with two main monikers: "cheap and deep" and "slow data access."

It's not surprising that these characteristics have been used to describe object storage; they represent the technology's biggest benefit as well as its biggest drawback. Object storage solutions often operate at WAN speeds for data access, but they are also cost optimized, built on commodity compute platforms that use the largest-capacity, lowest $/TB, 7200 RPM spinning SATA drives on the market.

We continue to see strong interest from our customers in object storage. During the past several years, the number of customers evaluating solutions has doubled year over year. This year alone, we've seen that number quadruple!

Clearly, factors beyond cost are driving this high interest in and adoption of object storage.

Factors driving object storage adoption 

1. Software developers. Developers have grown up cutting their teeth on AWS and S3; they're now experienced with the S3 API and find it easier to implement against than a standard POSIX file system. Object storage also allows for quicker deployment of storage (it's easier to consume). I'd argue that, just like the move from block storage to NAS file systems over a decade ago, object storage offers the same kind of progression from traditional NAS file systems: easier and quicker to deploy and use (see the short sketch after this list).

2. A Global File System: Object storage gives us the ability to present the same data across a wide geography from multiple nodes, potentially located at different sites, while still maintaining a single source of truth.

3. Data Protection Abstraction (RAIN versus RAID protection for resiliency and availability): The protection of data has been abstracted from a single-array, hardware RAID (Redundant Array of Independent Disks) approach to a distributed, software-based Erasure Coded (EC) RAIN (Redundant Array of Independent Nodes) model. Protection is spread across many nodes, and potentially even sites, to present an "always available" storage cloud that remains accessible even after the loss of multiple individual drives, nodes or sites.

4. Higher levels of Protection (EC durability): With EC comes the ability to easily increase protection levels, moving from a standard RAID 5 or 6 model that tolerates at most two failures in a single RAID group to one that may have four or more protection fragments, e.g. 10/4 EC (10 data fragments and four parity fragments). A 10/4 scheme can lose any four fragments and still serve the data, at a capacity overhead of only 40 percent.

Solutions typically run background checks that analyze fragment integrity and correct bit errors if they occur. This results in a calculated "durability" (the expected time between potential data corruption events) with eye-popping figures like 10 to the 14th power years. This has been one of the main drivers for using object solutions to replace tape subsystems, where data must be recallable no matter what.

5. Rich Metadata: Object storage gives us the ability to add descriptors to an object that further describe the "who, what, where, when and why." This metadata becomes a way to sort and find pertinent data after it's created.
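To make item 1 (and the metadata point in item 5) concrete, here's a minimal sketch contrasting a POSIX-style write with an S3 PUT using Python's boto3 SDK. The endpoint URL, bucket name, file path and metadata values are illustrative placeholders, not references to any particular product.

```python
import boto3

# POSIX-style write: requires a mounted file system, a directory tree and
# OS-managed permissions before the application can write a byte.
with open("/mnt/nas/finance/q3.csv", "wb") as f:
    f.write(b"region,revenue\nwest,1200\n")

# S3-style write: a single HTTP PUT against a flat bucket/key namespace.
s3 = boto3.client("s3", endpoint_url="https://grid.example.com")  # placeholder endpoint
s3.put_object(
    Bucket="finance",
    Key="reports/q3.csv",
    Body=b"region,revenue\nwest,1200\n",
    Metadata={"department": "finance", "quarter": "q3"},  # rich, user-defined metadata (item 5)
)
```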

The evolution of object storage performance

As more organizations adopt object storage for the reasons above, we're now seeing object storage solutions start to encroach on traditional storage use cases, including those that require a respectable degree of performance.

Again, this advancement is much like the transition from block to file. The first iterations of purpose-built NAS arrays were much slower than the block arrays of the same generation. Part of that was fundamental to the access protocols, with NAS inherently slower (random access to blocks on disk with block storage versus a full file transfer with NAS), but the "latency," also called "access time" (or, in the object world, "time to first byte"), was slower as well.

Over time, NAS software and driver stacks were optimized to the point that, when the same underlying hardware was used for block and file solutions, access times were similar. We're seeing the same thing in the development cycle of object solutions.

Just as the access methods of the block and file world have until now been based on the SCSI transport protocol, which was optimized for spinning disks, and are only now moving to the NVMe transport protocol to take full advantage of the speed and direct memory access of solid-state devices, object storage historically hasn't been optimized for speed either.

In fact, the software stack has always assumed that access happens not only on much slower, large-capacity spinning drives, but also potentially at WAN latency with an eventually consistent data model. This could be read as, "Object storage solutions weren't designed to be fast because they didn't have to be."

It also means that object storage performance limitations live largely in the software stack, which is now starting to go through an optimization process by the old guard of storage OEMs. It should be noted that many of the start-up storage providers are building solutions designed for performance from the ground up.

Defining performance

Before we go too much further into performance, we need to get on the same page as to what performance really means. 

Storage performance has three main characteristics: access time, response time and throughput. For the purposes of object, we're going to drill into access time and throughput. 

This is not to say that response time isn't important, but it could be argued that response time is a direct result of the other two. Response time is how long it takes to complete the entire operation (e.g., the transfer of an entire object). In the object world this is often expressed as "transactions per second," or TPS, and much like traditional file solutions, access is sequential: the operation isn't complete until the entire object is transferred.
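As a rough illustration of that relationship (a back-of-the-envelope model, not a benchmark; the 25 ms access time and roughly 500 MB/s stream rate below are made-up figures), response time can be modeled as access time plus transfer time:

```python
def estimated_response_time_s(object_bytes, access_time_s, stream_bytes_per_s):
    """Rough model: response time = access time + (object size / throughput)."""
    return access_time_s + object_bytes / stream_bytes_per_s

# Small object: access time dominates the response time.
print(estimated_response_time_s(4 * 1024, 0.025, 500 * 1024**2))      # ~0.025 s
# Large object: transfer time dominates, so throughput is what matters.
print(estimated_response_time_s(5 * 1024**3, 0.025, 500 * 1024**2))   # ~10.3 s
```

That is why the two sections below look at throughput and access time separately.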

Throughput

Object storage can achieve much higher throughput (when considering the entire grid solution) than traditional block and file solutions, even the largest and most expensive ones. That is, if you consider throughput as "work performed" once data has been transferred. 

The reason we can say that object is more performant in this regard is that the object storage solutions discussed here are "grid" solutions: they create a global namespace that can span tens to hundreds of nodes with thousands of disks, all working in parallel to serve thousands of different data streams (also called "concurrency"). So while a SAN array sitting on a couple of floor tiles can serve several GB/s of throughput very quickly (with low latency), an object storage grid can serve hundreds of GB/s over multiple floor tiles across multiple data centers, albeit nowhere close to the low latency of a "big-iron" block storage array.
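To illustrate how that concurrency adds up to aggregate throughput, here's a sketch that pulls many objects in parallel with boto3. The endpoint, bucket and keys are hypothetical; the point is the parallel fan-out, not the specific numbers.

```python
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3", endpoint_url="https://grid.example.com")  # placeholder endpoint

def fetch(key):
    # Each GET is an independent HTTP request, so the grid can spread the
    # requests across many nodes and disks at once.
    return s3.get_object(Bucket="telemetry", Key=key)["Body"].read()

keys = [f"events/part-{i:05d}.json" for i in range(1000)]  # hypothetical object keys

# Aggregate throughput comes from many concurrent streams, not one fast stream.
with ThreadPoolExecutor(max_workers=64) as pool:
    payloads = list(pool.map(fetch, keys))
```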

Access time

Access time (or time to first byte) for a block or file array is typically single-digit to sub-millisecond, depending on the media type. Access time for an object array, even with no latency introduced by, say, a multi-site WAN deployment, is tens of milliseconds (for argument's sake, let's say 25-30 ms is typical). Those response times aren't impacted as much by the media type as by the software stack and the network connecting the nodes together.
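If you want to see this for yourself, time to first byte is easy to approximate with a quick boto3 sketch. Endpoint, bucket and key are placeholders, and the reported number will include client and network overhead as well as the storage stack itself.

```python
import time

import boto3

s3 = boto3.client("s3", endpoint_url="https://grid.example.com")  # placeholder endpoint

start = time.perf_counter()
body = s3.get_object(Bucket="telemetry", Key="events/part-00000.json")["Body"]
body.read(1)                                    # block until the first byte arrives
ttfb_ms = (time.perf_counter() - start) * 1000

body.read()                                     # drain the rest of the object
print(f"time to first byte: {ttfb_ms:.1f} ms")
```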

The application of higher performing object storage solutions

Data center architects are starting to see the need for an object storage solution that is simple to provision and globally distributed, but also performant.

This need is driven mainly by the desire to do high-performance ingest and streaming data transformation/analytics, and/or to run analytics in place, as in the cases of Splunk and Hadoop.

As data sets get larger and larger, with more and more disparate data being pulled in, moving that data to a staging area where it can be analyzed is becoming a bottleneck in the time it takes to produce analytical results. This can be minimized by creating a "data swamp" on a centralized data repository, which can then be filtered into a data lake and analyzed in place.
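For example, analytics engines that speak S3 (such as Spark or Hadoop via the S3A connector) can query the data where it lands instead of copying it into HDFS first. Here's a minimal PySpark sketch, assuming the hadoop-aws S3A connector is on the classpath; the endpoint, credentials and bucket are placeholders.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("analytics-in-place")
    .config("spark.hadoop.fs.s3a.endpoint", "https://grid.example.com")  # placeholder
    .config("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY")              # placeholder
    .config("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY")              # placeholder
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Query the data in place on the object store; no staging copy to HDFS first.
events = spark.read.json("s3a://telemetry/events/2019/*/*.json")
events.groupBy("status").count().show()
```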

So what can you expect from today's performant object solutions? 

Note: for the purposes of this article, we're describing the work being done by traditional storage OEMs, since they have fully baked products with all the elements required for the enterprise data center. That isn't to say there aren't interesting newer products from smaller OEMs coming onto the market that are custom built for a higher performance profile.

The first iterations of the quest by mainstream object storage solutions to improve performance included moving the internal object file system or directory lookup and its corresponding metadata (often running in a Cassandra database) to SSDs. In some solutions where the front-end ingest layer is separated from the back-end bulk capacity, the front-end nodes are being hosted on SSDs as well.

Be advised that there are still a lot of variables to consider, and whenever we're asked a performance question, the pat answer is "it depends" (which it does). Let's just say that, with a single grid with no WAN latency running on all-flash hardware, it's now becoming feasible to see response times around 10 ms, which is a significant improvement!

This year at NetApp Insight, NetApp introduced an all-flash storage appliance to its StorageGRID line. More significantly, the new StorageGRID code release (11.3) is being optimized to take advantage of the faster media.

NetApp openly admits that additional performance gains can be made in the software stack and that it is looking to implement those improvements in future releases. So while we don't expect to see sub-millisecond response times anytime soon, we do expect to see steady improvement in object storage access times.

We will be testing the NetApp all-flash object platform soon in the WWT Advanced Technology Center to measure exactly what levels of performance we can achieve. 

For more information on this, other object storage solutions or any of your data center needs, connect with me or reach out to your WWT account team.