The critical latency that no one's talking about but should be
Everyone is focused on bits-and-bytes latency, the kind where glacial speeds and minutes of delay can translate into millions in lost revenue. But consider another kind: operational latency.
Operational latency is a byproduct of the excessive manual processes that many legacy solutions still rely on. For the purposes of this discussion, we'll focus on the operational latency associated with legacy data management.
Legacy data management solutions have been around since before Y2K. This means that many of the manual processes they mandated have since been eclipsed by next-generation technologies, such as policy-driven service level agreements (SLAs) for set-it-and-forget-it lifecycle data management, orchestration, self-service-friendly APIs, global search across hybrid cloud deployments, artificial intelligence-optimized backup scheduling, and machine-learned insights for easy recoveries. Sounds like a lot of advancements, right?
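To make the "set-it-and-forget-it" idea concrete, here is a minimal, hypothetical sketch (not Rubrik's actual API; all names are illustrative) of a declarative SLA policy: the administrator states the desired outcomes once, and a lifecycle engine derives every subsequent action from the policy rather than from manual runbooks.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class SLAPolicy:
    """Hypothetical declarative protection policy: the admin states
    outcomes; the platform derives the schedule and lifecycle actions."""
    name: str
    backup_every: timedelta    # recovery point objective
    retain_for: timedelta      # how long backups are kept at all
    archive_after: timedelta   # when to tier off to object storage

def next_action(policy: SLAPolicy, backup_age: timedelta) -> str:
    """Decide what the lifecycle engine does with a backup of a given age."""
    if backup_age >= policy.retain_for:
        return "expire"
    if backup_age >= policy.archive_after:
        return "archive-to-object-storage"
    return "keep-local"

# A "gold" tier: back up every 4 hours, archive after a week, keep 90 days.
gold = SLAPolicy("gold", timedelta(hours=4), timedelta(days=90), timedelta(days=7))
print(next_action(gold, timedelta(days=10)))  # a 10-day-old backup gets archived
```

The point of the model is that no human schedules anything: each backup's fate is a pure function of the policy and its age.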
In recovery, you have two scenarios: the small "whoops" or the potentially catastrophic disaster. The whoops are minor, contained incidents, like when a user loses a single file, deletes the wrong thing, or overwrites a document with another version. Little whoops happen all the time. That's when it's important to have operational self-service recovery tools enabled by Google-like file search capabilities. Together, these two capabilities alone can recover hours, and sometimes days, of lost productivity.
Then there are the potentially catastrophic disasters: the entire data center is non-functional, and you need to get it back up and running immediately. Imagine you've built a high-performance production environment and, unfortunately, ransomware has corrupted all those business-critical files. Consider the ramifications of the time it takes to discover the scope of the breach, and then factor in the recovery time associated with legacy tools that require rehydration from tape across the network. The operational inefficiencies are staggering and lead to significant cost and increased duration of the outage:
- 44 percent of enterprises claim that their average cost per hour of server downtime ranges from $1 million to more than $5 million.
- Average downtime from a ransomware attack has risen to 16.2 days — driven by the increased number of successful attacks against larger enterprises with complex networks.
- Business interruption costs, such as lost revenue and long-term brand damage, are 5-10x higher than direct ransomware costs.
If you aren't convinced that operational latency is critical, just look at the cities of New Orleans and Durham, North Carolina, as proof. Both fell victim to the Russian ransomware known as Ryuk. New Orleans was forced to declare a state of emergency. Months later, the city was still affected, with multiple departments forced to conduct operations on paper.
Durham had recently upgraded to a next-generation data management solution, so its recovery latency was low. The new solution detected the ransomware attack and used machine learning (ML) to flag the infected files and identify the last known good versions. The city was back online quickly, able to easily recover the most recent data from the immutable backups it had in place. Downtime was limited, and no state of emergency needed to be declared.
The combined team of Rubrik and NetApp
Last year, data storage leader NetApp announced it was working in partnership with cloud data management leader Rubrik to make sure their joint customers would experience the best of both worlds: lower bits-and-bytes latency as well as significantly reduced operational latency. When you bring these two solution providers together, both focused on latency reduction, you've got an entire data management platform that ensures low latency in every situation.
And for even further alignment, Rubrik's Cloud Data Management platform protects, automates, and secures applications across data centers and clouds. Users can say goodbye to manual backup when they use Rubrik's AI-enabled SLA policy engine for automated backup, recovery, and archival to NetApp StorageGRID or the public cloud.
NetApp’s StorageGRID is a software-defined object storage solution that supports industry-standard object APIs. With StorageGRID’s ILM policy engine, users can create multiple service levels with metadata-driven object lifecycle policies, optimizing performance, cost, and location.
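As an illustration of what "metadata-driven object lifecycle policies" means in practice (StorageGRID's real ILM rules are configured through its management interface, not code like this; the rule logic below is invented for the example), a placement decision keyed on object metadata might look like:

```python
def placement_for(obj_metadata: dict) -> str:
    """Toy metadata-driven placement rule: route an object to a storage
    scheme based on its tags and size, the way an ILM rule keys on metadata."""
    tags = obj_metadata.get("tags", {})
    if tags.get("classification") == "regulated":
        # Compliance data: maximize durability across sites.
        return "erasure-coded, two on-prem sites"
    if obj_metadata.get("size_bytes", 0) > 100 * 1024**2:
        # Large objects: erasure coding is cheaper than full replicas.
        return "erasure-coded, single site"
    # Small, hot objects: replicate for fast access.
    return "replicated x2, primary site"

print(placement_for({"size_bytes": 500 * 1024**2, "tags": {}}))
```

The takeaway is that placement, durability, and cost trade-offs are decided per object from its metadata, with no per-object administration.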
With Rubrik and NetApp StorageGRID, users can automate data lifecycle management while leveraging StorageGRID as a cloud-scale, object-based, archive target. Both Rubrik and the StorageGRID index file metadata to enable global file-level search for instant access to massive amounts of unstructured data. Users can quickly locate and restore VMs, databases, files, and more across public or private clouds.
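Why does indexing file metadata make global search "instant"? A minimal sketch (a generic inverted index, not either vendor's implementation) shows the idea: lookups hit a token index instead of walking billions of files.

```python
from collections import defaultdict

class MetadataIndex:
    """Minimal inverted index over file paths: search cost depends on the
    query, not on how many files exist in the backup estate."""
    def __init__(self):
        self._by_token = defaultdict(set)

    def add(self, path: str):
        # Split a path like /finance/q3/budget.xlsx into searchable tokens.
        for token in path.lower().replace("/", " ").replace(".", " ").split():
            self._by_token[token].add(path)

    def search(self, term: str) -> set:
        return self._by_token.get(term.lower(), set())

idx = MetadataIndex()
idx.add("/finance/q3/budget.xlsx")
idx.add("/hr/handbook.pdf")
print(idx.search("budget"))  # {'/finance/q3/budget.xlsx'}
```

A real system adds snapshot versions, permissions, and distributed shards, but the latency win comes from the same inversion: index once at backup time, then answer searches from the index.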
Rubrik and NetApp are the perfect pair for data management, protection, and low latency. Here are three key reasons why:
- Integration. Rubrik deeply integrates with NetApp to increase data protection and speed-to-data-recovery. Their joint integration of the SnapDiff API reduces how long it takes to determine which files have changed on the NetApp NAS. When Rubrik goes to back up the billions of unstructured data files on the NetApp NAS, NetApp hands it the decoder ring to the exact files that have changed since the last backup.
- StorageGRID integration. Rubrik has supported NetApp StorageGRID for quite a while, but we've done some additional testing to make sure it's a solid archive platform for Rubrik’s solution. StorageGRID is a high-performance, low-cost way to store petabytes of strategic data on-premises. How does that help latency? In addition to having high throughput links to StorageGRID, retaining the data locally eliminates the transfer hop from the cloud.
- Vision. Both NetApp and Rubrik are focused on enabling their customers to get data into the cloud in a way that makes sense without compromising choice. Their joint solutions work together to ensure precise recovery, reduced egress charges, and end-to-end encryption. Over a longer period, their joint vision will also help accelerate a customer’s journey to the cloud, which is another kind of latency reduction!
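The SnapDiff advantage described in the first bullet, receiving a changed-file list directly instead of crawling the whole filesystem, can be sketched generically (this is an illustration of snapshot differencing, not NetApp's actual API):

```python
def changed_paths(prev_snapshot: dict, curr_snapshot: dict) -> set:
    """Compute which files changed between two snapshots, given
    {path: content_hash} maps — the kind of delta a SnapDiff-style API
    returns so the backup client never walks the full file tree."""
    changed = set()
    for path, digest in curr_snapshot.items():
        if prev_snapshot.get(path) != digest:
            changed.add(path)  # new or modified since the last snapshot
    # Files present before but gone now were deleted.
    changed.update(set(prev_snapshot) - set(curr_snapshot))
    return changed

prev = {"/a.txt": "h1", "/b.txt": "h2"}
curr = {"/a.txt": "h1", "/b.txt": "h3", "/c.txt": "h4"}
print(sorted(changed_paths(prev, curr)))  # ['/b.txt', '/c.txt']
```

With billions of files, backing up only this delta, rather than re-scanning everything to find it, is where the operational latency savings come from.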
Rubrik and NetApp work together to lower latency in every situation.
Take an idea out for a test-drive via the WWT Advanced Technology Center
WWT helps customers that are struggling with key business challenges such as high latency by offering a variety of services through our Advanced Technology Center (ATC), which lets our customers test-drive potential solutions. The robust technology testing environment helps customers zero in on end-to-end, multi-vendor solutions that meet business outcomes.
For example, WWT specialists can implement Rubrik’s Cloud Data Management platform with NetApp’s StorageGRID and prove it can exceed customers’ business SLAs. Whether our customer wants to test it for 1,000 users or 10,000 users, we can scale it out inside the ATC. When our customers complete a test plan in the ATC, they go forward to deployment with the confidence of knowing they have a proven solution.
Create an account on wwt.com to access interactive resources to help you digitally transform your data and improve all aspects of data management, from security to latency.