The Hitachi Virtual Storage Platform (VSP) 5000 series is the culmination of more than half a century of innovation and experience in the IT sector. No other vendor is as committed as Hitachi to helping you and your customers ingest more data from any workload and create strategies to monetize that data. IT departments are increasingly tasked with accelerating business innovation, and providing stakeholders with competitive advantages and the data insights needed for strategic business decisions is a growing priority. To improve your productivity, manage risk and drive down costs, the Hitachi VSP 5000 enterprise storage array is built to handle a variety of workloads within a single architecture.

All of the features found in the previous generation of the enterprise portfolio have been carried over and improved upon through the addition of nonvolatile memory express (NVMe) technology and Hitachi Accelerated Fabric. While these two innovations top the list of new features in the VSP 5000 series, they are also the heart of a future-proof storage architecture that lets your business achieve digital transformation and consolidation with ease.

IoT data streams, video analytics platforms and feeds, and structured and unstructured data platforms all generate massive volumes of data. That data must be guaranteed available, always reliable, and agile enough to handle a shift in strategic direction or business need. The VSP 5000 series is ready to handle the workloads of today and tomorrow. Using efficient storage technologies, colder datasets can be automatically tiered to hard disk drives (HDDs) or migrated to the cloud or to virtualized storage, and duplicate data can be identified and deduplicated either inline or at rest, alongside encryption.
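To illustrate the kind of automated tiering described above, here is a minimal sketch of an age-based tiering policy. This is not Hitachi software: the tier names and the 30-day and 180-day cutoffs are hypothetical assumptions chosen purely for illustration.

```python
# Illustrative sketch of an age-based tiering decision (hypothetical policy,
# not a Hitachi API). Datasets that have not been accessed recently are
# moved to cheaper tiers, as the text describes.
def choose_tier(days_since_last_access: int) -> str:
    """Classify a dataset into a storage tier by access recency."""
    if days_since_last_access < 30:
        return "nvme-ssd"    # hot data stays on flash
    if days_since_last_access < 180:
        return "hdd"         # colder data is tiered to hard disk
    return "cloud"           # coldest data is migrated off-array

print(choose_tier(7))    # nvme-ssd
print(choose_tier(90))   # hdd
print(choose_tier(400))  # cloud
```

A real tiering engine would weigh I/O patterns, capacity pressure and policy settings rather than a single age threshold, but the decision structure is the same.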
It is the first storage system in the industry to offer a mixed NVMe solid-state disk (SSD), serial-attached SCSI (SAS) SSD and HDD environment that can scale up in capacity and also scale out for performance. This approach gives you a composable data platform for all your workloads.
Take advantage of the advanced capabilities of the VSP 5000 series across all of your existing data center storage assets through storage virtualization, which Hitachi pioneered. Storage virtualization gives you a single management control point for multiple storage systems, increasing administrative efficiency. All data services available with the VSP 5000 series, such as data reduction, automation and metro-clustering, are extended to virtualized systems, giving them more value and an extended life cycle.
There are four models in the VSP 5000 series. The all-flash VSP 5100 is a scale-up enterprise storage platform with a dual-controller block supporting open and mainframe workloads. It offers a nondisruptive upgrade path to the all-flash VSP 5500, which starts with a single quad-controller block and scales out to three blocks as you grow, unified by the Hitachi Accelerated Fabric, a PCIe switching technology built specifically for the VSP 5000 series and beyond. Both models are also available as hybrid arrays: the VSP 5100H and VSP 5500H.

The VSP 5000 series starts as small as 3.8 TB and scales up to 69 PB of raw capacity and 21 million IOPS of performance, allowing massive consolidation of workloads for cost savings and data optimization. With response times as low as 70 microseconds, you now have a storage platform that can meet and exceed your business partners' demands. Our patented Hitachi Accelerated Fabric allows Hitachi Storage Virtualization Operating System RF (SVOS RF) to offload I/O traffic between blocks, using an architecture that provides immediate processing power, without wait time or interruption, to maximize I/O throughput. As a result, your applications suffer no latency increase even when you scale the system out, because access to data between nodes is accelerated.
You can place your business data on the Hitachi VSP 5000 series, relying on 57 years of Hitachi engineering experience to deliver reliability, accessibility and serviceability. The VSP 5000 series builds on that experience, offering a superior range of continuity options, all backed by the industry's first and most comprehensive 100% data availability guarantee. Migrate data from older systems nondisruptively so operations can continue, nonstop. Hitachi's scale-out, active-active controller architecture protects against local faults and performance issues. With global-active device, we enable full active-active metro-clustering between data centers up to 500 km apart. Replicate to a third data center using Hitachi Universal Replicator software, which offers bidirectional replication, to make efficient use of all your investments. Your system can be monitored in the cloud via Hitachi Remote Ops to proactively predict and prevent downtime.

With the VSP 5000 series you gain rock-steady hardware, but what about your application's continuity and recovery? The series is supported by Hitachi Ops Center Protector, which provides application-aware snapshots, copy data management and instant recovery, so you can recover from a data disaster in seconds, not hours.

Security compliance is essential, and in the VSP 5000 series we have taken steps to improve the security of how data is stored and administered. We have greatly reduced the risk of data falling into unauthorized hands with FIPS 140-2 encryption on our media. Our erasure services align with NIST SP 800-88r2 and ISO/IEC 27040:2014. Finally, we have hardened system access to safeguard against unauthorized access and hacking: the VSP 5000 series uses TLS 1.3 for secure communications to stop improper access by other systems on the fabric.
Simplifying the management, provisioning and performance of data platforms can become a demanding, never-ending cycle. This is where AI operations come in: the VSP 5000 series can take control of repetitive tasks to reduce, and even eliminate, the need for human intervention, freeing your staff to focus on innovation and tactical business efforts. AI constantly monitors the environment to make sure resources are performing to service level agreements (SLAs); if issues are noted, it can predict and prescribe changes to improve operational efficiency. AI can also simplify complex decision-making, such as predicting when additional storage might be needed or how quality of service (QoS) should be configured. Automation is a critical aspect of AI operations: automation software handles configuration, provisioning and common management tasks instead of humans. Automation is often applied at the start of a deployment to ensure resources are set up according to best practices and no steps are missed that could result in data loss. It can also be used in concert with AI to automate infrastructure updates.
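The monitor-then-remediate loop described above can be sketched in a few lines. This is a hypothetical illustration, not a Hitachi API: the metric names, SLA thresholds and remediation actions are all placeholder assumptions showing the decision structure an AIOps engine might follow.

```python
# Illustrative sketch of SLA-driven remediation (hypothetical, not a Hitachi
# API). Metrics are compared against SLA targets; breaches yield prescribed
# actions that automation software could then carry out.
from dataclasses import dataclass

@dataclass
class VolumeMetrics:
    latency_ms: float          # observed average response time
    capacity_used_pct: float   # pool capacity consumed

# Hypothetical SLA targets for one service tier.
SLA_LATENCY_MS = 1.0
CAPACITY_ALERT_PCT = 80.0

def evaluate(metrics: VolumeMetrics) -> list[str]:
    """Return the remediation actions a monitoring engine might prescribe."""
    actions = []
    if metrics.latency_ms > SLA_LATENCY_MS:
        actions.append("rebalance-workload")   # e.g. shift hot data to NVMe
    if metrics.capacity_used_pct > CAPACITY_ALERT_PCT:
        actions.append("expand-pool")          # e.g. provision more capacity
    return actions

# A volume breaching both thresholds triggers both actions.
print(evaluate(VolumeMetrics(latency_ms=2.5, capacity_used_pct=91.0)))
# ['rebalance-workload', 'expand-pool']
```

In practice the "predict" step would use trend analysis over historical telemetry rather than instantaneous thresholds, but the prescribe-and-automate flow is the same.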