It wasn’t long ago we published an article about some new HCI hardware in the ATC. As you might know, WWT and Cisco have a pretty big partnership, but it’s not something we take lightly. We have a long tradition of working hard to show each other value and keep growing for total world domination... or something.
Let’s be honest here: in a world of IOPS, one cannot dominate without NVMe, and we are excited to now have an all NVMe HyperFlex system in the ATC.
Cisco HyperFlex All NVMe
A few months ago, Cisco released support for all NVMe systems as part of their 4.0 code release. As techies, of course we were all salivating over this and just had to get our hands on one. It finally happened, and it’s ours to keep!
One aspect of the HyperFlex solution that gives it an edge over other solutions on the market that are working on, or have introduced, all NVMe support is the fabric interconnects (FIs). Historically, we've plugged our HCI systems into 10G switching, which has worked out great. With an all NVMe system, though, the network must be part of the conversation. By including the FIs as part of the solution, Cisco has essentially solved the need for a high-speed Ethernet storage backbone without an entire network refresh.
As HCI further matures, new use cases come up that we had never thought about putting on HCI — your typical VDI, ROBO and general virtualized workloads are the most common. Large or high-performance databases, however, are a little more unusual.
Will the introduction of NVMe to HCI start flipping that on its head? In the coming weeks, we’ll be working on releasing some independent performance testing with database type workloads. With a large array of HyperFlex systems in the ATC, our vision is to test the same workloads with hybrid, all-flash and all NVMe systems.
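To give a sense of the kind of database-style I/O profile such a comparison might exercise, here is a sketch of an fio job file approximating an OLTP-like mix. This is purely illustrative — the block size, read/write mix, queue depth and job count are assumptions, not our actual test parameters.

```ini
; Hypothetical fio job: small-block random mixed I/O,
; roughly OLTP-shaped (70% reads, 8k blocks).
[global]
ioengine=libaio
direct=1
time_based=1
runtime=300
group_reporting=1

[oltp-mixed]
rw=randrw
rwmixread=70
bs=8k
iodepth=32
numjobs=4
size=20G
```

Running the same job against hybrid, all-flash and all NVMe clusters would give an apples-to-apples view of latency and IOPS scaling across the tiers.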
Acceleration engine card
Not only did we get the all NVMe nodes, but in case you missed it, Cisco also released the first version of their compression offload card, which is essentially an FPGA.
I can’t divulge everything we’ve seen from Cisco on this, but the roadmap is pretty exciting. These NVMe nodes have the offload card, and you can bet that we’ll be testing that thing out. Our initial assumption is that the card is a no-brainer to include, but testing should put that assumption to the test.
We also can't overlook an important management aspect of Cisco's solution: Intersight. The SaaS-based management tool has come a long way since its inception and only continues to improve. A Cisco Intersight workshop is currently under development, but we plan on having these new HyperFlex NVMe systems fully deployed and managed through Intersight.
A follow-up article on the greater Intersight model will be published, and we'll make sure we highlight our experiences and caveats we may have run into with the HyperFlex system. Of course, all of this wouldn't be possible without our Cisco team. A thank you is owed to them for the continued investment in our partnership.
More on our testing plans
We’ve got ideas in mind for how to test these out, but we’d love to get your input as well. If you have ideas and/or want to help, reach out! The Global Engineering Team (GET) and ATC team would love some help. There are only 23 work hours in a day, and even our ATC superheroes have to sleep at some point!