What you need to know about Cisco’s new 32G FC on MDS 9700 and UCS C-Series and why it’s a big deal
Cisco has announced their new, high-performance 32G Fibre Channel Module on MDS 9700 Directors and 32G Host Bus Adapter for UCS C-Series.
So why get excited? After all, we already have 40G Ethernet and even 100G Ethernet, primarily for ISLs (Inter-Switch Links), so doesn’t it seem like 32G FC should have already happened?
Well yes and no, because that’s really like comparing apples and oranges. Think about the fact that the bandwidth has doubled over a period of about five years and is 32 times more than it was a dozen years ago. Back then I was impressed when 1G GLMs (Gigabaud Link Modules) were introduced to connect FC cables instead of having to wrangle 50 pounds of parallel SCSI cables to connect to a circa 1997 SYM4 disk array (Ah, the good old days when life was simpler…not!). But I digress.
For about as long as I’ve been working with FC, people have been telling me FC was as good as dead and that it would soon be replaced with Ethernet. Of course, those were mainly Ethernet guys who thought a network meant just connecting everything up to everything else and letting the spanning tree figure it out (Dang. I’m still digressing!).
Anyway, the reality is that there still isn’t a better low latency way to deliver ordered packets (the ordered part is very important in storage networking) over campus distances than FC. And when it comes to storage, a couple of milliseconds of latency has always mattered, but in today’s storage world microseconds count!
My non-storage brethren would probably nod their heads and say, “Yeah, I can see why 32G would be much faster than….” In reality, 32G vs. 16G doesn’t have anything to do with delivery speed, but rather with how much work can be done over the same period of time. I haven’t mentioned the other exciting part yet – the part that actually affects the speed. For now, just know that FC networking, by architectural design, is still faster than traditional Ethernet networks and preferable as the transport mechanism for low-latency storage applications.
Think of the difference between 16G and 32G like this: you have a 1/2″ hose to fill the kids’ wading pool in the yard. Filling the pool inside and then dragging it out of the house into the yard doesn’t work nearly as well and isn’t nearly as easy. Oh wait! That’s another good storage analogy describing why we made the move from those heavy parallel SCSI cables I mentioned earlier. With those, you only had about 25 feet of cable to work with between your servers and the storage, meaning your marvelous storage array sat right next to your directly connected compute – in the same immediate area – and there wasn’t any way you were taking a “data hose” outside! But back to modern times.
You have this 1/2″ hose (16G) to fill the kiddie pool, and it’s going to take about a half hour to fill. But if we had a hose double that size (32G) at the same pressure (we can’t ignore the laws of fluid dynamics, now can we?!), then it would fill the pool in half the time! Ah, you say…see, it’s faster! Well, not really. Granted, it completed the job more quickly, but it still took the same amount of time to get that first drip of water into the pool no matter the size of the hose. The time between turning on the hose and that first drip landing in the pool is what storage networking calls latency. And that latency wasn’t affected by simply doubling the transport size.
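The hose analogy boils down to one simple formula: completion time = latency + payload ÷ line rate. Here’s a quick back-of-the-envelope sketch in Python – the latency and nominal line-rate figures are made-up round numbers for illustration, not Cisco specs or real FC wire rates:

```python
# Illustrative model: completion time for a transfer is
#     time = latency + payload_bits / line_rate
# Doubling the line rate halves the transfer term but leaves latency untouched.

LATENCY_S = 10e-6                 # assumed 10-microsecond fabric latency (hypothetical)
PAYLOAD_BITS = 8 * 1024**3 * 8    # an 8 GiB transfer, expressed in bits

def completion_time(line_rate_bps: float) -> float:
    """Time from 'turning on the hose' to the last bit arriving."""
    return LATENCY_S + PAYLOAD_BITS / line_rate_bps

t16 = completion_time(16e9)  # "16G" hose (nominal, for illustration)
t32 = completion_time(32e9)  # "32G" hose

print(f"16G: {t16:.3f} s, 32G: {t32:.3f} s")
# The first bit still arrives after LATENCY_S either way; only throughput doubled.
```

The transfer term halves exactly, while the latency term is identical in both cases – which is the whole point of the kiddie-pool story.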
What you may not be aware of is that while FC is the transport mechanism, the commands and data riding on that FC transport are still carried via the SCSI protocol (FCP). Not that there’s anything wrong with SCSI and FCP, but they were designed with the assumption that the data would be stored on spinning disks. That was true – until now. With flash storage and solid state devices, translating the SCSI protocol to talk to a memory device means wasted steps and overhead. And if there’s one thing we can all agree on, it’s that overhead is bad. Get rid of the overhead and wasted steps, and things move faster. But you can’t just modify the SCSI protocol, because there are about a gazillion spinning disk drives still out there.
So enter the NVMe (Non-Volatile Memory Express) protocol. It’s a protocol standard written specifically to talk to solid state memory devices very quickly and with lower latency than SCSI. Back to the hose and the kiddie pool: using the special NVMe FC hose, the pool starts filling sooner than it did using the standard FCP hose. The effect is like increasing the pressure while still using the double-sized (32G) hose to push more water (data bytes) through and fill the pool (data file or record). As a result, the NVMe protocol is faster between the spigot (server) and the pool (storage).
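To put a toy model behind the overhead argument: suppose each I/O pays a fixed protocol-processing cost on top of its time on the wire. Shave that cost and you complete more I/Os per second on the very same link. Every microsecond figure below is a hypothetical placeholder chosen only to show the shape of the math – these are not measured FCP or FC-NVMe numbers:

```python
# Toy model: one outstanding 4 KiB I/O at a time, each paying a fixed
# per-command protocol cost plus its wire time. Smaller per-command cost
# means more I/Os per second on the same 32G link.

WIRE_TIME_US = 10.0        # time on the wire per I/O (assumed)
SCSI_OVERHEAD_US = 25.0    # per-command translation/processing cost (assumed)
NVME_OVERHEAD_US = 8.0     # leaner command set, less translation (assumed)

def iops(overhead_us: float, wire_time_us: float = WIRE_TIME_US) -> float:
    """I/Os per second with one command in flight at a time."""
    return 1e6 / (overhead_us + wire_time_us)

print(f"FCP (SCSI): {iops(SCSI_OVERHEAD_US):,.0f} IOPS")
print(f"FC-NVMe:    {iops(NVME_OVERHEAD_US):,.0f} IOPS")
```

Real NVMe also allows far deeper and more parallel queues than this one-at-a-time sketch, which compounds the advantage – but even the simplest version of the model shows why cutting per-command overhead matters.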
With the new Cisco 32G blade for the MDS 9000 series you can transport either SCSI or NVMe. You don’t have to keep track of NVMe-enabled FC ports or do anything else special as a storage administrator. As long as both the source and destination devices understand NVMe, that is the protocol they will use, giving you a substantially lower-latency storage interconnect.
One article I read compared doing the software upgrade on the host bus adaptors (HBAs) to enable NVMe to going into “ludicrous mode,” which I liked as a nod to both Tesla and Spaceballs the movie.
So let’s end with the same question we started with and that is, “Why is this Cisco 32G release a big deal?”
Well, with the prolific adoption of flash storage devices in the data center and the amount of data bandwidth they can drive, the network was starting to become the bottleneck in the overall architecture. By optimizing the transport mechanism, even more work can be done, and done more quickly. In my opinion, the combination of doubling the pipe size and supporting NVMe over FC (not to mention new monitoring capabilities) is a game changer for the storage networking realm.
Want to learn more about high performance, low latency storage options? Contact us to not only learn about solutions to help meet your needs, but actually kick the tires and ensure that a solution or architectural idea will work and perform in your environment before making the big investment.