The State of NVMe-oF in 2026
I've been following NVMe-over-fabrics (NVMe-oF) for the past seven or eight years. Back then, NVMe/FC was the hot new thing in high-performance shared block storage. The path to an all-NVMe future felt within reach as the storage vendors rolled out their subway maps showing how they had achieved end-to-end NVMe and no longer had to stop at the SCSI translation station. Imagine being able to ditch what was the USB of its day for something explicitly designed for storage!* Perhaps (definitely) optimistically, I believed there'd be a wholesale migration of everything to the new protocol over the course of a refresh cycle. Well, that hasn't really happened. Where have we ended up instead?
A brief history of NVMe-over-fabrics
First, some history. 2016 saw the finalized spec for NVMe over fabrics, which initially covered Fibre Channel and RDMA transports like InfiniBand and RoCE. The spec was extended in late 2018, when NVMe over TCP was ratified, and the storage OEMs started their work to bring the protocol to the masses. In early 2022, GA code releases from the array manufacturers hit the streets with support for NVMe over TCP.
I started my exploration of NVMe with NVMe over TCP as part of a code beta in late 2021 and have written about it several times since. From there, I got curious about actually testing it alongside NVMe over Fibre Channel and used our ATC to explore its capabilities. Nearly three years have passed since I last did a hot lap with NVMe/TCP at 25Gb/s and NVMe/FC at 32Gb/s, but my excitement for it hasn't faded. Just after I published my first article on NVMe over TCP, which was pretty basic, I started getting pinged by customers. Since then, I've spoken directly to dozens of customers about it, and hundreds of folks have read my blog posts. I won't claim to be an expert, but I do have opinions informed by where I sit in the storage world.
What our customers think
Let's talk about the spectrum of customer attitudes I've seen toward NVMe-oF.
- Wait and see: Some folks feel burned by FCoE, which, for a hot minute, was the future of data center fabrics. Certainly, FCoE isn't dead; it lives on in Cisco UCS, but I'm not aware of any storage systems shipping with FCoE frontend ports anymore. These customers express general interest but are waiting to see how it plays out.
- Ethernet is not to be trusted with storage traffic: Looking at IP fabrics, these customers ask, "Why would I ever replace my resilient and trustworthy Fibre Channel with…that?!" They have decent-sized FC fabrics today, and the thought of ever replacing them with Ethernet is a nonstarter. To them, I say, "That's okay." FC has a lot of burn-in time, performs well and is very well supported across every operating system. It also usually puts the fabric in the hands of the storage team, a level of control storage teams appreciate. It doesn't have to be an all-or-nothing strategy: IP SAN is an additional tool in the toolbox to use where it makes the most sense, and where you land on the spectrum between all-FC and all-IP is up to you. By the way, NVMe over Fibre Channel is real and does offer some benefits if you migrate.
- Let's do it! These customers are already running operating systems on the support matrix (a fully virtualized server infrastructure, say), so getting there is simply a non-disruptive storage migration. They tend to be facing a significant refresh and re-architecture, allowing an essentially greenfield buildout, which is the best-case scenario.
Everyone is cool with NVMe being the right storage protocol going forward. The bulk of customers expect NVMe over TCP to be the protocol for IP-based NVMe, and I agree with that stance. Like iSCSI, it doesn't require any special NIC hardware or switch configuration. Today, it's a good fit if you're running a supported operating system like ESX or Linux, including Linux-based hypervisors like Proxmox. It's a great fit if you want to use the same protocol both on-premises and in the public cloud. Linux and ESX are basically the only supported operating systems, but they make up a lot of the servers deployed in the world. Where the scales tip against NVMe/TCP is in-band authentication and in-flight encryption for our extremely security-conscious customers: not all storage platforms support both.
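To show just how low-touch the software initiator is, here's a minimal sketch of bringing an NVMe/TCP namespace onto a Linux host by driving nvme-cli from Python. The discover and connect commands are standard nvme-cli; the address, port and subsystem NQN are placeholder values I've made up for illustration, so substitute whatever your array actually presents.

```python
#!/usr/bin/env python3
"""Minimal sketch: discover and connect to an NVMe/TCP subsystem from a
Linux host via nvme-cli. Assumes nvme-cli is installed, the nvme-tcp
kernel module is available and the script runs as root. The address,
port and NQN below are placeholders, not real values."""
import subprocess

TARGET_IP = "192.0.2.10"   # hypothetical array data portal
TARGET_PORT = "4420"       # IANA-assigned default port for NVMe/TCP
SUBSYS_NQN = "nqn.2014-08.org.example:subsys1"  # placeholder subsystem NQN

def run(cmd):
    """Echo and execute a command, raising if it fails."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Load the software initiator -- an ordinary NIC is all that's required.
run(["modprobe", "nvme-tcp"])

# Ask the discovery controller which subsystems this portal exports.
run(["nvme", "discover", "-t", "tcp", "-a", TARGET_IP, "-s", TARGET_PORT])

# Connect; namespaces then show up as normal /dev/nvmeXnY block devices.
run(["nvme", "connect", "-t", "tcp", "-a", TARGET_IP, "-s", TARGET_PORT,
     "-n", SUBSYS_NQN])
```

That's the whole exercise: no HBAs and no zoning, just an ordinary NIC, an IP address and a couple of commands.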
The NVMe over Fibre Channel option
NVMe over TCP isn't the only option. In the customer attitude section above, you may have noticed a lack of interest in FC-NVMe. There was some initial interest (emphasis on interest, not adoption) in 2019 and 2020, but I haven't heard customers talking about it since, and the OEMs tell me protocol uptake is low. Why is this? I have my theories, but they're just that: theories. I think the performance juice isn't really worth the squeeze (see here for my experience), and when you add in potential compatibility issues, continuing with SCSI over FC is not slowing down corporate innovation. In my testing, the biggest advantage is in host CPU: moving to NVMe gives an appreciable amount of CPU back to the servers, particularly for small block workloads. Is average performance better? Sure, but overall I didn't see enough of a change in IOPS or MB/s to scream, "Migrate to NVMe tomorrow!"
Starting from its solid base, Fibre Channel continues to evolve. RDMA over Fibre Channel has been announced, which should improve performance by reducing buffer copies between the protocol layers on the host, and 128GFC products are expected to appear this year. What adoption of either will look like is anyone's guess. I don't see customers rushing to replace their 16G and 32G ports with 64G. Cost is a major factor, particularly the optics, so 128G will probably be similar, with deployment reserved for things like ISLs. Also, not all storage systems offer even 64G ports yet, so 128G is likely a few years away from mass adoption. FC has proven itself reliable and, as an air-gapped network, offers great security.
There is movement toward IP NVMe protocols, specifically NVMe over TCP, but it's taking a detour through iSCSI. Our customers are changing things a little more slowly, keeping SCSI but running it over IP rather than making a straight cut to NVMe/TCP, and they seem to be skipping NVMe/FC altogether. The cost per gigabit per second of IP switches is simply better than FC's, and it's enough of a draw to make the movement happen.
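To make the cost-per-gigabit argument concrete, here's a back-of-the-napkin sketch. Every price below is a hypothetical placeholder I've invented purely for illustration, not a quote; plug in your own numbers, but the shape of the result is what keeps pulling customers toward Ethernet.

```python
# Back-of-the-napkin cost per Gb/s of fabric bandwidth. All prices are
# hypothetical placeholders for illustration -- substitute real quotes.
ports = [
    # (port type, speed in Gb/s, assumed per-port cost incl. optics, $)
    ("32GFC",   32, 1400),
    ("64GFC",   64, 2600),
    ("25GbE",   25,  350),
    ("100GbE", 100, 1100),
]

for name, gbps, cost in ports:
    print(f"{name:>7}: ${cost / gbps:,.2f} per Gb/s")
```

Even with generous assumptions for FC, the per-gigabit math tends to land in Ethernet's favor, and that's before counting the larger pool of people who can run the network.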
My five-year outlook
Where do I think this will be in five years? I'm certainly no Nostradamus, but here's what I predict:
- Ethernet will eat into Fibre Channel's market share; I see it starting already. The cost of the switches, the number of people who know how to effectively manage Ethernet networks and the appeal of a ubiquitous, all-purpose data center fabric are too attractive to ignore. Plus, the cost per gigabit per second is far better with IP.
- For array-to-host traffic, NVMe/TCP will be the leader for greenfield deployments. Yes, the IP RDMA protocols, iWARP and RoCE, have efficiency advantages, but those come at the cost of requiring RDMA-capable NICs (rNICs) and specialized network configurations. I do think RoCE has a play for highly latency-sensitive workloads or controller-to-drive-enclosure connections where unique configs are workable, but general-purpose traffic will strongly favor NVMe over TCP.
- As customers start refreshing gear, they're going to target 25-100Gb/s networks for storage traffic. 25Gb/s seems to be more than adequate for most workloads today (see 'Tying it Together' here), so for all but the most demanding environments, these speeds have a long life ahead; the quick math after this list shows why.
- Fibre Channel is sticking around but will not be the global storage standard. My timeline here is uncertain. Mainframes use FICON and, given FC's overall security and reliability, will continue to use it for the foreseeable future to maintain the mainframe's never-say-die uptime.
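On that 25Gb/s point, the arithmetic is worth spelling out. Ignoring protocol overhead, a single 25Gb/s link moves roughly 3.1GB/s of payload, which translates to a very high IOPS ceiling at typical block sizes. A quick sketch:

```python
# Why 25Gb/s goes a long way: the theoretical IOPS ceiling of one link
# at common block sizes. Ignores protocol overhead and assumes the link,
# not the array or host, is the bottleneck.
LINK_GBPS = 25
bytes_per_sec = LINK_GBPS * 1e9 / 8   # ~3.125 GB/s of raw payload

for block_kib in (4, 8, 32, 64, 256):
    iops = bytes_per_sec / (block_kib * 1024)
    print(f"{block_kib:>4} KiB blocks: ~{iops:,.0f} IOPS per link")
```

Multiply that by multipathed ports per host and, for a typical virtualized estate, the link speed is rarely the constraint.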
Altogether, it's an interesting time in storage and IT in general. We see major deck reshuffling in the OS and application delivery space, significant interest in globally dispersed data and, of course, the topic of this article: storage fabrics. Whatever happens, it's an 'and' conversation. I believe there's a spectrum of fabric usage, from all-IP to all-FC; where you land will be uniquely yours. One thing is certain: NVMe is here and will take over from SCSI. I look forward to watching this space evolve over the next decade. For now, I suggest exploring the protocol; virtual machines and software initiators make this low-touch. You can use our ATC to try things out in a safe place: hands-on, completely hands-off or somewhere in the middle. If you want to have a deeper discussion about protocols and fabrics, feel free to connect directly with me and my team or your account team.
* SCSI was originally a versatile bus, supporting things like scanners, printers and audio samplers; as SCSI moved from original to Fast to Ultra and USB arrived, the only things still using SCSI were block devices.