My presentation at Interop in Las Vegas on May 11, 2011, focused on the protocols that will underpin converged storage networking in the future. My topic, assigned by Network Computing editor Mike Fratto, was “FCoE vs. iSCSI – Making the Choice.” Although this sounds like a grand competition between the two protocols, my take on the subject is very far from that idea. Rather than a battle, the rise of FCoE and iSCSI is part of a broader convergence of storage and data networking on Ethernet.
“The notion that Fibre Channel is for data centers and iSCSI is for SMBs and workgroups is outdated. Increases in LAN speeds and the coming of lossless Ethernet position iSCSI as a good fit for the data center. Whether your organization adopts FC or iSCSI depends on many factors like current product set, future application demands, organizational skill set and budget. In this session we will discuss the different conditions where FC or iSCSI are the right fit, why you should use one and when to kick either to the curb.”
I began my session by pointing out that I am neither a vendor nor a protocol cheerleader and don’t really have a horse in the race in terms of a transition to FCoE, iSCSI, InfiniBand, SAS, or any other protocol. Frankly, I don’t see this as a race, and I don’t care who wins, if it is one, as long as IT infrastructure progresses to a more flexible state.
“FCoE vs. iSCSI – Making the Choice” from Interop Las Vegas 2011
Converging on Convergence
The important aspect of any discussion of FCoE is not the protocol itself but the underlying shift away from specialized storage networks and toward convergence on Ethernet. iSCSI began this trend almost a decade ago, and the Ethernet roadmap leaves Fibre Channel in the dust.
I see three key elements converging to bring, if you’ll pardon the pun, convergence of data and storage networking:
- The wholesale adoption of Intel-compatible processing architectures
- A shift toward open systems (Windows and UNIX) for application processing
- The widespread adoption of IP as an internetworking protocol
None of these “trends” is surprising or even questionable: Intel-compatible open systems servers using IP dominate modern data centers.
Given this dominant processing architecture, Ethernet is a logical choice as an interconnect. No other network protocol even comes close to Ethernet’s market share, compatibility, and support.
Then we must consider the factors that drive convergence of networking protocols. After all, we have long seen a variety of different protocols in niches such as storage, voice, video, WAN, clustering, and other areas. But virtualization of servers, the need for consolidation to reduce port count and cabling, and a continuing thirst for better performance make convergence on a single protocol a logical step for these and other areas of IT infrastructure.
If we converge on Ethernet, much will change both inside and outside the data center. Server managers will see greater flexibility and mobility of virtualized servers and blades, as well as increased performance overall. Storage managers will shift from managing esoteric networking protocols to a focus on data management and array performance. But network managers will bear the brunt of the shift, with a wider sphere of influence and new headaches from workloads that do not behave like conventional LAN applications.
The Performance Picture
Turning back to storage networking, we see that one major driver for convergence is pure performance. Although Fibre Channel has an impressive roadmap, with performance doubling again and again, it can’t hold a candle to Ethernet. With historical leaps of an order of magnitude in performance, Ethernet will soon leave Fibre Channel well behind.
When iSCSI first appeared, it was hitched to fairly unimpressive Gigabit Ethernet even as Fibre Channel networks made a transition from 2 Gb to 4 Gb. But iSCSI made a quantum leap in performance this year, transitioning to 10 Gb Ethernet even as Fibre Channel networks moved to 8 Gb. iSCSI and FCoE will continue benefiting from Ethernet performance improvements in the coming years, transitioning to 40 Gb and 100 Gb. This will make 16 Gb and 32 Gb Fibre Channel look slow by comparison.
One area that is often overlooked in terms of performance is latency of I/O operations. Although iSCSI over 10 Gb Ethernet can carry 50% more data than 8 Gb Fibre Channel (thanks to more efficient encoding), it also benefits from drastically lower latency. It can handle 50% more packets than 8 Gb Fibre Channel or 10 times as many as Gigabit Ethernet. In other words, in a shared virtual environment, 10 Gb Ethernet allows more systems to get more work done in the same amount of time.
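For those who want to check the math behind that comparison, here is a rough sketch (in Python, purely for illustration) using nominal serial line rates and encoding overheads only, ignoring framing and protocol overhead:

```python
# A rough sketch of where the "50% more" figure comes from: nominal line rates and
# encoding efficiency only, ignoring framing and protocol overhead.

links = {
    # name: (serial line rate in Gbaud, encoding efficiency)
    "1 Gb Ethernet":  (1.25,    8 / 10),   # 8b/10b encoding
    "8 Gb FC":        (8.5,     8 / 10),   # 8b/10b encoding
    "10 Gb Ethernet": (10.3125, 64 / 66),  # 64b/66b encoding
    "16 Gb FC":       (14.025,  64 / 66),  # 64b/66b encoding
}

usable = {name: rate * enc for name, (rate, enc) in links.items()}
for name, gbps in usable.items():
    print(f"{name:15s} ~{gbps:5.2f} Gb/s usable")

gain = usable["10 Gb Ethernet"] / usable["8 Gb FC"] - 1
print(f"10 GbE over 8 Gb FC: {gain:+.0%}")   # roughly +47%, i.e. about 50% more
```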
Ethernet Enhancing Data Centers
But performance is only half the story of converged Ethernet. It also simplifies server connectivity, reducing the all-too-frequent situation where the configuration and location of servers are dictated by cable availability rather than application need. This will change the face of the data center, encouraging the use of blade servers, virtualization, and flexible (dare I say “cloud”?) infrastructure. It will encourage mobility of machines, especially virtual ones, and demand new networking protocols like OpenFlow.
Ethernet required a serious upgrade to handle this workload, however. Although iSCSI works fine over just about any network, thanks to TCP/IP, FCoE and similar protocols require flow control and guaranteed lossless data delivery. This led to the development of the Data Center Bridging (DCB) protocols: priority flow control, bandwidth management (enhanced transmission selection), and congestion notification. With the first two of these now widely available and the third following shortly, Ethernet is ready to take center stage.
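To make the lossless piece a bit more concrete, here is a minimal sketch of the payload of a priority flow control pause frame as defined in IEEE 802.1Qbb. Unlike the old link-level PAUSE, it names the specific priority classes to stop, so storage traffic can be made lossless while ordinary LAN traffic keeps flowing. The values below are illustrative only:

```python
# Minimal sketch of a per-priority PAUSE (802.1Qbb PFC) MAC Control payload.
# Field layout follows the published spec; the priority choice and timer value
# here are illustrative, not a recommendation.
import struct

PFC_OPCODE = 0x0101      # MAC Control opcode for priority-based flow control
PAUSE_QUANTA = 0xFFFF    # pause time, in units of 512 bit-times

def pfc_payload(paused_priorities):
    """Build a PFC payload that pauses only the given 802.1p priorities."""
    enable_vector = 0
    timers = [0] * 8
    for prio in paused_priorities:
        enable_vector |= 1 << prio   # one enable bit per priority class
        timers[prio] = PAUSE_QUANTA
    return struct.pack(">HH8H", PFC_OPCODE, enable_vector, *timers)

# Pause only priority 3, a common choice for the FCoE/lossless traffic class;
# the other seven priorities are untouched.
print(pfc_payload([3]).hex())
```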
FCoE vs. iSCSI
With discussion of convergence out of the way, we can finally talk about making the choice between iSCSI and FCoE. There are four main reasons to choose one protocol or the other:
- Data center strategy
- Performance needs
- Desire for compatibility
- Cost concerns
Each of these is a valid reason to pick FCoE or iSCSI in any given situation, and none is a drop-dead decision-maker. There are cases where FCoE will be cheaper than iSCSI and vice versa, for example.
Regardless of the choice between these two protocols, one element remains the same: SCSI. Nearly every enterprise block storage protocol is based on SCSI, and it is one of the seminal technologies that enabled the development of enterprise storage as an industry. Every enterprise block storage protocol, including FCoE, iSCSI, SAS, and plain old Fibre Channel, is really a transport for SCSI.
This makes the selection of protocol less relevant to operating systems and applications, since all will “see” storage the same way. But there are major differences among the three SAN protocol choices (FC, FCoE, and iSCSI) in terms of routability, the availability of host initiator hardware and software, maturity, and the selection of management tools.
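As a small illustration of how little the transport matters to the layers above, here is a sketch (Python, purely illustrative) of a SCSI READ(10) command descriptor block. These same ten bytes ride inside an iSCSI PDU, an FC frame, an FCoE frame, or a SAS transfer; only the wrapper around them changes:

```python
# The SCSI command itself is transport-agnostic: the 10-byte READ(10) CDB built
# here is what iSCSI, FC, FCoE, and SAS all carry. The LBA and block count below
# are arbitrary example values.
import struct

def read10_cdb(lba: int, blocks: int) -> bytes:
    """Build a SCSI READ(10) Command Descriptor Block."""
    opcode = 0x28    # READ(10)
    flags = 0x00
    group = 0x00
    control = 0x00
    return struct.pack(">BBIBHB", opcode, flags, lba, group, blocks, control)

cdb = read10_cdb(lba=2048, blocks=8)   # read 8 blocks starting at LBA 2048
print(cdb.hex())                       # identical bytes regardless of SAN transport
```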
iSCSI has a more robust support matrix than Fibre Channel over Ethernet, with hardware and software drivers available for nearly every operating system. It is widely supported with mature storage systems available from nearly every vendor. Green field SAN designs with no existing Fibre Channel infrastructure should look no further: iSCSI is a great choice for new storage networks.
The selection of FCoE, on the other hand, is more about evolution from Fibre Channel in enterprise storage networks. There are three paths for Fibre Channel architects: they can continue with end-to-end Fibre Channel, add Ethernet and FCoE at the edge, or attempt to build out an end-to-end FCoE SAN. This last option has only recently become possible and is by far the least popular model for Fibre Channel architecture at present, but it will become dominant eventually.
Making the Choice
There are good reasons and bad to pick one protocol over the other, and none rises to the level of religious conviction one might see perusing blogs and tweets on the subject.
FCoE is an evolutionary transition for organizations that already have a large installed base of Fibre Channel equipment, tools, and skills. These environments can incrementally adopt Ethernet as an edge protocol while they continue to leverage the enterprise Fibre Channel storage arrays they already own. Strategically, FCoE makes perfect sense for users of “blocks” or “stacks” from vendors like Cisco, EMC, HP, and NetApp. But FCoE remains somewhat unproven, and some supporting protocols, like congestion notification and so-called Ethernet fabric technology, are immature at best when it comes to interoperability.
One common refrain when comparing FCoE and iSCSI is the efficiency of the protocols. Packaging SCSI in TCP and IP can’t be efficient, can it? But an analysis of the protocols reveals that absolute bit efficiency is very similar between Fibre Channel, FCoE, and iSCSI. Tests by Dell’s TechCenter and others show that iSCSI is fairly efficient in terms of data throughput and CPU utilization as well.
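Here is a rough back-of-the-envelope version of that comparison. The header sizes are nominal, and the simplifying assumptions (VLAN-tagged Ethernet, full-sized frames, one iSCSI basic header per jumbo frame) are mine, not Dell’s:

```python
# Approximate per-frame payload efficiency for the three block protocols.
# Header sizes are nominal; this ignores encoding overhead (covered above),
# acknowledgements, and partial frames.

ETH_WIRE = 8 + 14 + 4 + 4 + 12   # preamble + header + VLAN tag + FCS + inter-frame gap

def efficiency(payload: int, overhead: int) -> float:
    return payload / (payload + overhead)

# Native FC: 2112-byte payload; SOF + 24-byte FC header + CRC + EOF is about 36 bytes
fc = efficiency(2112, 36)

# FCoE: the same FC frame (header + CRC = 28 bytes) plus a 14-byte FCoE header,
# a 4-byte EOF/padding trailer, and the Ethernet wire overhead
fcoe = efficiency(2112, 28 + 14 + 4 + ETH_WIRE)

# iSCSI in a 9000-byte jumbo frame: IP (20) + TCP (20) + iSCSI basic header (48)
# inside the frame, plus the Ethernet wire overhead outside it
iscsi_payload = 9000 - 20 - 20 - 48
iscsi = efficiency(iscsi_payload, 20 + 20 + 48 + ETH_WIRE)

for name, e in [("FC", fc), ("FCoE", fcoe), ("iSCSI (jumbo)", iscsi)]:
    print(f"{name:14s} ~{e:.1%}")   # all three land within a few points of each other
```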
Stephen’s Stance
iSCSI is an excellent choice in situations where Fibre Channel investment is nonexistent or badly in need of wholesale upgrade. It will continue to grow based on ease of use, low cost, high performance, and widespread support, and the transition to 10 Gb Ethernet could not be simpler. FCoE, on the other hand, is likely to take over in high-end enterprise shops. It is relentlessly promoted by major vendors, and it seems that they will force the upgrade eventually. But some areas are still not ready for prime time, and buyers should beware of grandiose promises at this point.
In counterpoint, one might ask why we chose Ethernet at all. It required much work, and unnatural acts like DCB, to prepare Ethernet to become the dominant protocol for convergence. Why not use InfiniBand instead, since it already works and offers widespread implementation, excellent performance and scalability, interoperability, and hardware availability? Price is one concern, but the major factor is far more basic: no one doubts that Ethernet will eventually ascend and overcome its obstacles. It is a foregone conclusion.
In retrospect, many alternative protocols might have been better suited to convergence, including ATM and even Token Ring. Although the topic of Fibre Channel over Token Ring (FCoTR) brings a smile to the faces of network and storage nerds everywhere, we all expect that Fibre Channel over Ethernet (FCoE) and iSCSI will rule the day.
Photos by Peter Tsai
Watch the presentation from Interop (apologies for the poor camera angle and sound!)
Dmitri Kalintsev says
Stephen,
Thanks for the presentation – nice to have it all in one place.
I have a question: you’re suggesting that “Green field SAN designs with no existing Fibre Channel infrastructure should look no further: iSCSI is a great choice for new storage networks.” What I’m reading elsewhere, however, is that one major area where iSCSI lacks is management tools; an iSCSI infrastructure with a sizable number of targets and initiators quickly becomes difficult to manage.
Would you care to comment on this?
Then there’s a second area to keep in mind when going “all in” on iSCSI: storage administrators will have to rely 100% on their network counterparts for zoning and partitioning (IP addressing, routing and/or VLANs). I see this as potentially problematic, because (at least today) very often “network guys don’t ‘get’ storage” (and I’m saying this as a long-time “network guy”). At least with FCoE, storage guys remain largely in control of their domain, as an FCoE VLAN on converged trunks simply appears as an inter-domain link in their FC network, and the FCF context in the FCoE switches looks like just another FC switch to their FC management tools.
Would love to hear what you think of this. Thanks 🙂
Abdul Rasheed says
Thought-provoking discussion indeed! FC could still be the choice for another 5 years for those already invested widely in it. But I do see your point: iSCSI had a good comeback after losing the initial race.
Jon Hudson says
Great presentation! Couple points
Between 40Gb FCoE/iSCSI and 32Gb FC I think price will end up being the issue. Assuming that 40Gb Ethernet transceivers come down fast enough, you will see the vertex there. 100G I just don’t see coming down fast enough.
This is the only statement that bothers me:
“One area that is often overlooked in terms of performance is latency of I/O operations. Although iSCSI over 10 Gb Ethernet can carry 50% more data than 8 Gb Fibre Channel (thanks to more efficient encoding), it also benefits from drastically lower latency. It can handle 50% more packets than 8 Gb Fibre Channel or 10 times as many as Gigabit Ethernet. In other words, in a shared virtual environment, 10 Gb Ethernet allows more systems to get more work done in the same amount of time.”
In my tests I have never been able to get iSCSI over ~8.6Gb. With FCoE I’m able to get 9.7Gb (9734.734532 to be specific). While 8Gb FC will top out at about 800MB/s (or 6.5Gb to keep values equal), 16Gb FC uses the same 64/66 encoding as 10Gb Ethernet. So you end up with approximately:
8Gb FC= ~6.5Gb/s
10Gb iSCSI = ~8.6Gb/s
10Gb FCoE = ~9.7Gb/s
16Gb FC = ~12.8-13.6Gb/s (vendor dependent)
So while I agree 10Gb Ethernet has a head start over 8Gb FC, it should; it’s a bigger pipe. But then 16Gb FC jumps it again.
But what I’m really asking is for the math that gets you 50% gain between 10Gb iSCSI and 8Gb FC. I want to understand how you get there. Not saying it can’t be true, just want to see the math 🙂
Jon Hudson says
OHHH!!! I get it. You are saying that
9.7 - 6.5 = 3.2
50% of 6.5 is 3.25, so if you add 50% of 6.5 to 6.5 you get 9.7.
I was thinking in the other direction: take 35% of 9.7 away and you get 6.5.
However, if you use that math, then 16Gb FC is a bigger jump over 10GbE (at 3.9) than 10GbE over 8Gb FC (at 3.2). But I’m sure that will also be put as a percentage to the benefit of whichever company publishes it 🙂
For me though, the latency is where things get really interesting =)
Jon Hudson says
Ok, slide 35 is an issue. That is a VERY slanted slide. Why on earth did that Dell guy (who seems to have a real thing for iSCSI) show iSCSI at 1.5 & 9k MTU and FCoE at only 2.5K MTU? Every setup of FCoE I have seen is at 9k MTU.
Can someone comment on what the fastest iSCSI they have seen is? Best I have seen is about 8.6Gbps, but I found a credible source online claiming 9.28Gbps.
Dmitri Kalintsev says
Here’s something from my links stash: http://www.unifiedcomputingblog.com/?p=108 , comparison of native FC vs. FCoE overheads.
Etherealmind says
Another problem with iSCSI in the early 2000s was that the convergence of IP after a network failure wasn’t suitable for storage traffic. Although the burst of iSCSI startups of the time hoped that this would be fixed, the concept of a storage network was so niche and unnecessary that none of the networking vendors could find a market for it. In those days, an outage of a minute or two was acceptable.
Now that data is far more sensitive to loss or outage, customers are willing to invest in solutions and support new technologies that work faster. And this is what led to the foundations of the storage industry and the silicon that allowed Fibre Channel to exist, and now converged networking.
In a sense, Fibre Channel and iSCSI were solutions to problems that didn’t exist. In the end, Fibre Channel was more successful in the very high end of the marketplace with a huge price tag, and no one wanted a storage network enough to make it cheap.
Today, it’s different. Storage is cheap enough that it can be stacked into big piles of spinning rust and shared with high-performance silicon chips, chips that didn’t exist then and were expensive to develop.
FCoE remains a way to transition legacy FC into modern Ethernet architectures, as you say here. And iSCSI remains underinvested by the manufacturers and needs urgent development work. I wonder when the storage industry will start to move faster and solve its own problems. Instead of complaining about iSCSI, why not develop something new and more effective?
The lack of innovation by the storage vendors in networking is astounding. Surely they should be driving this, not Cisco?
Fred says
Speeds and feeds are only one point when considering future storage architectures. I continue to get miffed by IP and data networking “experts” who think that they know the first thing about storage networking. The one key discussion that continues to be missed is the requirement for availability of mission-critical applications. Whether iSCSI or FC, dual redundant independent architectures will continue to be a requirement for the world’s enterprise storage architectures. Granted, it may make sense to move some applications of shared storage to an Ethernet transport, but until security of core data and its availability are resolved to avoid DoS on IP solutions, FC, iSCSI and FCoE will be built in the future as they are today.
So let’s move on from the idea that the world will be all Ethernet any time soon. I agree with the latest postings that state that price is king. 100 GbE is a long way from being a practical use case to compete with 16 Gb FC.
Dmitri Kalintsev says
Greg,
FCoE is *not* about “transitioning of legacy FC into modern Ethernet architectures”. It is about convergence – NIC/HBA, cabling and switching equipment.
Here’s an excellent post by J Metz, which explains this very well:
http://blogs.cisco.com/datacenter/converged-networks-vs-mononetworks/
sfoskett says
Regarding scalability of iSCSI vs. FCoE SANs, I admit that FC tools win out. Most large SANs are FC-based, and FCoE leverages the same management tools, technologies, and processes. This is a major “win” for FCoE in large enterprise, IMHO, along with the fact that these environments already have a sizable FC investment to protect. One audience member asked this same question, and this was my answer. But I also pointed out that the single largest block storage network I ever saw was an iSCSI SAN!
Indeed, network guys don’t always “get” storage. This is an issue for both iSCSI and FCoE in my opinion, since both will likely leverage network guys a little or a lot. It’s possible that storage folks will retain SAN control with FCoE, but this is likely due to two factors not having much to do with the protocol:
1) FCoE will likely be deployed in large shops with enough talent to allow a storage person to remain in control of the FCoE fabric and tools
2) Network guys won’t (initially at least) be willing/able to learn the new tools and concepts required to take over the SAN
Long term I do see network guys taking over the SAN regardless. That’s convergence. If this doesn’t happen, I begin to question the point of converging at all…
What do you think?
sfoskett says
The performance issue isn’t in the initiator or the wire, it’s in the target. Truthfully, today’s monolithic FCoE targets are faster (in my limited experience) than today’s 10 GbE iSCSI targets by a large margin.
Microsoft and Intel showed wire-speed (1,174 MB/s) and 1 million IOPS for iSCSI over 10 GbE. It was a definite “lab queen” test since no iSCSI target can push that sort of throughput, but there you have it.
More info:
http://blog.fosketts.net/2010/03/19/microsoft-intel-starwind-iscsi/
http://download.intel.com/support/network/sb/inteliscsiwp.pdf
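For the curious, the arithmetic behind calling that “wire speed” works out roughly as follows (the usable-throughput estimate for 10 GbE is a rough assumption on my part):

```python
# Quick sanity check of the "wire speed" claim. The ~95% usable-payload figure
# for 10 GbE with large frames is an assumption, not a measured value.
reported_mb_per_s = 1174                      # MB/s reported in the Microsoft/Intel test
achieved_gbps = reported_mb_per_s * 8 / 1000  # ~9.4 Gb/s on the wire
usable_gbps = 10 * 0.95                       # rough usable rate after TCP/IP/iSCSI overhead
print(f"{achieved_gbps:.1f} of ~{usable_gbps:.1f} Gb/s usable "
      f"({achieved_gbps / usable_gbps:.0%} of the practical line rate)")
```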
I’ll talk to Dell about their assumptions on overhead. Yes, they love iSCSI. For good reason.
Me? I don’t love anything! 🙂
Dmitri Kalintsev says
Stephen,
Thanks for this. In response to Greg’s comment below, I included a link to a post by J Metz, who described Cisco’s view of the convergence of data and storage networks, and I (at least at this point) tend to agree with his position. Convergence is of hardware, not of architectures and software. FCoE is not about discarding FC’s way of doing things; it is about doing the same things that we do today, but with fewer cards / cables / boxes, and thus less physical space, power and cooling. Therefore the point of convergence is the reduction (and better utilisation) of physical resources. The fact that FCoE gives access to Ethernet’s speed roadmap is more of a bonus than a primary reason.
When I think of it, it does make sense, at least to me.
How does that sound?
sfoskett says
My advice when building an Ethernet SAN remains the same: It’s a SAN, not a LAN, so build it that way! The most common fault I see in iSCSI “SANs” is the use of an underpowered desktop or edge Ethernet switch. Build a decent network with decent components and architect it right (with MPIO) and you’ve got a real challenger to FC. But throw something together and you’ve got nothing.
cjcox says
Well… this presentation IS (without any doubt) a rah-rah session for storage over one of the worst things imaginable, Ethernet and IP. The presenter is right that nobody is going to rip out the awful network protocol we’re stuck with today. With that said, I’ve NEVER seen ANY 10Gbit iSCSI solution outperform an 8Gbit FC solution.. NEVER. And with that, I’ll say that all storage anomalies were on the iSCSI side and NOT on the FC side with regards to variables. Not saying that with the “right” choice of switches and NICs, etc… that if those mysteries can be understood, the reliability issues won’t go away.. it’s just that instead of dealing with a limited number of vendors and variances, with Ethernet you’ve now entered a world with orders of magnitude more variables from vendor to vendor… even if Ethernet and IP are “better defined”. So yes, there are some variances vendor to vendor on the FC side, easily avoided by choosing your vendor…. maybe the same thing is required for iSCSI, etc…. my point is that even that is indeterminate. I mean, there are a gazillion offerings just from ONE vendor in the Ethernet space, and I’ve used more than a few lemons from Intel (picking on a large vendor that most think doesn’t make mistakes).
So… even with the sloppiness of IP for storage.. I suppose the best argument for the future is that if you keep scaling up and FC does top out, even with the mess, ultimately IP wins.. who cares if it’s a mess, we have enough (to use the presenter’s words) “over specification” on the bandwidth side to make up for the mess.
But today.. it’s a mess…. tomorrow… I may change my mind. I was sort of hoping that FCoE would be “the answer”. That is, get rid of the storage-over-IP mess… but in all fairness it suffers from some of the very same problems. So… FC works. FCoE works if you believe in an all-Cisco environment. iSCSI works because it’s what your grandma set up on crap equipment from Best Buy. If you want something that performs well TODAY and is reliable.. IMHO, FC is still the only choice.
FC SANs ARE EASY TO MANAGE… sheesh… tons easier to manage than iSCSI and definitely FCoE (which may be viewed as the worst of all worlds).
Limitations? Sure… I mean if you want your DIRECT attached storage riding over unreliable routed networks.. iSCSI is your answer. But my guess is that running block devices over a flaky connection might not be the wisest design… who knows, maybe I’ll change my mind on that someday.
FC: simple, reliable, and the best performance when talking about local storage area networks.
sfoskett says
Wow, what a comment in support of FC! I can’t say I agree with it all, but I’m definitely not opposed to FC generally, even though it seemed like it in this debate. But thanks for reading, watching, and commenting!
garegin says
That’s the power of iSCSI: you can run it on commodity hardware for SME uses. iSCSI even makes sense on a home network, because you are going to have all sorts of issues if you stick your application data on a file share. iSCSI is truly one of the best things that happened in the storage/networking world, right next to SSDs and ZFS.