To hear this week’s storage industry news reports, one might think that Wagner’s fat lady came to Storage Networking World (SNW), singing her song as the iSCSI world collapses. Storagebod wonders what iSCSI’s death will look like. Chris Mellor at The Register says “Game Over” as NetApp, QLogic, Emulex and VMware join EMC and Cisco in singing the praises of Fibre Channel over Ethernet (FCoE). Mellor suggests that the protocol will devalue Dell’s EqualLogic investment, as if HP’s acquisition of LeftHand wasn’t enough, even as fellow Register-ite, Bryan Betts disagrees.
But The Register didn’t invent the “FCoE kills iSCSI” meme – it’s just natural to imagine that these two protocols would be in a fight to the death. And if it’s a duel, then this year’s SNW conference would seem to be the first volley, as EMC introduced an FCoE Connectrix switch (based on Cisco), NetApp announced the first native FCoE array, and everyone qualified Emulex and QLogic adapters. However, despite these announcements, it’s way too early to bury iSCSI!
FCoE and iSCSI are similar in concept:
- Both rely on Ethernet physical connectivity
- Both transmit SCSI packets
- Both are aimed at data center users
But there are major differences as well:
- iSCSI is routable in an IP network
- iSCSI can use IP services like IPsec
- Software initiators can give iSCSI connectivity to any server, regardless of hardware
- FCoE will require converged network adapters (CNAs), while iSCSI can run on any Ethernet adapter
- FCoE will start at 10 Gb, while iSCSI can operate at just about any speed
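The routability difference above follows directly from the layering: iSCSI wraps SCSI in a TCP/IP stream, so any IP router can forward it, while FCoE drops a complete Fibre Channel frame straight into an Ethernet frame with no IP layer at all. A quick sketch of the two encapsulations (header sizes are approximate and for illustration only):

```python
# Approximate per-frame encapsulation overhead, for illustration only.
# iSCSI: SCSI command wrapped in an iSCSI PDU, carried over TCP/IP/Ethernet.
ISCSI_STACK = {
    "Ethernet header + FCS": 18,
    "IPv4 header": 20,
    "TCP header": 20,
    "iSCSI basic header segment": 48,
}

# FCoE: a full Fibre Channel frame wrapped directly in an Ethernet frame.
FCOE_STACK = {
    "Ethernet header + FCS": 18,
    "FCoE encapsulation header": 14,
    "FC frame header": 24,
    "FCoE trailer (EOF)": 4,
}

def overhead(stack):
    """Total protocol bytes wrapped around each SCSI payload."""
    return sum(stack.values())

if __name__ == "__main__":
    print("iSCSI overhead:", overhead(ISCSI_STACK), "bytes")
    print("FCoE overhead:", overhead(FCOE_STACK), "bytes")
    # Because the iSCSI stack includes an IP header, any router can forward
    # it; FCoE frames carry no IP header, so they can never leave the
    # local Ethernet segment.
```

Note that the FCoE stack has no IP or TCP layer at all, which is exactly why it needs lossless Ethernet underneath it and why it stops at the edge of the Layer 2 domain.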
Looking at this list, one might realize that FCoE is really a competitor for faster-than-4 Gb Fibre Channel. It’s not just a data center product; it’s an enterprise (read: high-end and expensive) product, and that’s exactly where it will flourish. I have no doubt that Cisco and Brocade will successfully transition their Fibre Channel product lines to FCoE, and that QLogic and Emulex will sell a gazillion CNAs. But what about the rest of the market?
VMware’s adoption has shown that there is a taste for shared, networked storage outside the rarefied budgets of the enterprise world. So far, no storage protocol has won the midrange and virtual server market, with Fibre Channel, iSCSI, and NFS duking it out alongside internal SAS and SATA and the odd InfiniBand and external SAS solution. Although CNAs and FCoE ought to work fine in the virtual data center, not everyone will have a taste for them. There will always be plenty of folks who just want inexpensive external networked storage arrays, and iSCSI is the best thing they’re likely to see any time soon!
And iSCSI isn’t done evolving. Folks like Mellor, Chuck Hollis, and Storagebod are lauding FCoE at 10 gigabit speeds, but seem to forget that iSCSI can run at that speed, too. It can also run on the same CNAs and enterprise switches. And although wide(r)-area SANs aren’t common, I’m beginning to see some interest in leveraging the routability and other advanced features of IP in the storage world. iSCSI still has some cards to play! And the non-enterprise space isn’t nearly as awful as some make it sound – it is and will remain a bigger, more diverse market than the high end, and there are some serious buyers that will never get into FCoE.
Right now, the SAN world is expanding like it hasn’t done for years. iSCSI deployments are accelerating, growing the whole market. Sure, FCoE will probably completely replace old-school Fibre Channel over the next five years. But it will have to share the market with the now well-established iSCSI. It looks to me like Dell and HP made smart investments.
Update: More coverage on the topic:
- Doug Rainbolt from Alacritech is skeptical of the drivers for FCoE
- David Dale from NetApp feels that FCoE is unlikely to intrude on the iSCSI “sweet spot”
See my posts on Gestalt IT for similar enterprise IT infrastructure commentary
http://cdplayer.livejournal.com/ says
Hi Stephen,
Have you seen any research done to compare how FCoE fares against iSCSI over long distances (100s of kms/miles)? It is absolutely possible today to get a long distance Ethernet service, so the question is not an empty interest.
Also, while FCoE does require 10GE UNI, does it also require full 10GE connectivity in between (I would expect not)?
Thanks 🙂
Stephen says
Despite what Wikipedia says, I don’t know of any reason that FCoE will not work over any old Ethernet link, provided you can physically cable it up through an appropriate switch. So you ought to be able to configure a Cisco Nexus to have a 10 GbE FCoE port and a crazy Ethernet/WAN port in the same VLAN, and it should work. But this is definitely not what the manufacturers intend. Similarly, it ought to be possible to rig up a software initiator that allows any Ethernet NIC to act as an FCoE HBA (just like the iSCSI software initiators) and use the protocol on your D-Link desktop switch.
But I will eat my hat if any vendor ever supports any configuration like this. This is an enterprise protocol through and through, and the vendors intend to make big money on it, not have it be commoditized like iSCSI.
I imagine we will see FCoE MAN and WAN, but only through special hardware and in special vendor-approved circumstances. And of course, iSCSI works pretty well over WAN links right now…
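If you did want to try the unsupported setup I described, the switch side might look something like this on a Nexus (purely hypothetical; the interface numbers and VLAN/VSAN IDs are invented, and no vendor documents or supports anything like it):

```
! Hypothetical, unsupported NX-OS sketch -- illustration only
feature fcoe
vlan 100
  fcoe vsan 100                  ! map the FCoE VLAN to a VSAN
interface vfc1
  bind interface ethernet 1/1    ! virtual FC interface on the 10 GbE FCoE port
  no shutdown
interface ethernet 1/2
  switchport access vlan 100     ! the "crazy" WAN-facing Ethernet port in the same VLAN
```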
http://cdplayer.livejournal.com/ says
Thanks for that. From what you say, is it safe to assume that no comparison data yet exists for FCoE vs iSCSI over long distances?
Also, from my limited understanding, I would expect that the main usage for iSCSI over WAN links would be asynchronous replication rather than primary storage attachment. Is that so?
Stephen says
You are correct. I know of no one who has tried FCoE over WAN links, and iSCSI is often used for replication (by the arrays themselves) but not often for wide-area connectivity. Storage isn’t all that forgiving to slowdowns and outages, after all…
http://cdplayer.livejournal.com/ says
Good, at least this means I’m on the right track of thinking. 🙂
Thanks again 🙂
http://technorati.com/people/technorati/storagebod says
I didn’t actually forget that iSCSI can run at 10 GbE speeds; obviously it can run at the speed of the underlying network. I’ve run iSCSI over wireless before (I’m nuts)!
I think iSCSI will survive quite happily in its market sector; I think FCoE will probably end up dominating in the Enterprise Data Centre as the protocol of choice for block, but I’m not expecting it to be huge in the SMB market.
I think FCoE will happen fairly quickly in the DC tho’; especially in the form of top of rack solutions. But iSCSI fatally wounded, not a chance at the moment.
http://technorati.com/people/technorati/storagebod says
And I forgot, http://www.open-fcoe.org if you want to play with a software initiator. I’ve not done so myself but I might have a go (told you, I’m nuts).
Stephen says
I’m just as nuts! The first thing I did when I got one of the prototype LeftHand arrays way back when was to plug it into a Linksys router and mount a LUN over Wi-Fi! 🙂
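For anyone who wants to follow Storagebod’s link and play with the open-fcoe software initiator, the Linux setup is roughly as follows (a sketch only; “eth2” is a placeholder interface name, and I haven’t tried this myself):

```shell
# Rough sketch of bringing up the open-fcoe software initiator on Linux.
# "eth2" is a placeholder for your DCB-capable 10 GbE interface.
dcbtool sc eth2 dcb on                     # enable Data Center Bridging on the NIC
dcbtool sc eth2 app:fcoe e:1               # advertise the FCoE application priority
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2   # per-interface FCoE config from the template
service fcoe start                         # start the fcoe daemon
fcoeadm -i                                 # list FCoE interfaces and discovered targets
```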
Gilles Chekroun says
One main difference between iSCSI and FCoE is the fact that FCoE still relies on FCP and so is seamless to FC networks. SAN administrators really like it because nothing changes in the management tools for SAN zoning, multipathing, etc.
FCoE is still FC, while iSCSI is NOT.
That makes a big difference, and in my opinion it’s one of the main reasons iSCSI has such a small market compared to FC.
Gilles
Mike says
iSCSI is targeting small businesses, while FCoE and Fibre Channel are targeting the enterprise.
You can see FCoE and iSCSI implementation examples at http://fcoe.ru