Iomega has been a staple of the desktop computing environment for decades, but the company’s products have never been quite at home in even small corporate data centers. That changes today with the introduction of the iSCSI StorCenter Pro ix4-200r. As of now, EMC’s SOHO storage subsidiary is a serious challenger in the small business and entry-level VMware ESX storage market.
It might look like the existing NAS 200rL, but the ix4-200r sports upgraded hardware and a new rev of EMC’s LifeLine storage software. This unit packs a serious punch, boasting full iSCSI target support for servers running Windows or Linux (or anything else with an iSCSI initiator) in addition to NFS, SMB, media streaming, print services, and just about every other protocol.
Although both Iomega and VMware are under the EMC corporate umbrella, it was a surprise to find that the ix4-200r is certified compatible with ESX using both iSCSI and NFS right out of the gate. This is the only inexpensive storage system to wear a VMware badge, and this alone will likely make it a fixture in small offices and VMware labs. The desktop StorCenter ix4-100 and StorCenter ix2 are already widely used in these environments even without iSCSI, after all. The ix4-200r provides a complete SAN-in-a-box, supporting multiple NAS and iSCSI shares with dynamic allocation of the internal RAID-5 protected storage.
Although aimed at the office, the ix4-200r retains the vast set of LifeLine capabilities we’ve seen in Iomega’s other offerings. This includes media streaming for UPnP (Twonky) and iTunes (Firefly), remote access, Active Directory support, and print services. The new unit even packs the more unusual Axis video surveillance capture capability. It sports two USB ports on the back and one on the front for expansion, data import, backup, or printers as well. Probably the best software feature is EMC’s Retrospect backup client, which was recently updated on the Mac platform.
The ix4-200r starts at just $1,799 (list) for 2 TB, and I expect resellers to dip well below that number. For comparison, Amazon currently sells the smaller non-iSCSI desktop 2 TB ix4-100 for $675 and the 1 TB ix2 for $268, and I’ve seen each for much less. I expect a street price of $1,600 for the 2 TB rackmount unit – competing products from Buffalo and Netgear are priced and marked down similarly. The 4 TB model is priced $1,000 higher, perhaps unrealistically high given that the only difference is the use of 1 TB hard drives instead of the 2 TB model’s 500 GB drives. For comparison, Drobo just introduced their limited single-server 8-bay iSCSI DroboPro at $1,750 configured with four 500 GB drives. But none of these alternatives boast a spot on the ESX compatibility list, and I suspect this may be a deciding factor for many. Note that you can’t buy an ix4-200r with fewer than four hard drives, though the drives are easy to replace.
Iomega was kind enough to give me a preview of the ix4-200r at their offices, and I came away impressed by the new array and the company in general. They have a solid vision of the needs of the small office and are hard at work on products to meet them. Although the iSCSI support is not coming to the company’s other LifeLine-powered systems (the ix2, ix4-100, and Home Media) at this point, I would not be at all surprised to see it become a staple in future networked storage systems. A large gap remains below the EMC CLARiiON range, so I suspect that larger Iomega systems are on the way as well. As a potential buyer, I’d like to see Windows logo qualification, and Hyper-V support would be super as well. And as a Mac user, I’d love to see Time Machine support and for Iomega to follow Drobo by offering a free iSCSI initiator – a guy can dream, right?
Updates and clarifications:
- Iomega has added the StorCenter Pro ix4-200r to their web site alongside the non-LifeLine StorCenter Pro 200rL
- The ix4-200r will not be released until April 22, 2009
- The new rackmount ix4-200r is listed at $1799.99 for 2 TB and $2799.99 for 4 TB. I don’t expect to see either sell for less than a few hundred off those list prices
- The ix4-200r has been listed in the VMware ESX compatibility guide for a few days now for both iSCSI and NFS connectivity – I’m surprised no one noticed!
- Although it’s not mentioned in the press release, Iomega tells me that the StorCenter Pro ix4-200r does still support the Bluetooth file exchange found on its little brothers
More coverage:
- EMC’s StorageZilla posted his impressions as well: Iomega adds iSCSI, threatens war on us all
- Carlo Costanzo is excited to use this in VMware environments: EMC’s Low Cost SAN Starter for VMware (Iomega)
- Chris Mellor gives it a UK spin in The Register: Iomega opens sub-£2k box of storage tricks
- Duncan Epping is also excited about Home Lab Storage
BrentO says
WOW, so that’s why they wouldn’t put iSCSI in the smaller units, eh? Thank goodness I didn’t pick one of those up – I was this close last week.
sfoskett says
I’ve been told by a few vendors that the last-generation embedded CPUs used in lots of consumer network storage don’t have the power to offer serious iSCSI performance. I wouldn’t hold my breath on iSCSI being added to anything that doesn’t have it already. But it looks like it’s got a solid future in this space!
BrentO says
Heh heh heh – I respect your political correctness. “Told by a few vendors” – oh, you mean the same folks who want me to ditch my old storage appliance and buy a new one, right? Riiight.
“Serious iSCSI performance” – if I wanted that, I wouldn’t be buying sub-$1k gear.
Dang it, so much for my free lunch.
Jason Boche says
Awesome! Totally Awesome!
John says
Ok,
I see four slots on the front. 1 TB or 500 GB drives to get 4 TB or 2 TB raw, then. Let’s see: 38 random IOPS per spindle @ 20 ms response and a RAID 10 write penalty of 2… ouch. These are definitely not enterprise class. You’ll hit a disk bottleneck long before iSCSI factors become a consideration in the performance calculus.
Still, this is targeted at SOHO. Buy two and some cheap host based replication software…
sfoskett says
They’re using RAID-5, not RAID-10. And I bet they use modern drives with at least twice the number of IOPS you cited. So it’s not that bad. But no, this isn’t going to unseat a cached enterprise array any time soon!
John says
Even if the IOPS/spindle is higher, that’s negated by the RAID 5 write penalty…
Where P is the performance in IOPS/spindle and N is the number of spindles in the array:
RAID 5 random write performance = P*(N-1)/4
RAID 10 random write performance = P*N/2
RAID 5 random read performance = P*(N-1)
RAID 10 random read performance = P*N
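Those four formulas can be sketched in a few lines of Python (my own illustrative sketch of the comment’s math; the spindle count and per-spindle IOPS figures are the example numbers from earlier in this thread):

```python
# Sketch of the RAID throughput formulas above.
# P = random IOPS one spindle sustains at the target response time;
# N = number of spindles in the array.

def raid5_random_write(p: float, n: int) -> float:
    # Each random write costs 4 back-end IOs (read data, read parity,
    # write data, write parity); one spindle's worth of work goes to parity.
    return p * (n - 1) / 4

def raid10_random_write(p: float, n: int) -> float:
    # Each write is mirrored: 2 back-end IOs per front-end write.
    return p * n / 2

def raid5_random_read(p: float, n: int) -> float:
    return p * (n - 1)

def raid10_random_read(p: float, n: int) -> float:
    return p * n

# The thread's example numbers: 4 spindles at 38 random IOPS each (20 ms).
print(raid5_random_write(38, 4))   # 28.5
print(raid10_random_write(38, 4))  # 76.0
```

At those numbers the RAID 10 random-write figure is well over twice the RAID 5 one, which is the crux of the disagreement below.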
Yes, SATA drives have a MAXIMUM IOPS that is higher. The point is that an IOPS value is meaningless unless associated with an IO size and a response time. Sure, I can push a maximum of around 90 IOPS per SATA spindle at an IO response time in excess of 1 second, but if each outstanding IO took a second to complete, my computer would appear frozen. A practical response time for random IO is 20 ms for many applications, and for databases in particular, closer to 10 ms. I stand by those numbers.
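The relationship between IOPS and response time above follows from Little’s Law (throughput = outstanding IOs / response time); a tiny sketch, with illustrative numbers only:

```python
# Little's Law sketch: sustained IOPS = IOs in flight / response time.

def iops_at(outstanding_ios: int, response_time_s: float) -> float:
    return outstanding_ios / response_time_s

print(iops_at(1, 0.020))  # 50.0 -- one IO in flight at a 20 ms response time
print(iops_at(90, 1.0))   # 90.0 -- a "90 IOPS" figure quoted at ~1 s
                          #        implies roughly 90 IOs queued up
```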
In the product release, you’ll note they gave an example of an application that uses large sequential reads… media streaming. I’m not pounding on Iomega, but this is standard vendor fare to put their product in the best artificially crafted workload light.
Still, this is targeted clearly at the SOHO market. How much random IO can a thread or two produce? Are home standards as exacting as enterprise standards? I think not. Add some cheap replication and a second unit and even I could see the potential for home office use. The price seems right. I wouldn’t rush out and attach one to the SQL or Oracle server backending my revenue-generating Web application, or even an Exchange server for that matter (if I had one; I’m not specifying that I do or don’t).
John
sfoskett says
Although I agree that these things are not going to be competitive with real enterprise storage systems, even at the low end, I just can’t agree that the performance will be that bad. Consider a CLARiiON, which is often configured with a single LUN on a single RAID-5 set of 4 SATA disks. How is this any different from an Iomega? Well, there’s lots more cache and processing power for one. But if the back end can’t cope, no amount of sleight of hand will matter.
My point is that lots of systems use 4-drive SATA RAID-5, and lots of these are in production/corporate environments, and lots don’t have crazy cache or processors. It’s not ideal, and not even high-performance, but it works. And the Iomega ought to work, too, in an appropriately low-demand environment. Even with Exchange or SQL Server or ESX. I certainly don’t expect 1-second IO waits. You can’t tar something as unacceptable just because it uses RAID-5!
As for the disks, I still contend that modern SATA drives (of the kind to be used in the Iomega and similar devices) can handle much much more IOPS than you’re suggesting. Yes it’ll vary greatly by IO size, and yes it might dip to the 20s in some situations. But modern drives can easily average 80 or 90 IOPS with real-world business application loads. Look at Tom’s Hardware tests and see for yourself!
Most small offices use cheap bare drives, perhaps software RAID, and Fast (not gigabit) Ethernet. I have no doubt that one of these low-end RAID systems would be a major, major improvement for them, and that an ix4 with iSCSI would be faster in all circumstances than an internal bare drive or two.
Thanks so much for the stimulating conversation. I really respect your thoughts, and hope you will continue to call me out and provoke discussion!
John says
The RAID 5 thing is all about the workload. RAID 5, with a write penalty of 4, is appropriate for workloads that have a high read/write ratio. Home directories would be a good example. There have been many studies of usage patterns for user home directories over the last year, and the results were very appropriate for RAID 5; even better, probably, would be RAID 6 fronted by some cache if the controller supports it.
As the read/write ratio moves closer to 1:1, the RAID 5 write penalty of 4 becomes a much larger factor in determining the spindle count required for adequate performance. Exchange 2007 with cached mode clients in particular has a read/write ratio very close to 1:1. For the same number and type of spindles, you’ll get twice the performance out of RAID 10 as you would RAID 5 in such an application workload environment. The rule of thumb is: If the read/write ratio of your application workload is less than the write penalty of your proposed RAID type, then your proposed RAID type is poorly suited for your application workload. You’ll simply waste spindles.
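As a rough illustration of that rule of thumb, here is a simplified model (my own sketch, not the exact formulas above; it assumes each front-end read costs one back-end IO, each front-end write costs the RAID write penalty in back-end IOs, and it ignores the parity spindle’s reduced read contribution):

```python
# Simplified mixed-workload model: how many front-end IOPS fit in the
# back-end budget of P * N spindle IOPS, given a read/write mix.

def frontend_iops(p: float, n: int, read_fraction: float,
                  write_penalty: int) -> float:
    backend_budget = p * n
    # Weighted back-end cost of one average front-end IO.
    cost_per_io = read_fraction + (1 - read_fraction) * write_penalty
    return backend_budget / cost_per_io

# 1:1 read/write mix (read_fraction = 0.5), 4 spindles at 80 IOPS each:
print(frontend_iops(80, 4, 0.5, write_penalty=4))  # RAID 5:  128.0
print(frontend_iops(80, 4, 0.5, write_penalty=2))  # RAID 10: ~213.3
```

Even in this generous simplification, RAID 10 pulls well ahead of RAID 5 as the write fraction grows.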
All that said, on the extremely low end of the small business environment, say 10 Exchange users @ 0.3 IOPS/user, you’re right; even a poorly suited RAID choice won’t make any difference. The IO load is too small to matter. When you start to scale, that’s when it gets ugly. Most small business owners I know have a dream, and for the most part a fairly solid plan, of expanding their business… Let’s not paint an overly rosy picture and end up putting a stumbling block in their way.
John
j1mbo007 says
A question for the clever people
running 4 × 1 TB drives with RAID 10, having only 1 TB left – will Exchange 2010 handle