The hot story in the news this week is Volkswagen’s reported brazen cheating on diesel engine emissions tests. It brought to mind a host of similar occurrences, from Samsung and HTC cheating at smartphone benchmarks to alleged cheating in SPC enterprise storage performance testing. Cynics say we should just assume we’re being cheated, but is that a world in which we want to live?
Everyone wants to be the best, so outrageous claims of supremacy are as old as time. In IT, these claims often revolve around synthetic benchmarks chosen to highlight a system’s performance. Buyers have grown wary of such claims, wisely asking to try before they buy. But predictability is even more important than real-world test results, and predictable performance is especially difficult for storage systems to achieve.
I’ve long been critical of poorly executed performance comparisons and the “fastest is always best” mentality behind them. But, inconsistent as it sounds, I still love reading the performance “comparos” in Car & Driver, and I remain convinced that the enterprise IT world needs lab tests and performance comparisons.
Ever since Microsoft and Intel declared that the combination of Windows and Nehalem could deliver over a million iSCSI IOPS, I’ve been curious about just how they did it. What black magic could push that many I/Os over a single Ethernet connection? And what was on the other end? Now Intel has revealed all in a white paper, and the results are surprising!
HP recently commissioned the Tolly Group to benchmark its BladeSystem c7000 against the Cisco UCS 5100. The short report focuses on two results and reads like so many competitive benchmarks in the IT industry: Tolly highlights metrics that play to the strengths of HP’s solution and the weaknesses of Cisco’s. What’s the real value of pinpoint maximum-performance benchmarks like these?