I’ve been thinking a lot lately about microprocessors, from the many-core CPUs that AMD and Intel introduced recently to the massively scalable GPGPU processing that’s taking machine learning by storm. After years of consolidation on commodity x86 CPUs, it seems that the computing paradigm is turning again to specialized offload processors. This trend towards heterogeneous computing will change the face of hardware, from mobile devices to the datacenter.
Considering the history of computing, from the enterprise to the home.
With the advent of AMD Threadripper and Epyc, we are about to see an explosion of PCIe lanes in the pro-sumer and datacenter market. Although many of those lanes will be taken up by conventional PCIe cards, some will be used for SSDs (M.2 and U.2) or for external connectivity. This is where OCuLink might finally take off: As an AMD alternative to Thunderbolt for external PCIe peripheral connectivity.
As of today, EMC Corporation is no longer an independent company. Who thought we would see this day? From now on, EMC is simply a brand for parts of Dell’s Infrastructure Solutions and Services businesses. This marks a major shift in the enterprise storage world, for IT, and perhaps for American business in general.
Last week I headed to Austin, Texas to attend the semi-annual OpenStack Summit there. Along with the usual socializing, I was looking to understand the current state of the technology: What does OpenStack really mean these days, and where is it going? Let’s start with “free”. As “the Internet” is quick to point out, this critical word has multiple […]
I’m really excited about the prospects of memory-addressable flash. Moving flash closer to the CPU and addressing it as memory rather than block storage brings tremendous performance benefits, and is a once-in-a-generation radical change to system architecture. But questions remain as to how it can be integrated with today’s applications. Now Plexistor is here with a promising solution: Their “Software-Defined Memory” concept is a generic filesystem for storage, from NVDIMM to NVMe to SSD.
A few years back, I wrote an immensely popular series of blog posts outlining the four things that were holding storage system performance back, and the ways to fix them. At the time, I created some presentation content to go along with these posts, and even considered pulling them into a white paper, but nothing came of that. Now, however, I’m pleased to announce that my Four Horsemen are accompanying me to the stage November 10, 2015 at the DeltaWare Data Solutions Emerging Technology Summit in Edina, Minnesota.
More than five years ago, I blogged about a “stupidly cool” terminal font. Now that Mac OS X isn’t a big cat anymore, I figured it was time to repeat that: If you’re an old-school computer nerd like me, Glass TTY VT220 is the coolest terminal font for Mac OS X!
Waves of innovation and waves of companies crash on the storage market, but the same incumbent leaders and product lines survive for decades. Are things changing? It’s hard to see sometimes, but real progress has been made.
Data storage has always been one of the most conservative areas of enterprise IT. There is little tolerance for risk, and rightly so: Storage is persistent, long-lived, and must be absolutely reliable. Lose a server or network switch and there is the potential for service disruption or transient data corruption, but lose a storage array (and thus the data on it) and there can be serious business consequences.
“One size fits all” doesn’t work for Ethernet, but this proliferation of speed options sounds like trouble without automatic capability negotiation. It’s nice to have options, but the IEEE must remain focused on interoperability and rein in the interests of the various companies proposing next-generation Ethernet technologies.