Software-Defined Storage: A Deeper Look

It’s refreshing to see a reasoned analysis of the software-defined storage trend, since so much of what is published is hype rather than reality. Stan Stevens, senior analyst at Technology Business Research (TBR), has published “Software-Defined Storage -- Peeling Back the Layers” in an attempt to shine some light on the emerging technology.

By Jim O'Reilly

According to Stevens, software-defined storage will be an important data center trend as organizations, under pressure to cut costs and rapidly increase storage capacity, look to reduce admin workload and take advantage of commodity hardware. Early adopters of SDS have been huge Internet companies like Google, and more recently, large corporate data centers.

Related: Handling Virtual Storage Challenges

Stevens describes a market in which traditional vendors are responding to the trend by pushing single-pane management approaches, which invariably come with ties to their other software and hardware products.

TBR’s view somewhat constrains the discussion of the broader aspects of SDS. It’s clear that traditional big-iron storage vendors are embracing SDS reluctantly and defensively, and are not focused on providing open, general solutions. While Stevens doesn’t say it directly, this dichotomy of being both “open” and proprietary has created a cloud of FUD around SDS.

He sees SDS mainly as an exercise in storage management, with a focus on single-pane control of heterogeneous environments as an end goal. My view is that this is the traditional storage vendors’ perspective because it provides vehicles for retaining control and locking customers into single vendor solutions. Vendor support for an open SDS ranges from desultory to lip service.

OpenStack is likely to fill the broad orchestration goal, though I would add that it will tend to be forward-looking in doing so, rather than trying to orchestrate 10- or 20-year-old RAID arrays. In this industry, initiatives such as SNIA’s SMI-S will provide the common language to talk to newer hardware.

Related: Solving the Mystery of Software-Defined Storage

When Stevens describes how traditional storage vendors are approaching SDS, I think he hits on a major issue. Traditional vendors are causing a lot of confusion about what SDS is or isn’t. Most of their products avoid the painful encroachment of true, cheap commodity hardware and instead are essentially bolt-ons to existing expensive solutions.

Traditional storage vendors are unwilling to accept that commodity hardware can be truly cheap. Disk drives used by Google cost $30 per terabyte, while servers are rock-bottom, million-unit, bare-bones designs that yield gross margins of less than 10 percent for their makers. Established vendors are desperately trying to preserve 30 percent gross margins and $1,000-plus-per-terabyte drive pricing.

In reviewing traditional vendor positions on SDS, I think TBR has captured their dilemma. They are doing their best to act as a middleman between the commodity provider and the customer. This adds value only if the integration task is complex, and it’s worth noting that most of these traditional vendors have a veritable storm of SDS-related interfaces that complicate things.

However, TBR's review of the storage software segment of the market is too backward-looking. It tends to place vendors' products in the context of the storage market as we’ve seen it over the last few years, where the dominant traditional vendors define the structure, and even the price and margin schemas we use. In reality, the fun part of SDS is that it is opening up completely new ways to build solutions on cheap commodity appliances, defined by low-cost, very high-volume makers such as Quanta, Supermicro, and Lenovo rather than Dell and EMC.

In general, TBR is a little timid in forecasting a UNIX/Linux-type revolution for storage. Stevens’ summary addresses price evolution rather than the wholesale drop that results from web-priced components. This is in line with IDC and Gartner, which are heavily invested in the status quo, and doesn’t reflect the opportunity to undercut hardware and software costs that is beginning to occur in the mainstream (the mega-cloud service providers were there two years ago).

Related: Why the Promise of Unlimited Cloud Storage Keeps Getting Broken

The proposed Dell and EMC merger highlights the sense of doom that traditional vendors feel in this area. They are shifting into a defensive posture, much as the mainframe makers did when UNIX moved into mainstream adoption. They hope for a gradual trend they can adjust to, but the economic forces of SDS and other data center trends, such as the cloud, will make the shift abrupt.

In reflecting the gradual shift model, TBR has followed the “conventional wisdom” of the status quo side of the industry and downplayed the pressures making rapid change inevitable.