
Getting Hyper About Storage: New Options for the Data Center

Cisco forecasts that global Internet traffic in 2019 will be equivalent to 66 times the volume of the entire global Internet in 2005. Worldwide, Internet traffic will reach 37 gigabytes (GB) per capita by 2019, up from 15.5 GB per capita in 2014. That’s more than a doubling of traffic in just five years.

By Stefan Bernbo

All of this traffic is carrying more data than the world has ever seen before—approximately 2.5 exabytes of data every day, in fact. Where there is data, there is the need for storage, which creates a quandary for enterprises. How can they scale to meet this need, and how can they do it without breaking the bank?
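
A quick back-of-the-envelope check in Python bears out the “more than a doubling” claim, using only the Cisco per-capita figures cited above:

    # Cisco forecast figures quoted above.
    per_capita_2014 = 15.5   # GB per capita in 2014
    per_capita_2019 = 37.0   # GB per capita forecast for 2019

    growth = per_capita_2019 / per_capita_2014
    print(f"Five-year growth: {growth:.2f}x")  # ~2.39x, i.e. more than double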

Traditional hardware costs too much at the scale needed to be effective. Enterprises today need flexible, scalable storage approaches if they hope to keep up with rising data demands. Software-defined storage (SDS) offers the needed flexibility. In light of the varied storage and compute needs of organizations, two SDS options have arisen: hyperconverged and hyperscale. Each approach has its distinctive features and benefits, which are discussed below.

A Short History of Storage

The traditional storage approach housed storage and compute functions in separate hardware. Converged storage came along and combined storage and computing hardware to shorten delivery times and minimize the physical space required in virtualized and cloud-based environments. The goal was to improve data storage and retrieval and to speed the delivery of applications to and from clients.

It did accomplish those goals and was an improvement on traditional storage. Converged storage infrastructure takes a hardware-based, “building block” approach: it comprises discrete components, each of which can still serve its original purpose on its own. Converged storage is not centrally managed and does not run on hypervisors; the storage attaches directly to the physical servers.

The next evolution is hyperconverged storage infrastructure, which is software defined. All components are converged at the software level and cannot be separated out. This model is centrally managed and virtual-machine based. The storage controller and array are deployed on the same server, and compute and storage scale together. Each node has compute and storage capabilities. Data can reside locally or on another server, depending on how often the data is needed.
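
To make that coupling concrete, the following minimal Python sketch models a hyperconverged cluster. The class and attribute names are hypothetical illustrations, not any vendor's API; the point is simply that adding a node always grows compute and storage in lockstep.

    class HyperconvergedNode:
        """One commodity server contributing both compute and storage."""
        def __init__(self, cpu_cores, storage_tb):
            self.cpu_cores = cpu_cores
            self.storage_tb = storage_tb

    class HyperconvergedCluster:
        """Scaling out adds compute and storage together (1:1)."""
        def __init__(self):
            self.nodes = []

        def scale_out(self, node):
            # Every added node brings both resources at once.
            self.nodes.append(node)

        def totals(self):
            return (sum(n.cpu_cores for n in self.nodes),
                    sum(n.storage_tb for n in self.nodes))

    cluster = HyperconvergedCluster()
    cluster.scale_out(HyperconvergedNode(cpu_cores=32, storage_tb=24))
    cluster.scale_out(HyperconvergedNode(cpu_cores=32, storage_tb=24))
    print(cluster.totals())  # (64, 48): both dimensions grew together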

Hyperconverged storage increases agility and flexibility, which is exactly what today’s massive data volumes demand. It also promotes cost savings: organizations can use commodity servers, because software-defined storage takes features typically found in hardware and moves them to the software layer. Organizations that need compute and storage to scale together in a 1:1 ratio would use the hyperconverged approach, as would those deploying virtual desktop infrastructure (VDI) environments. The hyperconverged model is the storage version of a Swiss Army knife, useful in many business scenarios. Every node is identical; the only question is how many nodes a data center needs.

Hyperconverged storage seems to be exactly what enterprises need. Why is there also hyperscale storage, then? It’s a new storage approach created to address differing storage needs. Hyperscale computing is a distributed computing environment in which the storage controller and array are separate. As its name implies, hyperscale is the ability of an architecture to scale quickly as greater demands are made on the system. This kind of scalability is required to build big-data or cloud systems; it’s what Internet giants like Amazon and Google use to meet their vast storage demands. Software-defined storage, however, now enables many enterprises to enjoy the benefits of hyperscale.
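
A comparable sketch (again with hypothetical names) highlights the hyperscale difference: storage and compute sit in separate pools, so each can be scaled independently as demand dictates.

    class HyperscaleCluster:
        """Storage and compute are decoupled and scale independently."""
        def __init__(self):
            self.storage_tb = []     # capacities of storage-only nodes
            self.compute_cores = []  # core counts of compute-only nodes

        def add_storage_node(self, tb):
            self.storage_tb.append(tb)

        def add_compute_node(self, cores):
            self.compute_cores.append(cores)

    cluster = HyperscaleCluster()
    # A storage spike is met with storage nodes only; no compute is bought.
    for _ in range(3):
        cluster.add_storage_node(48)
    cluster.add_compute_node(32)
    print(sum(cluster.storage_tb), sum(cluster.compute_cores))  # 144 32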

One benefit of these hyper solutions is reduced total cost of ownership. Because they typically run on commodity servers, a data center can host millions of virtual servers without the added expense that the same number of physical servers would require. Data center managers want to get rid of refrigerator-sized NAS and SAN disk shelves, which are difficult to scale and very expensive. With hyper solutions, it is easy to start small and scale as needed. Using standard servers in a hyper setup flattens the architecture: less hardware needs to be bought, and the hardware that is bought costs less. Hyperscale lets organizations buy commodity hardware; hyperconverged goes one step further by running both elements, compute and storage, on the same commodity hardware. It becomes a question of how many servers are necessary.
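
A toy cost model makes the “start small and scale” economics visible. Every price and capacity below is a placeholder assumption chosen for illustration, not real pricing; the shape of the result is what matters: commodity increments track demand closely, while monolithic arrays force large step purchases.

    # Placeholder unit costs and capacities -- assumptions for illustration only.
    COMMODITY_NODE_COST = 5_000      # hypothetical cost per commodity server
    COMMODITY_NODE_TB = 24           # hypothetical usable capacity per node
    MONOLITHIC_ARRAY_COST = 250_000  # hypothetical cost per large disk shelf
    MONOLITHIC_ARRAY_TB = 500        # hypothetical usable capacity per shelf

    def ceil_div(a, b):
        return -(-a // b)

    def commodity_cost(tb_needed):
        # Buy only as many small nodes as current demand requires.
        return ceil_div(tb_needed, COMMODITY_NODE_TB) * COMMODITY_NODE_COST

    def monolithic_cost(tb_needed):
        # Each scaling step is one large capital outlay.
        return ceil_div(tb_needed, MONOLITHIC_ARRAY_TB) * MONOLITHIC_ARRAY_COST

    for tb in (50, 100, 400):
        print(tb, commodity_cost(tb), monolithic_cost(tb))
    # 50  15000 250000
    # 100 25000 250000
    # 400 85000 250000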

Fluid Storage for the Future

Now, more storage choices are available, depending on the needs of the business at any given time. A hyperconverged storage model is essentially one box with everything in it; hyperscale uses two sets of boxes: one set of storage boxes and one set of compute boxes. A software-defined storage solution can take over all the hardware and turn it into a type of appliance, or it can run as a virtual machine, which makes it a hyperconverged configuration.

Storage is now more flexible than ever, giving architects the freedom to meet storage needs in a fluid manner. These hyper solutions are not an either-or proposition; they can be combined. That flexibility helps enterprises feel confident that they can future-proof their storage capacity and cost-effectively manage the high volumes of data that come their way.