The Data Center Is Better Thought of as a Computer

In 2013, David Strauss predicted that the data center container solution adopted by large B2B and B2C provider Amazon was the way of the future. Amazon deployed multiple container solutions in what would later be referred to as a container farm or hive. Industry predictions forecast substantial gains in profit, partly due to costs far lower than those of previous data center models. Now, in Q2 of 2016, we see a new trend: big names like Microsoft, Google, and Amazon are leaving the container solution behind because of the difficulty of scaling the approach in a rapidly growing market. Why would these large brands shift away from the container virtualization model? And if they're switching, what are they building instead?


A History

Container solutions for data centers were clunky and inefficient at their debut in 2005. By 2013, however, advances in the technology were revolutionary for their time. Picture the PODS moving container, but for data centers. These high-tech solutions could be bought fully equipped, with custom options for your every need. For big companies, these containers offered more than just quick, easy placement. A range of containers can be found, for example, from AST Modular, including IT standalone, All-in-One, Natural or Direct Free Cooling, and more.

Linux Journal illustrates the efficiencies of container solutions in the figures below:

Figure 1. Traditional virtualization and paravirtualization require a full operating system image for each instance.
Figure 2. Containers can share a single operating system and, optionally, other binary and library resources.

First, Google

Given the sheer size of Google, it takes a massive, well-composed structure to layer the multiple data centers used across the holding company's many platforms. Add the speed requirements those data centers must meet, and you're already scratching your head. Top it off with the financial burden of building and maintaining such an enormous data center, and you have a real problem to solve.

“Ten years ago, we realized that we could not purchase, at any price, a data center network that could meet the combination of our scale and speed requirements.” -Google Representative

Meeting this need meant abandoning container solutions in favor of a custom, well-integrated approach. Departing from the historical data center network design, in which signals are sent back and forth between servers and users, Google took a whole new approach in 2015. Enter the methodology of "bisectional bandwidth," in which Google's servers talk directly to each other. As a result, a single webpage can trigger literally thousands of back-end calls. This was made possible by the innovative thinking going on at Google: seeing data centers as computers themselves, not just data storage.
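To make the idea concrete, here is a minimal sketch of how one webpage request can fan out into many concurrent back-end calls. The `backend_call` function and the service count are hypothetical stand-ins, not Google's actual API; in a real data center each call would be an RPC to another server, and high server-to-server bandwidth is what makes this fan-out pattern practical.

```python
from concurrent.futures import ThreadPoolExecutor

def backend_call(service_id):
    # Hypothetical stand-in for an RPC to another server in the
    # data center (e.g. a search shard, an ad server, a cache).
    return f"result-from-service-{service_id}"

def render_page(num_services=1000):
    # Fan out many back-end calls concurrently. When servers can
    # talk directly to each other at high bandwidth, a single page
    # load can aggregate results from thousands of services.
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = list(pool.map(backend_call, range(num_services)))
    return results

results = render_page()
```

The point of the sketch is the shape of the traffic, not the code itself: one front-end request becomes thousands of east-west (server-to-server) messages, which is exactly the load profile that "bisectional bandwidth" is designed to carry.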

Follow the Leader

Google's August 2015 peek under the hood of its newly transformed data center enabled other companies to learn from it and apply the lessons across their own organizations. We now see companies like Amazon and Microsoft moving in similar directions, as applicable to their structures. For the most efficient scale and the ability to function anew, we must always think innovation first. Question everything you take for granted; after all, the data center is really just a large computer.