The IT industry is rife with buzzwords: convergence, hyperconvergence, software-defined this and that, and so on. It can be a challenge for even the most dedicated IT professionals to keep up with all the new trends in technology, not to mention the new terms invented by marketers who want you to think that the shiny new product they just announced is on the leading edge of what’s new and cool…when in fact it’s merely repackaged existing technology.
What does it really mean to have “software-defined storage” or “software-defined networking”…or even a “software-defined data center”? What’s the difference between “converged” and “hyperconverged”? And why should you care? This two-part post will suggest some answers that we hope will be helpful.
First, does “software-defined” simply mean “virtualized?”
No, not as the term is generally used. If you think about it, every piece of equipment in your data center these days has a hardware component and a software component (even if that software component is hard-coded into specialized integrated circuit chips or implemented in firmware). Virtualization is, fundamentally, the abstraction of software and functionality from the underlying hardware. Virtualization enables “software-defined,” but as the term is generally used, “software-defined” implies more than just virtualization: things like policy-driven automation and a simplified management infrastructure.
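To make that distinction concrete, here is a deliberately simplified sketch of what “policy-driven” means. The names and numbers below are invented for illustration (this is not any vendor’s actual API): you declare the state you want as policy, and an engine compares it against what it observes and generates the corrective actions on its own.

```python
# Toy illustration of policy-driven automation (invented names, not a real API):
# desired state is declared as policy, and an engine reconciles reality to match it.

POLICY = {
    "web-tier": {"min_vms": 2, "max_cpu_percent": 70},
}

def current_state():
    # Hypothetical inventory lookup; a real engine would query the hypervisor.
    return {"web-tier": {"vms": 1, "cpu_percent": 85}}

def reconcile(policy, state):
    """Compare policy to observed state and return the corrective actions."""
    actions = []
    for tier, rules in policy.items():
        observed = state[tier]
        if observed["vms"] < rules["min_vms"]:
            actions.append(f"provision another VM for {tier}")
        if observed["cpu_percent"] > rules["max_cpu_percent"]:
            actions.append(f"add capacity to {tier}")
    return actions

print(reconcile(POLICY, current_state()))
# ['provision another VM for web-tier', 'add capacity to web-tier']
```

In a real product the “actions” would be API calls to a hypervisor or storage controller rather than strings, but the shape of the loop, declare, observe, reconcile, is the same.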
An efficient IT infrastructure must be balanced properly among compute, storage, and networking resources. Most readers are familiar with the leading players in server virtualization, the “big three” being VMware, Microsoft, and Citrix. Each has its own control plane to manage its virtualization hosts, but some cross-platform management is available: vCenter can manage Hyper-V hosts, and System Center can manage vSphere and XenServer hosts. It may not be completely transparent yet, but it’s getting there.
What about storage? Enterprise storage is becoming a challenge for businesses of all sizes, due to the sheer volume of new information being created: by some estimates, as much as 15 petabytes of new information worldwide every day. (That’s 15 million billion bytes.) The total amount of digital data that needs to be stored somewhere doubles roughly every two years, yet storage budgets are increasing only 1% to 5% annually. Hence the interest in being able to scale up and out using lower-cost commodity storage systems.
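A quick bit of arithmetic with the figures above (data doubling every two years, budgets growing at the optimistic end of 5% per year) shows why that gap matters:

```python
# Back-of-the-envelope math using the figures cited above.
years = 6
data_growth = 2 ** (years / 2)    # capacity needed, as a multiple of today
budget_growth = 1.05 ** years     # budget available, as a multiple of today
print(f"After {years} years: {data_growth:.1f}x the data, "
      f"{budget_growth:.2f}x the budget")
# After 6 years: 8.0x the data, 1.34x the budget
```

Eight times the data on roughly a third more budget is exactly the squeeze that makes lower-cost, scale-out storage so attractive.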
But the problem is often compounded by vendor lock-in. If you have invested in Vendor A’s enterprise SAN product, and now want to bring in an enterprise SAN product from Vendor B because it’s faster/better/less costly, you will probably find that they don’t talk to one another. Want to move Vendor A’s SAN into your Disaster Recovery site, use Vendor B’s SAN in production, and replicate data from one to the other? Sorry…in most cases that’s not going to work.
Part of the promise of software-defined storage is the ability not only to manage one vendor’s storage hardware via your SDS control plane, but also to pull in all of the “foreign” storage you may have and manage it all transparently. DataCore, to cite just one example, allows you to do just that. Because the DataCore SANsymphony-V software runs on a Windows Server platform, it can aggregate any and all storage that the underlying Windows OS can see into a single storage pool. And if you want to move your aging EMC array into your DR site and have your shiny new Compellent production array replicate data to the EMC array (or vice versa), just put SANsymphony-V in front of each of them and let the DataCore software handle the replication. Want to bring in an all-flash array to handle the most demanding workloads? Great! Bring it in, present it to DataCore, and let DataCore’s auto-tiering feature dynamically move the most frequently accessed blocks of data to the storage tier that offers the highest performance.
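For illustration only, here is a toy version of the auto-tiering idea. This is not DataCore’s actual algorithm, and the tier names and capacities are made up; it simply shows the general technique of counting accesses and keeping the hottest blocks on the fastest tier.

```python
# Simplified sketch of frequency-based auto-tiering (not any vendor's algorithm):
# count how often each block is accessed, then place the hottest blocks on the
# fastest tier and demote the rest.
from collections import Counter

TIER_CAPACITY = {"flash": 2, "sas": 4, "sata": 100}   # blocks per tier (toy numbers)

access_counts = Counter({"blk-01": 950, "blk-02": 10, "blk-03": 430,
                         "blk-04": 5, "blk-05": 720, "blk-06": 1})

def place_blocks(counts, capacity):
    """Assign blocks to tiers in descending order of access frequency."""
    placement = {}
    hottest_first = [blk for blk, _ in counts.most_common()]
    for tier, slots in capacity.items():
        for blk in hottest_first[:slots]:
            placement[blk] = tier
        hottest_first = hottest_first[slots:]
    return placement

print(place_blocks(access_counts, TIER_CAPACITY))
# {'blk-01': 'flash', 'blk-05': 'flash', 'blk-03': 'sas', ...}
```

A production implementation works on much finer-grained statistics and moves data continuously in the background, but the principle is the same: the hot blocks earn their place on flash.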
What about software-defined networking? Believe it or not, in 2013 we reached the tipping point: there are now more virtual switch ports than physical switch ports in the world. Virtual switching technology is built into every major hypervisor, and the major players in the network appliance market are making their technology available in virtual appliance form. For example, WatchGuard’s virtual firewall appliances can be deployed on both VMware and Hyper-V, and Citrix’s NetScaler VPX appliances can be deployed on VMware, Hyper-V, or XenServer. But again, “software-defined networking” implies the ability to automate changes to the network based on some kind of policy engine.
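Again purely as an illustration (the event names and actions below are invented, not any vendor’s API), a policy engine on the network side boils down to the same pattern: rules watch for events and apply changes automatically instead of waiting for someone to log in to a switch.

```python
# Toy network policy engine (invented event names and actions, for illustration only):
# rules map observed events to automated network changes.

POLICIES = [
    {"when": "vm_migrated",    "then": "move port group and firewall rules with the VM"},
    {"when": "port_saturated", "then": "rebalance traffic to an alternate uplink"},
]

def handle_event(event, policies):
    """Return the automated actions a policy engine would apply for an event."""
    return [p["then"] for p in policies if p["when"] == event]

print(handle_event("vm_migrated", POLICIES))
# ['move port group and firewall rules with the VM']
```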
If you put all of these pieces together, vendor-agnostic virtualization + policy-driven automation + simplified management = software-defined data center. Does the SDDC exist today? Arguably, yes: one could certainly make the case that VMware’s vCloud Automation Center, Microsoft’s Windows Azure Pack, the Citrix-backed Apache CloudStack, and the open-source OpenStack all have many of the characteristics of a software-defined data center.
Whether the SDDC makes business sense today is not as clear. TechTarget quotes Brad Maltz of Lumenate as saying, “It will take about three years for companies to learn about the software-defined data center concept, and about five to ten years for them to understand and implement it.” Certainly some large enterprises may have the resources, both financial and skill-related, to begin reaping the benefits of this technology sooner, but it will be a challenge for small and medium-sized enterprises to get their arms around it. That, in part, is what is driving vendors to introduce converged and hyperconverged products, and that will be the subject of Part 2 of this series.