What is SDx, and what is its effect on data center hardware?



By Joe Vidal
Director & Chief Technologist, Channel
NA Hybrid-IT Business Group
Hewlett Packard Enterprise
 
In this article, we're going to tackle both a basic understanding of what software defined everything (SDx) is and what it means for the future value, perceived or real, of vendor differentiation in edge hardware components.

Software Defined Everything

First of all, let me start by defining the critical terms associated with SDx – or software defined everything – within the data center.
When we reference SDx, we're talking about software defined "x," where "x" is a variable that can refer to anything (or even everything) within your data center or IT infrastructure. So, when we discuss software defined data center (SDDC) designs, we are referring to concepts and technologies that leverage the same aspects of SDx in order to deliver a completely automated data center that responds to, and is driven by, software defined rules. In practice, this means the software contains rules that define the availability of the infrastructure and its ability to scale up or scale down, based on pre-defined needs or triggers across the application stack or user load.
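
To make this concrete, here is a minimal sketch, in Python, of what one such software defined scaling rule might look like. The metric names, thresholds, and resulting actions are hypothetical; a real automation suite would express the same logic in its own policy language.

    # Minimal sketch of a software defined scaling rule (hypothetical names and thresholds).
    from dataclasses import dataclass

    @dataclass
    class ScalingRule:
        metric: str              # e.g. "cpu_utilization_pct" or "active_user_sessions"
        scale_up_above: float    # add capacity when the metric exceeds this value
        scale_down_below: float  # release capacity when the metric falls below this value

    def evaluate(rule: ScalingRule, current_value: float) -> str:
        """Return the action the automation layer should take for one metric sample."""
        if current_value > rule.scale_up_above:
            return "scale_up"    # e.g. provision an additional node or request a cloud VM
        if current_value < rule.scale_down_below:
            return "scale_down"  # e.g. drain and release the resource
        return "no_action"

    # Example: add compute when average CPU across the cluster passes 75%,
    # and release it again once utilization drops below 30%.
    cpu_rule = ScalingRule("cpu_utilization_pct", scale_up_above=75.0, scale_down_below=30.0)
    print(evaluate(cpu_rule, 82.0))  # -> "scale_up"
    print(evaluate(cpu_rule, 25.0))  # -> "scale_down"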


How Does All This Relate to a Move Toward the Cloud?

Many IT managers (and even business managers) decide that the only way they can possibly enable the business to scale up fast enough is to "go to the cloud." Many businesses don't have additional compute and storage resources simply sitting around in case there's a near-immediate need to scale. Therefore, they determine that the easiest thing to do is to swipe their credit card and get immediate virtual compute and/or virtual storage resources from a cloud service provider. That can obviously work for many business applications. But what happens when the business rules that allowed you to "go to the cloud" change, dictating a more expensive secured environment, more storage, or more administrator access than was originally contracted within the cloud service-level agreement?

This brings us back to the original interest or business needs that prompted the initial look at public cloud as a service (or the need for an SDDC) in the first place. Most business managers are really interested in a move toward hybrid cloud computing, as opposed to a strictly public cloud strategy. The real benefit of cloud isn't necessarily the move toward strictly shared, public, externally hosted resources, but the cost savings brought to the business by cloud automation, or software defined computing, where "computing" is the x. This is delivered by finding the right mix of hybrid cloud resources, both private and public, to meet the business demands for security and availability/growth expectations, while utilizing highly leveraged data center resources. In other words, these are data center resources that are approaching 100% utilization while still delivering high availability.
Now, this auto-scaling aspect can be tricky, because most businesses can't afford to have additional hardware resources sitting around, unused, for an indefinite time. That's why several manufacturers and system integrators (channel resellers) have begun offering various forms of "Instant Capacity," or iCap, as a service. This delivers those additional compute and/or storage resources in a flexible delivery model that allows clients to hold them on-premises for a pre-defined period, prior to use, without having to incur the cost of those assets until they're needed.
This flexible capacity or “FlexCap” delivery model is that magical piece of the SDx puzzle that enables clients to deliver private cloud services while leveraging public cloud economics and agility! Finally, when we can do all these things together, we are delivering software defined everything or SDx.

What Are the Risks and Implications for the Hardware Being Automated?

The common misperception is that once you move to a completely software defined data center or hybrid cloud service environment, there is no value left for the hardware manufacturers to build or add into the hardware itself. Nothing could be further from the truth. And, by the way, the same is true of big data solutions that leverage IoT, the internet of things. It will continue to be incumbent upon the hardware manufacturers to include Self-Monitoring, Analysis, and Reporting Technology (SMART) in as many aspects of the infrastructure as possible, while still being competitive on price. Now, let me explain the full meaning of that last sentence.
The reference to leveraging SMART technology is about more than simply using intelligent technologies. I'm referring to the use of Self-Monitoring, Analysis, and Reporting Technology, as originally patented by Compaq (now Hewlett Packard Enterprise) back in the mid-'90s. This inherently intelligent capability enables the hardware to monitor itself and report hundreds of key utilization and performance metrics up to the software layer, which in turn ties them back to the private cloud (or software defined) abstraction and automation layers within the data center itself.
The key is to tightly integrate the reporting of not only the performance metrics, but also the key thresholds that, when exceeded, tend to lead toward service outages. Therefore, this inherently intelligent hardware layer enables a higher level of availability, within the private cloud instance or within the user’s private data center, while delivering all the same instantaneous scaling capabilities offered by the public cloud service providers.
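As a rough illustration of that reporting loop, here is a short Python sketch. The metric names and threshold values are invented for the example; they are not an actual SMART schema or HPE telemetry format.

    # Illustrative sketch of hardware telemetry feeding the automation layer.
    # Metric names and threshold values are hypothetical, not an actual SMART schema.

    WARNING_THRESHOLDS = {
        "drive_reallocated_sectors": 50,  # climbing reallocation counts often precede drive failure
        "fan_speed_rpm_min": 1200,        # a fan spinning below this rate is suspect
        "inlet_temp_celsius_max": 40,     # sustained high inlet temperatures shorten component life
    }

    def check_health(sample: dict) -> list[str]:
        """Compare one telemetry sample against thresholds and return alerts to raise."""
        alerts = []
        if sample.get("drive_reallocated_sectors", 0) > WARNING_THRESHOLDS["drive_reallocated_sectors"]:
            alerts.append("pre-failure: schedule drive replacement and rebuild to a spare")
        if sample.get("fan_speed_rpm", 10_000) < WARNING_THRESHOLDS["fan_speed_rpm_min"]:
            alerts.append("cooling degraded: migrate workloads off this node")
        if sample.get("inlet_temp_celsius", 0) > WARNING_THRESHOLDS["inlet_temp_celsius_max"]:
            alerts.append("thermal warning: check airflow and rack placement")
        return alerts

    sample = {"drive_reallocated_sectors": 72, "fan_speed_rpm": 4500, "inlet_temp_celsius": 28}
    for alert in check_health(sample):
        print(alert)  # the automation layer would act on these before an outage occurs
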
It’s important to note that while we’re talking about delivering software defined everything, there are some things that are better left to hardware. Here’s a short list:

  • RAID array control of disk subsystem vs. software defined array functionality
  • Encryption of data at rest and/or data in flight
  • TOE NIC capability (TCP offload engine), which performs TCP/IP packet assembly and disassembly within the Network Interface Controller vs. having the software do this in main memory
  • On-board management and security, enabled via an on-board management processor, such as HPE’s Integrated Lights-Out processor

It’s imperative that you strive to control and perform these functions in hardware, in order to free up the CPU and main memory (RAM) to increase the number of VMs that can be hosted on any compute node. And let’s not forget that it’s much easier to protect an ASIC from a hacker than it is to secure the software.
Separate physical management processors add a higher level of security than any single software security layer can deliver. Furthermore, this is how recent enhancements in the HPE iLO 5 processor deliver NSA-level security algorithms and capabilities, delivering the world's most secure server technologies. These enhance the many security features natively available within the Intel Xeon Scalable (Skylake) processor family, along with added workload-specific tweaks to the BIOS, based on thousands of hours of benchmark testing.
These are all features that most public cloud service providers are typically NOT going to offer, since they tend to stick with middle-of-the-road baseline performance settings for the entire hardware stack, independent of client. Therefore, these types of hardware enhancements are predominantly features of PRIVATE cloud implementations, because they are simply too costly to implement as a one-off for each public cloud customer. Thankfully, recent enhancements to management automation, like HPE's OneView, allow these BIOS performance tweaks to be applied via an automated, template-driven server profile, effectively making them software defined.
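To give a feel for what "template-driven" means in practice, here is a rough sketch of driving such a server profile through a REST-style automation API using Python's requests library. The appliance address, endpoint paths, header names, and request fields are illustrative assumptions, not an exact OneView API reference; consult HPE's documentation for the real interface.

    # Rough sketch of applying a template-driven server profile through a REST-style
    # automation API. Endpoint paths, header names, and fields below are illustrative
    # assumptions, not a verified OneView API reference.
    import requests

    BASE = "https://oneview.example.local"  # hypothetical management appliance address
    headers = {"Content-Type": "application/json"}

    # 1. Authenticate and capture a session token (field names are assumptions).
    login = requests.post(f"{BASE}/rest/login-sessions",
                          json={"userName": "admin", "password": "secret"},
                          headers=headers, verify=False)
    headers["Auth"] = login.json()["sessionID"]

    # 2. Create a server profile from an existing template, so the BIOS and workload
    #    tuning captured in the template gets applied to the target server hardware.
    profile = {
        "name": "db-node-07",
        "serverProfileTemplateUri": "/rest/server-profile-templates/<template-id>",
        "serverHardwareUri": "/rest/server-hardware/<server-id>",
    }
    resp = requests.post(f"{BASE}/rest/server-profiles", json=profile,
                         headers=headers, verify=False)
    print(resp.status_code)  # the appliance applies the settings asynchronously
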
We keep hearing, again and again, C-level managers say, "I don't want to be married to the metal anymore." They mean this in the traditional sense: they don't want to deploy traditional IT components (compute, storage, and networking) and then be in "technical debt" to those devices while someone in finance sweats those assets for three to five years. Rather, what I'm hearing is that they want some distance between the workloads and where and when they have to run them. "The cloud" gives them this capability, and so do newer approaches like hybrid IT.

 

What About HCI Appliances and Hybrid IT?

Lastly, there is a newer technology in most hybrid IT shops called hyper-converged infrastructure (HCI). This is simply the combination of a software defined storage (SDS) array and virtualized compute (leveraging a hypervisor such as VMware ESXi, KVM, or Hyper-V), all in one clean, appliance-like package. Clusters of HCI appliances can offer five nines (99.999%) of availability or better, and they are built on top of industry-standard x86 server architectures, which lowers the cost of implementation while lowering the risk associated with the use of proprietary hardware. However, please understand that they are NOT all created equal, nor do they all offer greater than three nines of availability. It all depends on the included hardware components and the inherent uptime estimates derived from the individual components' mean time between failures (MTBF) statistics.
Some vendors allow you to start with a single HCI appliance, but a single node obviously does NOT offer any kind of high availability (HA) by itself. When paired with one or more additional HCI appliances, whether using RAID 5 at the disk level and RAID 1+0 at the appliance level or erasure coding technologies, these clusters can exceed the typical enterprise data center requirement of five nines of availability.
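To put rough numbers behind those "nines," here is a back-of-the-envelope Python sketch. The MTBF and MTTR figures are made up for illustration, and the model assumes the paired appliances fail independently.

    # Back-of-the-envelope availability math; MTBF and MTTR values are illustrative only.

    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Steady-state availability of a single unit."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    def redundant_pair(a: float) -> float:
        """Availability of two independent units where either one can carry the load."""
        return 1 - (1 - a) ** 2

    single = availability(mtbf_hours=50_000, mttr_hours=8)  # one appliance: roughly 99.984%
    pair = redundant_pair(single)                           # mirrored pair: roughly 99.99999%

    print(f"single appliance: {single:.5%}")
    print(f"redundant pair:   {pair:.7%}")
    # The pair comfortably clears five nines (99.999%), while the single appliance does not,
    # which is why a lone HCI node is not a high-availability design.
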
Unfortunately, I can't do the topic of HCI appliances justice in just two or three paragraphs, so we'll have to reserve the deep dive on HCI for another article. At that time, we can get into the differences and needs for compression, de-duplication, and optimization technologies, and how these perform versus Azure Stack, the many custom HCI appliances, and VMware's vSAN ReadyNodes.

The Bottom Line on SDx

The bottom line when you seek to implement SDx is that it's important to leverage as much inherently intelligent hardware, or built-in SMART technology, as you can afford, while squeezing all the time-wasting human intervention out of your data center infrastructure. When designed and implemented properly, a private cloud or software defined data center can absolutely approach, and even exceed, the level of cost savings that many public cloud service providers offer. More importantly, you get to determine at what performance thresholds each additional compute, storage, or application resource gets automatically added and placed into production alongside your existing infrastructure. It shouldn't matter whether those additional resources are virtual or physical, either.
It all comes down to the automation toolset (the cloud automation suite), the tight coupling of the intelligent hardware layer (SMART technology), and ensuring the business rules are properly applied so that the business applications and data center resources remain available when needed. When all of that is successfully integrated as a cohesive data center automation stack, you have implemented a software defined data center that will enable you to deliver software defined everything.
Now, if you really want to get fancy, you can implement an open hybrid cloud orchestration element manager (an automation layer) that enables your private cloud implementations to flex outside the physical walls of your data center facility (and back), leveraging public cloud assets when needed. This delivers a more complete software defined, hybrid cloud implementation within your software defined data center.

Contact Us

For more information on SDx or HPE through Arrow, please email ecscloudservices@arrow.com or call 1.877.558.6677.
 

About Joe Vidal

Joe Vidal is currently the North America Director & Chief Technologist for Hybrid IT to the Channel at Hewlett Packard Enterprise. Joe has been designing software and data center solutions for more than 30 years and frequently presents on behalf of Arrow and HPE at Gartner and IDC events around the world. You can find Joe's presentations of HPE's data center technologies in the Arrow-HPE Virtual Specialist App, in the mobile app store of your choice, or on the web at www.Arrow-HPEVS.com.
 
