Software-defined data centre: The path toward intelligent infrastructure

By Bill Stevenson, Executive Chairman of Sanbolic.


New technologies, including cloud, mobility, social, and analytics, are enabling important advances in business capabilities. Intelligent IT infrastructure – infrastructure that can effectively deliver on the potential of these capabilities – offers immense benefits. While few organizations have fully achieved this objective, many of the components are being put in place. Over the next several years, leading organizations will move much closer to the promise of intelligent infrastructure thanks to the emergence and adoption of the software-defined data center.

Mapping the Opportunity

Intelligent infrastructure is built from standardized components that can be orchestrated into a system that proactively anticipates the needs of an enterprise’s customers, its organization, and its supply chain. In the past, this has been difficult to do. Little information was captured, and the data that did exist was often locked up in inflexible software/hardware silos, making it difficult to use that information to inform decisions and anticipate customer requirements. Much of my early career as a strategy consultant was spent extracting high-priority information from those silos and using crude spreadsheet tools to help senior management teams see the patterns in the data and improve decision-making. The value they found in using that data more effectively (even the 10 percent we focused on) easily covered the cost of my team of well-paid professionals. However, paying employees or consultants to spend hours sifting through data is an extremely expensive and slow way to access business information.

Enterprises today cannot afford to ignore 90 percent of their data. Thankfully, capturing, analyzing, and making information available for use across the enterprise, its customers, and its supply chain is becoming easier as IT infrastructure evolves into flexible pools of resources that can be orchestrated intelligently. Hardware-defined silos are giving way to software-defined functionality on standardized hardware. Not surprisingly, the large public clouds have made the greatest progress on this front. Their hardware infrastructure typically consists of large numbers of low-cost servers. Services are delivered from these server pools, but the orchestration of workloads, data storage and data management is all done through software running on servers in the same resource pool.

Most enterprise data centers still face obstacles to the seamless orchestration of IT infrastructure needed to support the capture, analysis and distribution of information across the business. Over the past decade, the majority of servers comprising the compute resources have been virtualized. Abstracting that functionality into software has greatly improved the ability to provision or de-provision compute capacity as application requirements change over the course of a day, week or product cycle. However, storage and networking resources are often still provisioned from relatively inflexible hardware silos, and vendor-proprietary management interfaces further complicate the process of flexibly delivering resources as needed.
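To make that elasticity concrete, here is a minimal sketch of threshold-based compute provisioning. It assumes a hypothetical VM pool and utilization feed; none of the names below correspond to a real hypervisor API.

```python
# Minimal sketch of elastic compute provisioning: scale a VM pool out or
# in as measured utilization crosses thresholds. VMPool and rebalance are
# hypothetical illustrations, not any vendor's API.
from dataclasses import dataclass, field


@dataclass
class VMPool:
    """A pool of virtual machines serving one application tier."""
    vms: list[str] = field(default_factory=lambda: ["vm-0", "vm-1"])
    min_size: int = 2
    max_size: int = 20


def rebalance(pool: VMPool, avg_cpu_utilization: float) -> None:
    """Provision or de-provision capacity around a target utilization band."""
    if avg_cpu_utilization > 0.75 and len(pool.vms) < pool.max_size:
        pool.vms.append(f"vm-{len(pool.vms)}")  # scale out under load
    elif avg_cpu_utilization < 0.25 and len(pool.vms) > pool.min_size:
        pool.vms.pop()  # scale in when idle


pool = VMPool()
rebalance(pool, avg_cpu_utilization=0.9)  # peak of the day: adds a VM
rebalance(pool, avg_cpu_utilization=0.1)  # overnight lull: removes one
```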

The New Path: Software-Defined Data Center

You might wonder why the industry is suddenly abuzz about the move toward the software-defined data center. It is quite simple, actually: it is the single most critical building block for intelligent infrastructure. Let’s explore why.

Continued progress toward software-defined functionality across all resource tiers will remove some of the traditional IT obstacles. Understanding the importance of this capability, Brocade, Cisco, Juniper and VMware have all recently acquired software-defined networking technology, and numerous startups are also bringing products to market. Similarly, software-defined storage has seen large advances over the past year, with products being introduced by both established and start-up vendors. Abstracting functionality into software facilitates the flexible provisioning of IT resources against dynamic workloads. It makes common services and management APIs across heterogeneous hardware possible, allowing the pooling of previously siloed resources and the introduction of intelligent management across those pooled resources.
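As a rough illustration of what a common management API across heterogeneous hardware buys you, the sketch below defines one provisioning interface with two toy backends behind it, so a single placement routine can treat previously siloed arrays as one pool. The class and method names are assumptions for illustration, not any vendor’s SDK.

```python
# Sketch of a uniform provisioning interface over heterogeneous storage.
# ArrayABackend and ArrayBBackend stand in for two incompatible arrays that
# become one logical pool once both speak the same software-defined API.
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """Common interface the orchestration layer programs against."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str: ...

    @abstractmethod
    def free_capacity_gb(self) -> int: ...


class ArrayABackend(StorageBackend):
    def __init__(self) -> None:
        self._free = 10_000  # toy capacity figure

    def create_volume(self, name: str, size_gb: int) -> str:
        self._free -= size_gb
        return f"arrayA:/{name}"

    def free_capacity_gb(self) -> int:
        return self._free


class ArrayBBackend(StorageBackend):
    def __init__(self) -> None:
        self._free = 4_000  # toy capacity figure

    def create_volume(self, name: str, size_gb: int) -> str:
        self._free -= size_gb
        return f"arrayB:/{name}"

    def free_capacity_gb(self) -> int:
        return self._free


def provision(pool: list[StorageBackend], name: str, size_gb: int) -> str:
    """Place the volume on whichever backend has the most free capacity."""
    best = max(pool, key=lambda b: b.free_capacity_gb())
    return best.create_volume(name, size_gb)


print(provision([ArrayABackend(), ArrayBBackend()], "analytics-vol", 500))
```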

This does not mean that software-defined data center platforms necessarily require racks of identical hardware to enable unified management. Given the extensive existing infrastructure, many software-defined enterprise data centers will not look like the converged infrastructure of the public cloud. While Nutanix and SimpliVity, as well as VMware’s rumored “EVO” product, claim to deliver a converged platform on industry-standard servers (as long as you lock into one hypervisor and one server platform, that is), EMC’s ViPR software-defined storage platform is primarily designed to provide a common, centrally managed interface across incompatible EMC storage appliances. While the ViPR architecture will not deliver the cost benefits of public cloud architecture, it does serve to facilitate workload orchestration. More forward-thinking vendors provide the ability to abstract storage from hardware arrays as well as enable converged infrastructure, thereby facilitating unified management of data centers that incorporate both types of hardware. This is a huge benefit, as it does not require a “rip and replace” approach to existing hardware and eliminates the cost and risk inherent in that approach. True convergence requires support and workload orchestration across multiple platforms and hypervisors.

The Key Enabler: Intelligent Orchestration

Management tools such as CloudStack, OpenStack, System Center and vCenter Orchestrator become more effective when a fully converged infrastructure is in place. Integrations with software-defined data center stacks enable more complete automation of workload provisioning. These tools already provide some capability to ensure availability and redistribute workloads when capacity constraints are detected. However, I expect more advances on the horizon, including the ability to move distributed workloads based on infrastructure cost, to anticipate capacity requirements and pre-provision resources, and to manage responses to security threats.
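The redistribution logic those tools automate can be reduced to a simple pass: when a host exceeds its capacity, move work to the least-loaded host. The toy below is purely illustrative; real orchestrators expose far richer policy engines than this.

```python
# Toy orchestration pass: while a host exceeds its capacity, migrate its
# smallest workload to the currently least-loaded host. Hosts and workload
# sizes are hypothetical units, not a real orchestrator's data model.
def rebalance(hosts: dict[str, list[int]], capacity: int = 100) -> None:
    """hosts maps host name -> list of workload sizes (arbitrary units)."""
    for name, loads in hosts.items():
        while sum(loads) > capacity and len(loads) > 1:
            target = min(hosts, key=lambda h: sum(hosts[h]))
            if target == name:
                break  # nowhere better to move it
            loads.sort()
            hosts[target].append(loads.pop(0))  # move the smallest workload


hosts = {"h1": [60, 50, 10], "h2": [20], "h3": [30]}
rebalance(hosts)
print(hosts)  # workloads spread so no host exceeds its capacity
```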

Effective management of geo-distributed workloads will be a key element of intelligent infrastructure. Orchestration tools today have some capability for geo-distributed workload management, but often with significant limitations and restrictive requirements for the hardware deployed at each location. The intelligent infrastructure enterprise of tomorrow must be able to keep data proximate to users, distribute workloads across enterprise data centers and public cloud resources based on capacity utilization and dynamic cost analysis, and facilitate collaboration among globally distributed team members. Abstracting the data center into software greatly improves an enterprise’s ability to deploy consistent resources and management tools across distributed data centers that may be built on heterogeneous hardware platforms.
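One way to picture placement driven by capacity utilization and dynamic cost analysis is as a scoring function over candidate sites. In the sketch below, the sites, the weights, and the scoring formula itself are all assumptions chosen for illustration, not a prescribed policy.

```python
# Hedged sketch of geo-distributed placement: score each candidate site by
# current utilization, unit cost, and latency to the workload's users, then
# pick the lowest score. Weights (0.5 / 0.3 / 0.2) are arbitrary examples.
from dataclasses import dataclass


@dataclass
class Site:
    name: str
    utilization: float    # 0.0 - 1.0 of capacity in use
    cost_per_hour: float  # normalized unit cost
    latency_ms: float     # round-trip latency to the workload's users


def choose_site(sites: list[Site]) -> Site:
    """Lower score wins: penalize busy, expensive, and distant sites."""
    return min(
        sites,
        key=lambda s: 0.5 * s.utilization
        + 0.3 * s.cost_per_hour
        + 0.2 * (s.latency_ms / 100),
    )


sites = [
    Site("on-prem-east", utilization=0.82, cost_per_hour=0.4, latency_ms=12),
    Site("public-cloud-eu", utilization=0.35, cost_per_hour=0.7, latency_ms=95),
    Site("on-prem-west", utilization=0.55, cost_per_hour=0.4, latency_ms=60),
]
print(choose_site(sites).name)  # on-prem-west wins under these weights
```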

It is also important to look at data management itself in geo-distributed architectures. Content delivery network (CDN) technology, such as Akamai’s, is well developed for certain classes of workloads, but many core enterprise data sets still reside in one primary data center, perhaps with a copy at a disaster recovery site. Intelligent enterprises need platforms with active-active, geo-distributed volumes and/or file systems that let workloads with dynamic data sets span locations, so they can get the most from their data.

While the path toward a fully intelligent infrastructure is incomplete today, it is changing quickly. Enterprises that are early adopters of the software-defined data center will chart a best-practice approach to intelligent infrastructure for their peers to eventually follow. Understanding that the real value lies in the ability to capture and use widely varied and rapidly growing data sets, these early adopters will reduce overall enterprise cost and improve their offerings across every service and product.
