The Future of the Data Center in the Cloud Era

Summary

CIOs and IT leaders should not be migrating everything toward cloud services, nor should they be sitting back and waiting for the market to settle. A prudent data center strategy incorporates the best of both worlds, for the right reasons, at the right time. Here’s how.

Overview

Key Findings

  • Cloud services will evolve into an integral part of all IT strategies.
  • A multicloud approach will become the common strategy for the majority of enterprises.
  • On-premises or enterprise-owned data centers will continue, but applications and business demands will determine where compute resources come from.

Recommendations

  • Begin creating an agile, hybrid data center by incorporating simplified provisioning and elastic services where possible.
  • Use pace layering to segment workloads and determine their optimal future platform.
  • Treat cloud service providers as you would any external service provider — focus on services, service levels, availability goals, incident resolution and bypass, not just pricing.

Strategic Planning Assumption

A multicloud strategy will become the common strategy for 70% of enterprises by 2019, up from less than 10% today.

Analysis

With the proliferation of cloud services over the past three years, IT leaders have begun to ask Gartner a new set of questions regarding their data center strategy. Historically, data center strategies focused on keeping applications running, providing sustained and controlled growth, and doing so in a secure, fault-resilient manner. Long-term strategies were often developed in conjunction with aging data center assets and the need to find more floor space, to bring in more power and cooling, or to replace an outdated facility.

The introduction of potentially low-cost cloud services, coupled with ever-tighter controls on capital spending within IT organizations and ever-increasing demands from business units for new services, has driven IT leaders to rethink both short-term and long-term strategies. The question is no longer, “Should we use cloud services to support the business?” but “How and when can we use cloud services to empower the business?” By focusing on applications, workloads, risk and the short- and long-term needs of the business, a flexible data center strategy can emerge.

Applications

Gartner applies the term “pace layers” to the evolution of applications in an organization, mirroring the concept of “shearing layers” developed by Stewart Brand in his 1994 book, “How Buildings Learn” (see “How to Develop a Pace-Layered Application Strategy”).

In a building, architectural layers change at different “paces,” but they must be designed to work together so that the building can function effectively. We believe this same idea of pace layers can be used to build a business application strategy that delivers a faster response and better ROI, without sacrificing integration, integrity or governance.

Gartner has defined three categories (or layers) to distinguish the various business capabilities (and the corresponding applications) that a company needs to effectively deliver its business strategy, and to help IT organizations develop more appropriate application strategies:

  1. Systems of record — Usually found in business capabilities with a clear focus on standardization and/or operational efficiency, these are often subject to regulatory/compliance requirements.
  2. Systems of differentiation — Typically related to business capabilities that enable unique company processes or industry-specific capabilities, these sustain the company’s competitive advantage.
  3. Systems of innovation — New applications built on an ad hoc basis to address emerging business requirements or opportunities, these provide an experimental environment for testing new ideas and help identify the company’s next competitive advantage.

These layers or classifications can also be used as a baseline for determining application placement. Systems of record, for example, are those that are core to the success of the business and have been in place for many years. These systems are perceived to have immediate business impact if they fail (e.g., loss of revenue, disruption of critical process flows, risk of injury) or are highly regulated. In a high percentage of companies, these applications will continue to run on-premises or in a colocation or outsourced environment, with a focus on operational efficiency, high availability, standardization and compliance.

Systems of differentiation, on the other hand, provide competitive advantage, but could potentially be run either on-premises or hosted elsewhere (publicly or privately), depending on latency requirements, unique processes (e.g., data dependencies), service-level requirements, and upward and downward resource scalability. In fact, there may be no specific reason to run these applications on-premises, beyond “That’s what we’ve always done.”

Many newer systems of innovation could potentially be run in a public cloud environment, taking advantage of rapid deployments, processor and storage elasticity, and the financial benefits of right-provisioning. With many new mobile applications, customer adoption rates are unpredictable, and being able to rapidly provision new resources as needed (rather than prepurchasing them) is a huge benefit to capital-constrained IT organizations.
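
As a rough illustration of why right-provisioning matters under unpredictable adoption, the minimal Python sketch below compares prepurchasing fixed capacity sized to peak demand against paying an on-demand premium only for capacity actually used. All prices and demand figures are invented for the example, not drawn from this research:

    # Hypothetical cost comparison: prepurchased fixed capacity sized to
    # peak demand vs. an on-demand premium paid only for what is used.
    # All prices and demand figures are illustrative assumptions.
    demand = [10, 12, 15, 40, 90, 60, 30, 25]  # servers needed per month

    peak = max(demand)                   # owned capacity must cover the peak
    FIXED_COST_PER_SERVER = 100          # per server-month, prepurchased
    ELASTIC_COST_PER_SERVER = 150        # per server-month, on-demand premium

    fixed_total = peak * FIXED_COST_PER_SERVER * len(demand)
    elastic_total = sum(d * ELASTIC_COST_PER_SERVER for d in demand)

    print(f"fixed (sized to peak): {fixed_total}")    # 72000
    print(f"elastic (pay per use): {elastic_total}")  # 42300

Even at a 50% per-unit premium in this invented scenario, elastic capacity costs far less than capacity prepurchased for a peak that occurs only briefly; the break-even point shifts as the demand curve flattens.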

We also expect to see applications move among these layers as they mature, or as the business process shifts from experimental to well-established to industry standard. For example, a highly successful cloud-based mobile application may become critical to customer satisfaction, which would demand a more rigorous change process and performance guarantees, and might push that application into a different layer, and possibly a different location, as the sketch below illustrates.
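
To make the segmentation concrete, here is a minimal Python sketch of tagging applications by pace layer and deriving a default placement, with a periodic review that promotes a maturing application to a slower-moving layer. The attribute names and the layer-to-platform mapping are illustrative assumptions, not a prescriptive Gartner rule:

    # Minimal sketch of pace-layer tagging and placement. The mapping
    # below is a hypothetical default; real decisions also weigh latency,
    # data dependencies, service levels and regulatory constraints.
    from dataclasses import dataclass
    from enum import Enum

    class Layer(Enum):
        RECORD = "system_of_record"
        DIFFERENTIATION = "system_of_differentiation"
        INNOVATION = "system_of_innovation"

    DEFAULT_PLACEMENT = {
        Layer.RECORD: "on-premises / colocation / outsourced",
        Layer.DIFFERENTIATION: "private or public hosting, case by case",
        Layer.INNOVATION: "public cloud (elastic, rapidly provisioned)",
    }

    @dataclass
    class Application:
        name: str
        layer: Layer
        regulated: bool = False
        customer_critical: bool = False  # e.g., tied to customer satisfaction

    def placement_for(app: Application) -> str:
        # Regulated workloads default to the most controlled environment,
        # regardless of layer.
        if app.regulated:
            return DEFAULT_PLACEMENT[Layer.RECORD]
        return DEFAULT_PLACEMENT[app.layer]

    def review(app: Application) -> Application:
        # A successful innovation app that becomes critical to customer
        # satisfaction "graduates" to a slower-moving layer, which may
        # also mean a different location.
        if app.layer is Layer.INNOVATION and app.customer_critical:
            app.layer = Layer.DIFFERENTIATION
        return app

    mobile = Application("mobile-ordering", Layer.INNOVATION, customer_critical=True)
    print(placement_for(review(mobile)))  # -> private or public hosting, case by case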

Workloads

Workload placement depends more on latency and workflow than on physical location. Enterprises that have gone through data center consolidation projects have often found that solving the problem of too many sites reduced their operating expenses but also opened up severe performance issues around workloads and workflow. Creating regional sites for geospecific workloads, or sharing workloads across multiple sites to create the perception of 100% service continuity, is more important to long-term business health than consolidation for consolidation’s sake.

Placement of these workloads does not have to mirror the sites they came from, either. Hosting, colocation and cloud service providers are all viable options, depending on what problem you’re trying to solve. Many colocation providers also host cloud providers at their sites, which can open up unique options for data center managers looking to leverage newer services. Implementing platform as a service (PaaS) might have been considered higher risk in the past, but if the PaaS provider resides at your colocation provider’s site, contracting for a cross-connect from your suite to that provider can save significant network costs while providing a simple vehicle for implementing more cloud-based services over time.
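
One simple way to reason about latency- and region-driven placement is sketched below in Python; the site names, latency figures and data-residency constraint are hypothetical, and a production decision would fold in cost, service levels and compliance as well:

    # Sketch of latency- and region-aware placement across candidate
    # sites (colocation, hosting or cloud). All sites, regions and
    # latency numbers are hypothetical.
    sites = [
        {"name": "colo-east",   "region": "us", "latency_ms": 18},
        {"name": "cloud-east",  "region": "us", "latency_ms": 25},
        {"name": "colo-europe", "region": "eu", "latency_ms": 40},
    ]

    def place(workload_region: str, max_latency_ms: float) -> str:
        # Keep geospecific workloads in-region, then pick the
        # lowest-latency site that meets the service-level target.
        candidates = [s for s in sites
                      if s["region"] == workload_region
                      and s["latency_ms"] <= max_latency_ms]
        if not candidates:
            raise ValueError("no site meets residency and latency constraints")
        return min(candidates, key=lambda s: s["latency_ms"])["name"]

    print(place("us", max_latency_ms=30))  # -> colo-east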

Risk

While the industry as a whole talks about the value of new services, new delivery options and rapid change, many IT leaders are obligated to add a dose of reality to the pace of change and focus on practical matters. Yes, it’s important to enable the business through rapid adoption of new technologies and delivery models; but at the same time, it’s critical that IT leaders protect the business against actions that might unnecessarily impact the company’s reputation, expected service continuity or organizational efficiency.

Adding external services to your data center portfolio can be an effective means of mitigating risk, while at the same time moving toward a true enterprise-defined data center (EDDC) — one in which the physical location of assets is less important than the services delivered and service levels received (see Note 1). Different application types will reside where the delivery model best supports client expectations, risk, compliance, service continuity and regulatory issues.

This model moves IT away from the traditional, on-premises, full-control model toward a distributed computing model, which in turn changes the way IT must support those business applications. IT staff with the skills to assess multiple interrelated technologies (versus vertical technology stacks) will become critical for determining performance and configuration and for tracking key costs and metrics. At the same time, more risk-averse IT leaders can use the EDDC as a means to begin a managed migration toward external and internal IT service delivery over a time frame that fits their specific business needs.
