October 9, 2019 admin

Benchmarking: a must for cloud computing providers

Cloud providers have to benchmark their own service levels, because their clients are benchmarking them.

Cloud computing is surging. From almost nothing five years ago, it is now so prevalent that companies with more than 1,000 employees spent, on average, more than $6 million on the cloud last year. Seventy-one percent say they plan to increase that spending by more than 20 percent over the coming year.

With this kind of investment, companies tend to measure performance carefully to ensure they’re getting their money’s worth. This means companies that provide cloud services, colocation and connectivity have to know the level of service they’re providing, a process known as benchmarking.

Cloud service quality standards

Cloud providers often guarantee service standards, such as 99.999% uptime. Guarantees against slowdowns in applications or response time to resolve incidents are also common, often with a premium fee. Before you make this kind of offer, however, you have to be certain that you can meet the standards.
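To see why a guarantee like 99.999% is worth a premium, it helps to translate availability percentages into an actual downtime budget. A minimal sketch (a plain calculation, not any provider's tool):

```python
# Downtime allowance implied by an availability guarantee.
# "Five nines" (99.999%) sounds close to 99.9%, but the allowed
# downtime differs by two orders of magnitude.

MINUTES_PER_YEAR = 365 * 24 * 60  # non-leap year

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Return the yearly downtime budget, in minutes, for a given SLA."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.9, 99.99, 99.999):
    print(f"{sla}% uptime -> {downtime_minutes_per_year(sla):.2f} min/year")
```

Three nines allows almost nine hours of downtime a year; five nines allows barely five minutes, which is why the premium is justified only if you are certain you can deliver it.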

Benchmarks measure your performance against industry standards, as well as against the expectations and needs of your clients. The key benchmarks are uptime, the availability of cloud data, applications and resources, and lag (latency), the delay between issuing an instruction and the cloud executing it.

Clients are also demanding easy access to cloud resources from the entry point of their choice. To be competitive, cloud providers need to offer access from standard web browsers, lightweight browsers, mobile devices and remote applications.

One other parameter that is extremely important to cloud computing clients is security. Many industries carry security and privacy obligations toward their clients, and the cloud provider has to meet those obligations as well.

One way to meet these requirements is with an independent security audit against a standard such as ISAE 3402 or SSAE 16. Certification to one of these standards reassures your clients that you can protect their critical data and resources.

Tools

The big cloud platforms, such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud, offer their own built-in tools to measure uptime, response and lag. But of course, many clients want independent measures.

  • Gartner’s CloudHarmony platform compares performance of various cloud providers.
  • iPerf is an independent tool that measures maximum achievable bandwidth on IP networks, and reports bandwidth, loss and other parameters.
  • Geekbench 5 is a cross-platform tool to measure system performance.
  • AppNeta is a monitoring platform that measures uptime, downtime, application slowdowns, latency and more.
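Tools like these report raw numbers that you typically post-process yourself. As one hedged illustration: iPerf's modern implementation, iperf3, can emit its report as JSON (the `-J` flag), and a few lines of Python turn that into a throughput figure. The sample report below is a trimmed, hand-made stand-in for a real run, not actual iperf3 output:

```python
import json

# Sketch: extract receiver-side throughput from an iperf3-style JSON
# report (as produced by `iperf3 -c <server> -J`). The field names
# follow iperf3's report layout; the sample data here is synthetic.

sample_report = json.dumps({
    "end": {
        "sum_sent": {"bits_per_second": 945_000_000.0},
        "sum_received": {"bits_per_second": 941_000_000.0},
    }
})

def received_mbps(report_json: str) -> float:
    """Return receiver-side throughput in Mbit/s from the JSON report."""
    report = json.loads(report_json)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

print(f"Measured throughput: {received_mbps(sample_report):.0f} Mbit/s")
```

Logging these figures over time, rather than running a single test, is what turns a measurement into a benchmark.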

How to benchmark

  • Define the project — Each client has different needs, and that means different benchmarks and measurements will be important to each one. Help them determine the kind of computing they need to do in the cloud — downloading from a database, performing intensive computations or sharing files — which will determine the standards and metrics.
  • Decide on desired performance levels — Depending on which procedures are most important, and which have the greatest impact on achieving business goals, help the client set the needed performance levels for uptime, lag and response to issues.
  • Monitor in real time — Help the client set up the dashboards and alerts to monitor performance on the metrics that they need.
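The monitoring step above ultimately reduces raw health-check samples to the metrics on the client's dashboard, typically an uptime percentage and a latency percentile. A minimal sketch with synthetic probe data (a real monitor would collect these samples by probing the service at a fixed interval):

```python
# Each sample is (probe_succeeded, latency_ms); latency is None when
# the probe timed out. The data below is invented for illustration.
samples = [(True, 42.0), (True, 38.5), (False, None), (True, 51.2),
           (True, 40.1), (True, 39.8), (True, 44.4), (True, 37.9)]

def uptime_pct(samples) -> float:
    """Fraction of successful probes, as a percentage."""
    up = sum(1 for ok, _ in samples if ok)
    return 100.0 * up / len(samples)

def p95_latency_ms(samples) -> float:
    """95th-percentile latency over successful probes (nearest-rank)."""
    latencies = sorted(ms for ok, ms in samples if ok)
    idx = max(0, round(0.95 * len(latencies)) - 1)
    return latencies[idx]

print(f"uptime {uptime_pct(samples):.1f}%, "
      f"p95 latency {p95_latency_ms(samples):.1f} ms")
```

Alerts then become simple threshold checks against the performance levels agreed on in the previous step.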

Understand cloud performance

Setting the right benchmarks, and then making the changes needed to meet or exceed them, requires a clear understanding of computational performance in the cloud. That performance comes down to a few key parameters.

  • CPU, RAM and storage response: In a cloud environment, CPU, RAM and disk storage are in demand from more than one customer, all the time. This means that tasks or demands from different clients or users of the cloud get queued. The slower the response and performance of these resources, the longer the queue gets, and the slower the experience for the client. This is something the cloud provider needs to monitor, as it drives customer satisfaction.
  • Number of cores: The more cores a virtual machine can access at one time, the more the cloud can spread the computational load. Cloud providers achieve better performance by matching the number of threads or cores an application can use to the number of cores the virtual machine can access, though that is not always possible given the limitations of virtualization.
  • Storage: Storage is usually the bottleneck and limiting factor on the performance of most cloud computing, because of the time required to read from and write to a physical disk.
  • Location of resources: Latency is mostly a function of the distance between the client and the physical cloud server. Communication from a client in, say, New York to a provider in, for example, San Francisco cannot go faster than the speed of light. The time can be extended by repeated calls and responses between the client and server, and by a circuitous route along the way.
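The speed-of-light limit in the last point can be made concrete. Light in optical fiber travels at roughly two-thirds of c, about 200,000 km/s, and the New York to San Francisco great-circle distance is roughly 4,130 km; both figures are approximations, and real routes are longer and add switching delay on top:

```python
# Physical lower bound on latency between two cities, ignoring routing
# detours and equipment delay. Figures are rough approximations.

FIBER_SPEED_KM_PER_S = 200_000   # light in fiber, ~2/3 of c
NY_SF_KM = 4_130                 # approximate great-circle distance

one_way_ms = NY_SF_KM / FIBER_SPEED_KM_PER_S * 1000
round_trip_ms = 2 * one_way_ms

print(f"one-way floor: {one_way_ms:.1f} ms, "
      f"round-trip floor: {round_trip_ms:.1f} ms")
```

The floor is around 20 ms one way, while observed coast-to-coast round trips are typically well above that, which is the gap created by circuitous routes and repeated client-server exchanges.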

Which is most important?

The answer depends on the client's needs: speed, reliability and throughput; the mix of upload versus download and read versus write; and the number of users.

Take the time at the outset to determine the client’s real needs and expectations. And remember to turn to us at Broadline Solutions for the support and information you need to add value for your clients.