A cloud computing market offers value in two main ways. The first is by offering a diverse catalog of services, differing in memory, CPU, IOPS, reliability, security, compliance, latency, and so forth, so that a customer might identify a best fit for their unique needs. The second is by enabling price comparison among comparable offers.

In the first case, the value offered by a cloud market is the extent to which it provides at least one service matching a customer's requirements. In the second case, if there were a well-known, consistent price leader, a market would offer no particular value. However, although results vary depending on who conducts the survey, the provider with the best price-performance is often not the intuitive choice. Moreover, periodic price reductions and the rise of dynamic pricing for cloud resources, through mechanisms such as Amazon Web Services' spot instances, introduce volatility into the market. The least expensive provider right now might not be the least expensive in an hour, or tomorrow.

For some applications, data gravity effects imply that the cost of migrating a massive amount of data can reduce the opportunity to exploit such short-term pricing dynamics. These costs include cloud data transfer charges as well as hidden costs of delay or missed customer experience SLAs. But even for these data-intensive applications, a market offering services from multiple cloud providers operating out of a single colocation facility could reduce or eliminate the data transfer costs associated with provider switching. And, whether colocated or distributed, other workloads can potentially benefit as well: those that are compute intensive but not data intensive, or even pure cloud storage workloads where the cost savings exceed the data transfer costs. What is the value of a market under such assumptions?

While there are practical concerns such as provider reputation, compliance, personal relationships, developer skills, and technical considerations, the theory of order statistics helps quantify the value of such a market. To keep things simple, let's assume that there are n providers, each offering similar or identical services with prices that vary dynamically, uniformly distributed between $0.00 and $1.00 per hour for some resource. By locking in to a single provider, any one of the n, the charge might be, say, 32 cents in one hour, 99 cents the next, and 17 cents the hour after that. Over time, however, the expected charge would be 50 cents, the expected value of a random variable uniformly distributed between 0 and 1 dollar.
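In symbols, with the hourly price modeled as a random variable X uniform on [0, 1]:

\[
\mathbb{E}[X] = \int_0^1 x \, dx = \frac{1}{2}.
\]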

With a second provider to choose from, when the first provider's rate jumped to 99 cents, the other provider might be at only, say, 15 cents. According to the theory of order statistics, the expected value of the minimum price across n providers, where each price is independent and uniformly distributed between $0.00 and $1.00, is 1/(n+1).
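The formula follows from a standard derivation. The minimum of n independent prices exceeds x only if every one of them does, so:

\[
P\left(\min_i X_i > x\right) = (1-x)^n,
\qquad
\mathbb{E}\left[\min_i X_i\right] = \int_0^1 (1-x)^n \, dx = \frac{1}{n+1},
\]

using the identity that the expected value of a nonnegative random variable equals the integral of its survival function. With two providers this gives 1/3, or about 33 cents per hour.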

A simple Excel Monte Carlo simulation bears this out. As the number of providers increases from one to ten, the expected value of the minimum falls from about 0.50 (i.e., 1/(1+1)) to about 0.091 (i.e., 1/(10+1)). Since this is a simulation, the actual results (solid blue line) will not exactly match the theoretical values (dotted red line).
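The same experiment is straightforward to reproduce outside of Excel. Here is a minimal Python sketch, using NumPy with an arbitrary seed and trial count, that estimates the expected minimum price for markets of one through ten providers:

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed so the run is reproducible
trials = 100_000                 # simulated pricing rounds per market size

for n in range(1, 11):
    # Draw n independent uniform prices on [0, 1] for each round,
    # then take the cheapest provider in each round.
    prices = rng.uniform(0.0, 1.0, size=(trials, n))
    best = prices.min(axis=1)
    print(f"n={n:2d}  simulated E[min]={best.mean():.3f}  "
          f"theoretical 1/(n+1)={1 / (n + 1):.3f}")
```

With 100,000 trials per market size, the simulated means should land within a fraction of a cent of 1/(n+1).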

[Figure: Cloud Computing Simulation, showing the expected value of the best price as the number of providers increases.]

For a cloud computing market intended to offer quantifiable financial value to customers, there are a few lessons to be drawn from this analysis. First, there must be some similarity between offers, or at least the ability to normalize them for price comparison. Second, the workload must be conducive to provider switching: either it must not involve a large quantity of data, or the data must be located in a cloud-neutral location such as a colocation facility offering services from multiple cloud providers. Third, prices must exhibit some volatility, and provider prices must be independent; they can't all rise and fall in unison. Interestingly, such a market need not be very large. Four providers deliver 60% of the theoretical maximum cost reduction, and nine providers deliver 80% of the expected cost reduction of an infinitely large market, as the arithmetic below shows.
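These percentages follow directly from the 1/(n+1) formula: a single provider costs 0.50 per hour on average, and an infinitely large market drives the expected price toward zero, so the maximum achievable saving is 0.50. A few lines of Python make the arithmetic explicit:

```python
# Expected price with n providers is 1/(n+1). A single provider costs 0.50
# on average, and an infinite market drives the expected price toward 0,
# so the maximum achievable saving is 0.50 per hour.
for n in (2, 4, 9):
    saving = 0.5 - 1 / (n + 1)
    print(f"n={n}  expected price={1 / (n + 1):.3f}  "
          f"fraction of maximum saving={saving / 0.5:.0%}")
```

Four providers cut the expected price to 0.20, a saving of 0.30, or 60% of the 0.50 maximum; nine providers cut it to 0.10, or 80%.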

One final caveat. This simple analysis used uniform distributions and a wide dispersion in prices. If prices ranged from, say, ten dollars to eleven dollars rather than from zero to one dollar, the absolute benefit would remain the same, but the relative percentage of savings would be substantially less. If, instead of being uniformly distributed, prices varied according to, say, a normal distribution with a small standard deviation, the savings would not be as large. However, while the degree of savings may vary, as long as the cost to participate in such a market is less than the savings, there is a net gain to the customer.
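A quick simulation illustrates both effects. The parameters below (a four-provider market, a $10 to $11 uniform range, and a normal distribution with mean $0.50 and standard deviation $0.05) are illustrative assumptions, not figures from the analysis above:

```python
import numpy as np

rng = np.random.default_rng(0)
trials, n = 100_000, 4  # hypothetical market of four providers

cases = {
    "uniform $0 to $1": rng.uniform(0.0, 1.0, (trials, n)),
    "uniform $10 to $11": rng.uniform(10.0, 11.0, (trials, n)),
    "normal, mean $0.50, sd $0.05": rng.normal(0.50, 0.05, (trials, n)),
}

for name, prices in cases.items():
    single = prices[:, 0].mean()      # expected cost locked in to one provider
    best = prices.min(axis=1).mean()  # expected cost buying from the cheapest
    print(f"{name:30s}  single={single:6.3f}  best={best:6.3f}  "
          f"absolute saving={single - best:5.3f}  "
          f"relative saving={(single - best) / single:.1%}")
```

The $10 to $11 case yields roughly the same 30-cent absolute saving but only about a 3% relative saving, while the narrow normal distribution shrinks both the absolute and the relative benefit.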

Joe Weinman
