In a recent Prolifics whitepaper comparing cloud providers on price for performance, we captured some very interesting observations.
Firstly, these providers are all trying to stand out positively as leaders in one way or another. At the same time, they're also trying to ensure they don't stand out negatively or fall behind. This results in all the larger providers having good, responsive marketing teams, but it also means that it's not unusual for them to announce new capabilities as alphas or betas before the functionality is entirely ready for prime time.
They are also adopting lean and fail-fast practices so that they can get functionality out there quicker and receive consumer feedback. This leads to an interesting dilemma. If you become an early adopter, you have a chance to influence the direction a service option takes, but you could also find yourself on a deprecated option if it fails to deliver the operational and economic benefits the provider expected. Alternatively, if you hold back and wait, you get less of a chance to influence the direction, and you may be unable to help prevent the option from being abandoned. Let's call this the functionality arms race.
Another item we observed is that all the major providers use what are called "T-shirt sizes" for virtual servers. These usually have three main dimensions: "compute" means the number of virtual CPUs; "storage" means the amount of virtual disk space; "network" means the amount of virtual data traffic (I/O). Some providers also offer options for bare-metal (non-virtualized) storage, and for spinning disk versus solid-state storage.
Most providers go for a one-size-fits-all approach that balances these factors, but some organizations such as Amazon Web Services (AWS) offer more options: heavy compute with less storage for compute-intensive workloads, heavy storage for data-retention workloads, or heavy network for services such as routers, load balancers, and proxy servers.
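To see how a T-shirt-size comparison plays out in practice, here is a minimal sketch that normalizes a few instance sizes by price per vCPU-hour and price per TB-hour. The instance names, sizes, and hourly prices below are entirely made up for illustration; they are not real quotes from AWS or any other provider.

```python
# Sketch: normalizing hypothetical "T-shirt" instance sizes for comparison.
# All names, sizes, and prices are invented examples, not real provider data.

instances = [
    # (name,            vCPUs, storage_gb, hourly_price_usd)
    ("balanced.medium",     4,        100,             0.20),
    ("compute.large",      16,        100,             0.64),
    ("storage.large",       4,       2000,             0.56),
]

for name, vcpus, storage_gb, price in instances:
    per_vcpu = price / vcpus                    # $/vCPU-hour
    per_tb = price / storage_gb * 1000          # $/TB-hour
    print(f"{name:>16}: ${per_vcpu:.3f}/vCPU-hr, ${per_tb:.3f}/TB-hr")
```

Even this toy comparison shows why the workload-optimized families exist: the compute-heavy size wins on price per vCPU, while the storage-heavy size wins on price per TB, and neither number tells you anything about the underlying physical hardware.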
What they usually don't tell you is that none of them publishes its physical-to-virtual loading factors or its active workload tolerance. So you can compare sizes and prices all day long, but you still have no idea whether the offerings have anything in common.
I would love to be proven wrong, but this is never going to change. Any provider that publishes these factors, no matter how good they are, will be immediately attacked by the organizations that don't publish theirs, claiming they are better. Let's call this the performance arms race.
Now there is also a third arms race, which is price. Each of the cloud providers is investing in automation and processes. Some are designing and building their own specialty hardware. Some of the lower-price providers are shying away from much of the hoopla and trying to operate leaner with smaller overhead costs, passing the savings on to the customer.
Cloud is very much a land grab, in that the larger providers are buying future business with discounts and incentives today. This makes sense for the ones with deep pockets, since they all make it easy to get onto their cloud, but none of them makes it easy to move out. So this raises the question: do you want the provider that is investing in innovation, redundancy, and security; the one with the larger support team and better customer service; or the cheaper one?
The answers to these questions are easy. The hard part is working out where a given provider stands on these dimensions. Without trying them all, how do you know if the one you pick is good, or just good enough?
Wow, this cloud stuff is complicated. I have not even touched on serverless and "pay by the second" architectures, and my head is already spinning. Well, here is my shameless plug: get some help and call Prolifics.
About the Author
Mike has been with Prolifics since 2006. He has over 20 years of IT experience covering project management, enterprise architecture, IT governance, SDLC methodologies, and design/programming in a client/server and Web-based context.