Agile, On-Demand and Commitment Cloud Prices


Yesterday, I worked with a colleague to determine costing for their newly deployed Kubernetes cluster on AWS (Walmart must not be a customer...). The math was mostly straightforward:

  1. Get cost of instance by size, multiply by number of instances and 720 hours per month;
  2. Add EBS block storage;
  3. Add ELBs;
  4. Add data traffic out;
  5. Add S3 storage.

Repeat for each environment, and you have your answer.
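The steps above can be sketched as a small function. All prices here are illustrative assumptions, not current AWS rates — plug in the rates for your region and instance types:

```python
# Hypothetical monthly cost estimate for one environment, following the
# five steps above. Every default price below is an assumption for
# illustration, not a quoted AWS rate.
HOURS_PER_MONTH = 720

def monthly_cost(instance_price_hr, num_instances,
                 ebs_gb, ebs_price_gb=0.10,
                 num_elbs=1, elb_price_hr=0.025,
                 egress_gb=0, egress_price_gb=0.09,
                 s3_gb=0, s3_price_gb=0.023):
    instances = instance_price_hr * num_instances * HOURS_PER_MONTH  # step 1
    ebs = ebs_gb * ebs_price_gb                                      # step 2
    elbs = num_elbs * elb_price_hr * HOURS_PER_MONTH                 # step 3
    egress = egress_gb * egress_price_gb                             # step 4
    s3 = s3_gb * s3_price_gb                                         # step 5
    return instances + ebs + elbs + egress + s3

# Example: 3 instances at $0.20/hr, 300 GB EBS, 1 ELB, 100 GB egress, 50 GB S3
print(round(monthly_cost(0.20, 3, 300, egress_gb=100, s3_gb=50), 2))
```

Run it once per environment and sum the results, and you have your estimate.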

By far, the biggest cost line item is the first: instances. They also have the biggest variety, from t2.nano at just $0.0059/hour all the way up to p2.16xlarge for $14.40/hour.

One way to reduce costs is AWS commitments: sign up for a year, get around 30% off the cost; commit to 3 years, save more; pay part or all up front, save even more.
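The trade-off hiding in that discount is worth a quick back-of-the-envelope calculation. Using an assumed $0.20/hour on-demand rate and the ~30% discount above (both placeholders, not quoted rates), a no-upfront 1-year commitment only wins if you actually run the instance long enough:

```python
# Sketch of the commitment trade-off. A 1-year, no-upfront commitment
# bills every month whether or not you still need the instance.
HOURS_PER_MONTH = 720
on_demand_hr = 0.20   # assumed on-demand rate, $/hour
discount = 0.30       # assumed 1-year reserved discount

on_demand_month = on_demand_hr * HOURS_PER_MONTH   # $144.00/month
reserved_month = on_demand_month * (1 - discount)  # $100.80/month

# Months of actual use after which the commitment beats on-demand:
#   12 * reserved_month <= n * on_demand_month
break_even_months = 12 * (1 - discount)
print(break_even_months)  # 8.4 -- drop the instance earlier and you overpaid
```

In other words, at a 30% discount you must keep the instance for more than 8.4 of the 12 months, or the "savings" turn into a loss.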

At first blush, those commitments look like a great deal, especially if you know you will be operating for a year or more. Who wouldn't want to save 30-40% of computing costs?

Why not commit, then?


In software development, we have moved from waterfall-developed monolithic apps deployed on a monthly or quarterly cycle to agile-developed microservices (or nano-service functions) deployed on a continuous basis. We changed how we work because it helps us adapt quickly to a dynamic environment at a lower risk and lower cost.

The fundamental problem with waterfall isn't that it is big and heavy (although that is true as well). It is that it is rigid in a fluid world. We simply have no way of knowing, up front, what the world will look like in 6-12 months or more, and how our service will need to adapt. For that matter, we cannot even predict reliably what the application we build will look like. Waterfall makes a lot of assumptions and then commits to those assumptions.

Software development simply is too complex; it requires iterative adjustment. 

When we start a new deployment, we often make the same "waterfall" mistake: we assume we fully understand the needs of our complex system before it begins real usage, and we then commit to that assumption.

In order to remain agile and adaptive, we need to avoid those commitments, at least at the beginning, when we assume but do not know what our needs will be.

One big advantage of containerized microservices is that we do not need the largest instances available, just ones big enough to handle several services. Since the application is separated into these microservices, we need only account for the most demanding individual microservice, not the grand sum. When total resource requirements exceed the total available, just add another instance (or let automatic scaling do it for you).
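To make that concrete, here is a toy sizing sketch. The service names and resource numbers are made up for illustration; the point is that the instance size is driven by the single most demanding microservice, while total demand only drives the instance *count*:

```python
import math

# Per-microservice vCPU requirements (hypothetical numbers).
services = {"api": 1.5, "auth": 0.5, "billing": 2.0, "search": 1.0}

# The instance must fit the most demanding single service...
largest = max(services.values())      # 2.0 vCPUs -> any instance >= that works

# ...but the total demand just determines how many instances we run.
instance_vcpus = 4                    # assumed instance size
total = sum(services.values())        # 5.0 vCPUs overall
instances_needed = math.ceil(total / instance_vcpus)

print(largest, instances_needed)      # 2.0 2
```

Grow the total by another 3 vCPUs and you add a third instance; you never need to jump to a bigger instance type.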

In the end, what was my advice to my colleague?

  1. Start with on-demand only.
  2. Keep close track of average and maximum resource requirements over a period of several weeks, both by host and by microservice.
  3. Once you know, from real experience, what the requirements are, then make the commitment, sized to an ordinary instance that matches your measured baseline.
  4. Do not leave lots of room to scale the services up. Instead, prepare to scale them out by adding more instances. Scaling 3 instances of size m4.xlarge to 3 instances of m4.2xlarge doubles your bill; adding a 4th m4.xlarge adds 33% to your bill.
  5. If your application cannot scale horizontally, take a hard look at the design.
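The arithmetic behind point 4 is easy to check. Assuming an m4.xlarge at $0.20/hour (a placeholder rate) and the usual AWS pattern of the price roughly doubling per size step:

```python
# Illustration of point 4: doubling instance size vs adding one instance.
HOURS = 720
xlarge_hr = 0.20                              # assumed m4.xlarge rate

base = 3 * xlarge_hr * HOURS                  # 3 x m4.xlarge
scaled_up = 3 * (2 * xlarge_hr) * HOURS       # 3 x m4.2xlarge (double rate)
scaled_out = 4 * xlarge_hr * HOURS            # 4 x m4.xlarge

print(scaled_up / base - 1)                   # scale up: +100% on the bill
print(scaled_out / base - 1)                  # scale out: +33% on the bill
```

Same extra headroom either way, at a third of the added cost — which is why the advice is to design for scaling out.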

How can you apply this to your use case? The principles apply everywhere; the devil is in the details. Ask us to help.