Cloud infrastructure has changed so much over the past decade that it’s almost hard to believe. One of my first tech jobs had a server room in a closet, and we took care of everything ourselves, from the hardware and operating system to the network, and even the CD drives we used to update our software. Today, we rely more and more on the cloud itself: we don’t think about the servers underneath, we don’t think about the networking, and there are no physical drives to interact with. Our lives are better as a result.
Interestingly, by removing ourselves from so many tangible parts of these systems, we are much further away from the true cost of running this kind of infrastructure. The cloud is to tangible hardware what a credit card is to cash: it doesn’t cost more, but it’s much harder to tell where all the money went when we’re not there to see it spent.
Kubernetes is still a new paradigm
Kubernetes has taken things to the next level. We may have provisioned workloads with automation before Kubernetes, but as the container orchestrator became the way for us to interact with our cloud infrastructure, everything became a bigger black box.
The beauty of Kubernetes is that we can hand it a workload and let it worry about scaling up and down based on demand. The downside is that if we misconfigure our workloads before pushing them to Kubernetes, it’s easy to incur exorbitant costs, because Kubernetes does exactly what we (wrongly) told it to do.
The cost of Kubernetes
Kubernetes itself costs very little to run, as it is primarily a control plane orchestrating other workloads, but every new paradigm comes with a cost. In Kubernetes, when resource requests and limits are not set correctly, you will either spend more than you need (because workloads are overprovisioned and Kubernetes scales things more than necessary) or suffer underperformance (as workloads run out of memory or become CPU constrained). It can also lead to over- or under-prioritized workloads, as Kubernetes does its best to make sense of what is passed to it.
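As an illustration, the requests and limits in question live on each container spec. This is a minimal sketch with hypothetical names and values; the right numbers depend on observing what your workload actually uses:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api        # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
      - name: api
        image: example/api:1.0   # hypothetical image
        resources:
          requests:
            cpu: 250m            # what the scheduler reserves; set too high, nodes sit idle and you overspend
            memory: 256Mi
          limits:
            cpu: 500m            # the container is throttled above this
            memory: 512Mi        # the container is OOM-killed above this
```

Requests drive scheduling and therefore spend, while limits cap runtime usage; setting requests far above real usage is the overprovisioning failure mode, and setting limits below real usage is the underperformance one.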
Without good tooling for visibility into the cost of a cluster or the cost of a workload, it is easy for a developer (especially in an environment with mature service ownership) to wildly over-provision things, leaving the platform team without the insight to deal with out-of-control costs until it’s too late and the bill has already grown.
Too often I hear stories about a team finding out only when the cloud bill finally arrives that they have a misconfigured workload and costs are skyrocketing.
Fairwinds Insights provides the visibility needed to control costs
Fairwinds Insights, a Kubernetes guardrails platform, provides a single view across all your clusters to see which clusters and workloads are costing you the most. You can also track trends over time to see where things got out of control and how to fix them.
Good tooling makes for great teams. The promise of the cloud is that your spend can actually match your needs. Don’t let things get out of control before you take control of your Kubernetes infrastructure with Fairwinds Insights.
Watch how Clover uses Insights to control costs.
*** This is a syndicated blog from the Security Bloggers Network of Fairwinds | Blog written by Kendall Miller. Read the original post at: https://www.fairwinds.com/blog/find-kubernetes-cost-blind-spots