We hear it all the time: Kubernetes is great, but it’s complicated. The consensus, though, is that despite the complexity, Kubernetes is worth the effort. We recently had a panel discussion with Fairwinds and Buoyant, creators of Linkerd, a service mesh for Kubernetes. We discussed real-world pitfalls and success stories with Kubernetes, drawing on our experience in this space and answering questions from attendees.
The first thing you need to do is recognize that some of the Kubernetes documentation for people who are just getting started is very iffy. You really need to go talk to people who’ve done this before and start learning some of the gotchas. If you’re brand new to Kubernetes, get a firm understanding of containers in general and how they work. The danger with abstraction is that it leaves you vulnerable when things go wrong, because you don’t realize that they are going wrong or why. And things do go wrong.
The second recommendation is to think about the ingress problem, as well as security within your cluster and observability. These can be harder than you expect, so look into the tools that are available for them. Don’t try to do all of that on your own.
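To make the ingress piece concrete, here is a minimal sketch of an Ingress resource that routes external HTTP traffic to a Service. The names, host, and ingress class are illustrative, and it assumes you already have an ingress controller (such as ingress-nginx) installed in the cluster:

```yaml
# Minimal Ingress routing external traffic for app.example.com to a Service.
# Names (my-app, my-app-svc) and the ingress class are hypothetical examples.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx   # requires a matching ingress controller in the cluster
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-svc
                port:
                  number: 80
```

Note that the Ingress object itself does nothing without a controller to act on it, which is exactly why off-the-shelf tooling beats rolling your own here.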
One important concept to understand is running multiple replicas. People sometimes forget that running replicas and scaling them up as traffic grows is one of Kubernetes’ big capabilities. We’ve seen people struggle with this, either because the app isn’t built to run in parallel to begin with, or because once you do run replicas in parallel you need to think about how many you could lose and still serve traffic; it isn’t a 1:1 relationship.
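As a sketch, running multiple replicas is just one field on a Deployment, assuming your application is stateless enough to run in parallel (names and image below are hypothetical):

```yaml
# Deployment running three identical copies of the app behind one Service.
# Only works well if the app tolerates parallel instances (no local state,
# no single-writer assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3          # three pods; Kubernetes keeps this count reconciled
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # illustrative image
```

The scheduler spreads these pods across nodes, which is where the fault-tolerance benefit comes from.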
Another important consideration is setting pod disruption budgets and thinking about how you want to scale your workloads up and down as traffic fluctuates. It’s also important to understand how to configure the Horizontal Pod Autoscaler (HPA) and how HPA interacts with the Vertical Pod Autoscaler (VPA).
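A pod disruption budget is a small resource in its own right. This sketch (names and values hypothetical) keeps at least two replicas of the Deployment above alive during voluntary disruptions like node drains and upgrades:

```yaml
# PodDisruptionBudget: during voluntary disruptions (e.g. kubectl drain),
# the eviction API will refuse to take the workload below minAvailable.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2          # never evict below 2 running pods
  selector:
    matchLabels:
      app: my-app          # must match the Deployment's pod labels
```

One interaction worth knowing up front: HPA and VPA should generally not both act on the same resource metric (such as CPU) for the same workload, or they will fight each other.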
You also need to consider how you’re going to handle load with your application, including what kind of load you expect and which metrics you care about in terms of determining your load. Then you need to think about how you want to scale and what kind of resources your workloads are going to need. In terms of the details of your workload itself, there’s a lot more that you need to consider when you’re deploying into Kubernetes. A great starting place is to review the 12-factor methodology of application development to see how your application stacks up and get ideas of what to change or improve to better your chances of successful deployment to Kubernetes.
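Tying the metrics question to scaling, a minimal HPA sketch targeting the hypothetical Deployment above might look like this. It assumes the containers have CPU requests set, since utilization is measured relative to requests:

```yaml
# HorizontalPodAutoscaler scaling on average CPU utilization.
# Utilization is a percentage of the containers' CPU *requests*,
# so this does nothing sensible if requests are unset.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% of requested CPU
```

CPU is only the default starting point; for many services a custom metric (requests per second, queue depth) is the load signal you actually care about.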
There are a lot of great resources. For example, learnk8s.io has some really good low-level information. YouTube is a huge resource for learning about containers. There are well-established YouTube channels that dive deep into these topics. When everything else fails, pay a little bit of money for a course with a trusted resource. It’s worth it to get a really good baseline on the topic.
There are lots of experts out there that offer services or have learning materials available. And if you have the budget for it, hire a company to run your clusters for you. It will make it all a lot easier. And a tip: if your goal is to accomplish something with a Kubernetes platform, don’t go and look at the source code. Look at it from the user perspective and figure out how it all works.
If you look at Kubernetes as a whole, it’s massive. And yes, it’s complicated and it’s powerful. There are a thousand knobs everywhere. There are a million open source and paid projects that work with Kubernetes, alongside Kubernetes, on top of Kubernetes, underneath Kubernetes. In other words, there is a massive ecosystem, but there are also a lot more resources for the happy path than there were even just a few years ago. There are very good managed service offerings that have a lot more functionality built into them, so you don’t have to do it yourself. So while it’s complicated, there are more straightforward paths through it than there used to be.
As practitioners, we take a lot of the complicated knobs in Kubernetes for granted. There’s a lot going on under the hood, such as everything the API server is doing and all the controllers that are running. It can be kind of overwhelming. Kubernetes is definitely quite complicated under the hood. And a lot of educational resources say “containerize your app and throw it into a deployment and voila.” That’s not really helpful advice, because there’s a lot to think about to get your apps and services running in a cluster reliably and securely. Figuring out what kind of resources you need, how to expose your application properly, all the security settings you need to configure — putting all of those things together can be pretty complicated.
We have played in all the sandboxes, running services for our customers. We’ve always been big fans of Google Kubernetes Engine (GKE) in the past, because Google wrote Kubernetes in the first place and generally shipped new features faster, but that’s not really the case anymore. When weighing cloud providers, it’s not the Kubernetes offering that differentiates them anymore. They’re all Kubernetes and they’re all relatively good enough. Your decision now should be based on the ecosystem that you want to be in. What other cloud services do you need in your environment? You may not only need Kubernetes.
For example, if you’re a .NET shop that already has an Active Directory setup, maybe you should go to Microsoft’s Azure, because you can port all that over and take it with you. It will integrate more seamlessly. For some, there’s value there if that’s the ecosystem you’re playing in. Red Hat OpenShift is similar — if you are familiar with and use that ecosystem, OpenShift may be a great place for you. Some of it also depends on who your company already has a relationship with, and where you can get free credits to run your K8s. For example, if you have a million dollars in Amazon Web Services (AWS) credits, it makes sense to use AWS as your cloud provider.
If you’re coming at this question from an application developer’s perspective, pick a cloud provider that works for you and use it. Do not try to go off and be a Kubernetes infrastructure manager on your first day. But the next question is: how are you deploying into Kubernetes? How are you configuring your deployments and the things that you’re installing in Kubernetes? This is a much bigger problem, because the defaults are not awesome.
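The defaults that bite most often are unset resource requests and limits and a permissive security context. As a sketch, a container spec fragment that tightens both might look like this (the values are illustrative starting points, not recommendations for your workload):

```yaml
# Pod template fragment tightening defaults that Kubernetes leaves open.
# Names, image, and resource values are hypothetical examples.
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0.0
      resources:
        requests:                 # unset requests leave the scheduler guessing
          cpu: 100m
          memory: 128Mi
        limits:
          memory: 256Mi           # cap memory so a leak can't take down the node
      securityContext:
        runAsNonRoot: true                # by default, containers may run as root
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
```

None of this is enforced out of the box, which is exactly why the configuration question is a bigger problem than the cluster question.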
One thing you can do is pick other people’s brains and use their experience as the baseline for a lot of your information. There’s also a lot of value in standardizing across your group of people and deciding how you think it makes sense to do it. And you will also learn a lot by trying it and finding out what happens. Ideally, you’re not doing it with your whole production workload, but it is important to get experience working within the Kubernetes platform for your particular workloads.
Many enterprise organizations are deploying workloads into production today, with plans for more in the months and years ahead. If your team is ready to implement Kubernetes and other cloud-native technologies, research cloud native solutions that provide guardrails to help you get started and deploy your apps and services using Kubernetes as your underlying platform.
*** This is a Security Bloggers Network syndicated blog from Fairwinds | Blog authored by Stevie Caldwell. Read the original post at: https://www.fairwinds.com/blog/k8s-experts-pitfalls-success-stories