Kubernetes is an open-source container orchestration platform widely used for running microservices. It is helpful when we want to deploy containerised applications, automate their management, and scale them as needed.
Running a single microservice in a container is nearly always safer than running several processes together in the same VM. Whenever we launch a pod in Kubernetes, that pod is hosted on a node. There are two types of nodes in Kubernetes: the master node (also called the control plane) and the worker node. The application code runs inside the cluster on the worker nodes, which host the pods and report resource usage back to the master node. As the name suggests, the master node controls and coordinates all the worker nodes. Together, the master node and the worker nodes form a cluster.
Kubernetes is mainly used in cloud environments, and its security concerns revolve around clusters, containers, cloud infrastructure, and code. Accordingly, Kubernetes cluster security is built around container security, cluster security, access control, and application security in the cloud environment.
Scan the entire system frequently to recognise vulnerabilities and misconfigurations so they can be fixed promptly. The Kubernetes API and pod container security are also essential aspects of Kubernetes security; they are central to commercial security solutions and help keep Kubernetes resilient, scalable, and able to protect valuable information.
As organisations move towards the cloud, Kubernetes is often the platform they move to, and its use across the industry keeps increasing. As per the latest annual survey by the Cloud Native Computing Foundation (CNCF), 83% of organisations use Kubernetes, and 23% of those run more than 11 Kubernetes clusters in production.
As the use of Kubernetes increases, security issues and concerns related to Kubernetes clusters are also increasing rapidly. Red Hat surveyed 500 DevOps professionals to assess Kubernetes security and adoption. That survey concluded the following:
Various security measures must be put in place when creating a defence-in-depth strategy for Kubernetes security. The same approach is followed and recommended for cloud-native security, which divides its security posture and techniques into the following four layers:
The 4C security model stands for cloud, cluster, container and code. Together, these four layers secure every step from the start of the development phase to the end of the deployment phase.
Using this 4C security model, we will discuss the Kubernetes security best practices for each layer individually, starting with the cloud layer.
The cloud layer represents the server infrastructure. Various services are involved when we set up our servers with a cloud service provider (CSP). The provider handles concerns such as managing the operating system, platform, and network, but consumers are still responsible for securing and monitoring their data. See how our cloud penetration testing services can support your adherence to best practices.
The “Secrets” object is used in Kubernetes for storing sensitive data such as usernames, passwords, tokens, and keys. Secrets help reduce the potential attack surface: because they are decoupled from the pod life cycle, pods can consume private data without it being baked into images or pod specifications. Network traffic should also be monitored, and firewalls used to identify potentially insecure traffic. Because it is essential to protect Secret resources, Kubernetes supports encrypting them.
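As a quick illustration, the manifest below is a minimal sketch of a Secret holding a database password; the name db-credentials and the value are hypothetical placeholders.

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials          # hypothetical name
type: Opaque
stringData:
  # stringData lets you write the value in plain text; Kubernetes base64-encodes it when storing it
  db-password: "replace-me"

A pod can then mount this Secret as a volume or expose it as an environment variable instead of hard-coding the credential into the image.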
One thing that needs your attention is the encryption of the data stored in etcd. This is important because, by default, this data sits in plain text behind the Kubernetes API server. The benefit of encrypting it is that even if an attacker gains read or write access to etcd, the contents cannot be understood without the encryption key. Decrypting the data requires that key, and only the API servers hold the encryption keys.
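As a rough sketch, encryption at rest is typically enabled by pointing the API server's --encryption-provider-config flag at a configuration like the one below; the key shown is a placeholder and you must generate your own 32-byte, base64-encoded key.

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                                  # encrypt Secret objects at rest in etcd
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>   # placeholder, generate your own
      - identity: {}                             # fallback so existing plaintext data can still be read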
It is recommended to enable firewall rules, monitor active network traffic, and expose only the ports that are required. On the control plane (also called the master node), the ports that must be open include 6443 (Kubernetes API server), 2379-2380 (etcd), 10250 (kubelet API), 8472, and a few others. On worker nodes, the required ports are 10250, 10255, and 8472.
The Kubernetes API access control process involves three steps: authentication, authorisation, and admission control.
Before authentication begins, the TLS connections and network access control settings are checked. The authentication process itself can sometimes be complex.
Only TLS connections should be used for communication between the control plane and the kubelet, for internal communication within the control plane, and for connections to the API servers. The TLS certificate and the TLS private key file can be passed to the API server on the kube-apiserver command line.
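As an illustrative sketch, these flags usually appear in the kube-apiserver static pod manifest (commonly /etc/kubernetes/manifests/kube-apiserver.yaml on kubeadm-based clusters); the certificate paths below are typical defaults, not guaranteed on every setup.

# excerpt from the kube-apiserver static pod spec
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt        # serving certificate
        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key # matching private key
        - --client-ca-file=/etc/kubernetes/pki/ca.crt              # CA used to verify client certificates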
For communication, the Kubernetes API uses two HTTP ports: a localhost port and a secure port. The localhost port performs no authentication or authorisation of requests and uses no transport layer security, so it is critical to ensure that this port is not reachable from outside the Kubernetes cluster.
In addition, service accounts should be audited comprehensively and promptly because they can be a source of security breaches, especially when they are tied to a particular namespace or when service account tokens are used for specialised Kubernetes management tasks. Instead of relying on the default service account, each application should have its own dedicated service account.
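A minimal sketch of a dedicated service account (the names billing-app and billing are hypothetical); disabling automatic token mounting keeps the credential out of pods that do not need it.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: billing-app                   # hypothetical, one account per application
  namespace: billing                  # hypothetical namespace
automountServiceAccountToken: false   # pods must opt in explicitly to receive the token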
This section discusses the methods for keeping the Kubernetes cluster safe. Let us look at each of them in turn.
Kubernetes provides some basic security for its elements, such as authorisation. To gain access to cluster components or cluster nodes, you need to log in so that your request can be authorised. Here we can use Role-Based Access Control (RBAC). Using RBAC, we can state which entity can use the Kubernetes API to carry out a specific activity on a particular resource. It is one of the most efficient ways to restrict access through Kubernetes API authentication. It is also recommended to enable audit logging for better security.
For making authorisation decisions, RBAC uses the rbac.authorization.k8s.io API group, which allows policies to be changed as needed. RBAC is activated by starting the API server with the authorisation-mode flag set to a comma-separated list that includes RBAC, for example:
kube-apiserver --authorization-mode=Example,RBAC --other-options --more-options
Take the example below into consideration. It defines a Role in the “default” namespace that grants read-only access to pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: read-user
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
In the Kubernetes container orchestration system, admission controller plugins enforce how the cluster is governed and used. Admission controllers can intercept authenticated API requests and modify or reject them, which is why they are also called gatekeepers.
Admission controllers can increase security by mandating a proper pod security posture across an entire cluster. The built-in PodSecurityPolicy admission controller was one appropriate example in this context (it has since been replaced by Pod Security Admission in newer Kubernetes releases). Among other things, such controls can be used to stop containers from running as root or to guarantee that the root filesystem is always mounted read-only.
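On clusters using Pod Security Admission, a comparable effect is achieved with namespace labels; a minimal sketch follows, where the namespace name payments is hypothetical.

apiVersion: v1
kind: Namespace
metadata:
  name: payments                                    # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted  # reject pods that violate the restricted standard
    pod-security.kubernetes.io/warn: baseline       # warn about violations of the baseline standard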
In the cluster environment, containers run on a container runtime engine. Docker has historically been the most common runtime used with Kubernetes, although containerd and CRI-O are now widely used as well. Docker also comes with various images that programmers may use to set up anything from a Linux base system to an Nginx server.
To avoid vulnerable images, all container images must be scanned, and only those that comply with the organisation’s policy should be allowed to run. Much of the code in images is taken from open-source projects, so it is especially important to scan images for security vulnerabilities and to follow the network policies set by the organisation. The container registry is the central repository for all container images. You should also avoid loading unwanted kernel modules in the container image itself.
A private registry should be used as the container registry instead of a public registry, because public registries are less secure. Only images verified against the organisation’s policies should be stored in the private registry. This way, we reduce the organisation’s attack surface by reducing the use of vulnerable images.
Image scanning can also be integrated into the CI (continuous integration) pipeline. During image scanning, the image is checked for CVEs, other vulnerabilities, and bad practices. Because the scan runs inside the CI/CD pipeline, vulnerable images can be stopped before they reach the registry, which also protects against vulnerabilities introduced through third-party base images.
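As one illustrative sketch, assuming GitLab CI and the open-source Trivy scanner (both assumptions, not prescribed by this article), a job like the following could fail the pipeline whenever high or critical vulnerabilities are found:

container_scan:                       # hypothetical job name
  stage: test
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]                  # override the image entrypoint so the script runs in a shell
  script:
    # Fail the job (exit code 1) if any HIGH or CRITICAL vulnerabilities are present
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"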
One of the strongest arguments against running containers with root privileges is the need to prevent privilege escalation. A root user can run virtually any command inside the container system, so command execution becomes trivially easy. Running as a non-root user makes things difficult for bad actors, even if they somehow gain access.
Activities such as installing software packages, creating users, and launching services need to be examined carefully, which can create challenges for application developers. Just as we avoid running applications as the root user on virtual machines to prevent security risks, the same applies to containers; within the image, no user other than root should have write access.
If a container running as root is breached, the attacker gains root inside the container and may be able to run OS commands on the underlying host as if logged in as the root user, putting the host and everything running on it at risk.
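A minimal pod-spec sketch of these controls; the pod name and image reference are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: web                                    # hypothetical pod
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.0      # placeholder image
      securityContext:
        runAsNonRoot: true                     # refuse to start if the image would run as UID 0
        allowPrivilegeEscalation: false        # block setuid-style privilege escalation
        readOnlyRootFilesystem: true           # mount the container's root filesystem read-only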
The code layer is sometimes referred to as application security, since it covers the various applications we run in containers. It is also the layer over which businesses have the most control. Because your apps and their associated databases are typically exposed to the internet, attackers will focus their attempts on this part of the system once all the other components are protected.
Most application software relies heavily on libraries, other third-party components, and open-source packages, so a vulnerability in any of these can affect the behaviour of the whole application. There is therefore a significant possibility that you are already running programs that contain vulnerabilities.
In this section, we will discuss the implementation of Kubernetes security best practices in different areas.
Every pod is allotted resources, including memory and CPU, to execute its containers. When declaring a pod, it is therefore necessary to specify how much of each resource should be assigned to each container.
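A minimal sketch of per-container resource requests and limits; the container name, image, and values are illustrative only.

# excerpt from a pod or deployment container spec
containers:
  - name: api                          # hypothetical container
    image: registry.example.com/api:1.0
    resources:
      requests:
        cpu: "250m"                    # guaranteed share used for scheduling: a quarter of one CPU core
        memory: "256Mi"
      limits:
        cpu: "500m"                    # hard ceilings enforced at runtime
        memory: "512Mi"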
Containers and pods have security contexts that describe their privileges and access control settings. One essential concept for containers is Discretionary Access Control (DAC), which restricts access to an object based on the user ID (and group ID) of the process trying to access it.
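A brief sketch of how these UID/GID-based controls are expressed in a pod's security context; the numeric IDs are arbitrary examples.

# excerpt from a pod spec
spec:
  securityContext:
    runAsUser: 1000      # run container processes as UID 1000 instead of root
    runAsGroup: 3000     # primary group ID for container processes
    fsGroup: 2000        # group applied to mounted volumes so the non-root user can read and write them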
Because containers bundle a variety of open-source and other software, there is always a possibility that a security vulnerability is discovered later. It is therefore critical to keep up with software updates and to keep scanning images to see whether any vulnerabilities have been found and patched. Kubernetes’ rolling update capability also makes it possible to improve the security of a running application by upgrading it to the most recent version on the platform.
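A minimal sketch of a Deployment rolling-update strategy that replaces pods gradually while the application stays available; the replica counts are illustrative.

# excerpt from a Deployment spec
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most one pod may be down during the update
      maxSurge: 1          # at most one extra pod may be created above the desired count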
Kubernetes does not apply any network policy to a pod by default; traffic is unrestricted until a policy that selects the pod is created. The containers inside pods are also largely immutable: rather than being patched in place, they are destroyed and re-deployed when changes are needed.
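Because of that default-allow behaviour, a common starting point is a default-deny policy per namespace; a minimal sketch, with a hypothetical policy name, is shown below.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all          # hypothetical name
  namespace: default
spec:
  podSelector: {}                 # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress                      # deny all inbound and outbound traffic unless another policy allows it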
Shift-left testing, which moves security checks earlier in the container lifecycle, makes it possible to identify and fix flaws quickly and to manage containers more effectively.
Security in Kubernetes is managed through features such as RBAC (Role-Based Access Control), network policies, container runtime security, and regular updates to address vulnerabilities.
Kubernetes is not inherently a security risk, but misconfigurations, inadequate network controls, and improper deployments can introduce security vulnerabilities.
To secure a Kubernetes cluster, implement strong access controls, enable audit logging, rotate infrastructure credentials frequently, use network policies, employ container runtime security, and conduct regular security audits.