Building Your Kubernetes Cluster for Cybersecurity Prototyping
2022-01-18 · Author: marcoramilli.com

Kubernetes and serverless applications are likely to be the next big things to protect. So it is a great idea to start getting hands-on practice with such environments, especially if you have had no previous opportunity to do so. Here is my post on how to build your first Kubernetes cluster based on the Raspberry Pi 4! The Raspberry Pi is a cheap and very flexible platform that can be used in several ways, and if you have a few left over from older projects you can reuse that hardware and build your first Kubernetes cluster almost for free. If you have followed me over the past years, you probably know that my very first mantra (before “defense belongs to humans”) was: “learning is a personal experience, everything else is just information”.

Kubernetes Cluster: Final Assembled Hardware

So let’s start building our own cluster for testing and practice! This post is organized in the following sections:

  • The Project. A general view of the OS used, the network layers, and the communication flows and rules.
  • The Hardware. Bill of materials: what I used to reach the result.
  • The OS Installation and Configuration Process. The main instructions to accomplish the basic configuration.
  • Kubernetes Installation and Clustering. A simple process for installing Kubernetes via MicroK8s and configuring the cluster.

The Project

I decided to build a 4-node Kubernetes cluster. One of the nodes is the master, and all the nodes communicate with the internet through the master node. In this way we can control node communications, and it becomes much easier to sniff and capture network traces. So we need to configure a network layer. The Raspberry Pi has two network interfaces: an Ethernet interface and a WiFi interface. We will use only the Ethernet interface on 3 nodes (the workers) and both interfaces on the master node, which will act as a network bridge as well as the Kubernetes master node. The following image shows what I mean.

General View

Each worker node belongs to the 10.100.100.0/24 subnet and has 10.100.100.1, the network bridge, as its default gateway. 10.100.100.1 has two configured interfaces: the Ethernet interface with address 10.100.100.1 and the WiFi interface, configured through wpa_supplicant with a DHCP client enabled. This node needs the net.ipv4.ip_forward flag set to 1 in order to forward traffic from one interface to the other. NODE1, NODE2 and NODE3 are connected to the MASTER, forming the Kubernetes cluster.

The Hardware

One of the best cases, which I would definitely recommend, is the UCTRONICS one. Maybe it’s a little bit pricey, but considering that it is fully metal and has a dedicated slot for external storage, I believe it’s a nice pick for your brand new cluster. Besides that, here is my bill of materials:

  • 1x Metal case from UCTRONICS. Available HERE.
  • 4x SanDisk MicroSD. Available HERE.
  • 1x PoE switch. I decided to go with Cudy. It’s simple, it’s PoE, it’s very cheap and it actually works! Available HERE.
  • 4x Ethernet cables. I hate having cables that are too long; I love minimal setups, and having “short enough” Ethernet cables definitely helps with that. So I bought them from HERE.
  • 4x Raspberry Pi 4. There are many places where you can buy them. I made a single basket on Amazon, so I bought them HERE.
  • 4x Raspberry Pi PoE add-on. It would be very difficult to fit 4 power cords in a single metal case, and all the cables coming out of the cluster would look very bad. So I decided to power the Raspberry Pis through Power over Ethernet. You can skip this expense if you are OK with many power cords. In case you opt for a clean setup, I used these boards (DSLRKIT Power Over Ethernet), available HERE.

Once everything is assembled, this is the result. Pretty clean, professional and good looking; I like it!

OS Installation and Configuration Process

The first step is to install an operating system on your Raspberry Pi 4. Fortunately, Raspberry Pi made a great project (available HERE) out of its official image burner: rpi-imager. With it you can select your favorite OS and burn it onto the micro-SD card completely painlessly. Installing it is a piece of cake if you are on an apt-based platform: simply run sudo apt update && sudo apt install rpi-imager. Once it is installed and running, go to “Choose OS” -> “Other general purpose OS” -> “Ubuntu”. I selected Ubuntu 20.04 LTS, but you are free to use your favorite version and distribution.

Burning your OS
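For reference, on an apt-based system the whole tool setup boils down to the following (a minimal sketch; the apt command comes from this post, while launching the binary by name is my assumption):

# Install the official Raspberry Pi imaging tool
sudo apt update && sudo apt install rpi-imager

# Launch it, then pick "Other general purpose OS" -> "Ubuntu"
# and select the target micro-SD card (double-check the device!)
rpi-imager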

Once you have burned your micro-SD card, it’s time to configure it before firing up your Raspberry Pi. You have to mount the micro-SD card and change some files.

Open your network-config file; you need to change some settings before firing up your hardware. This is where we configure our network layer according to the previous section. For each Raspberry Pi you need to burn the micro-SD card and configure the network layer. On NODE1, NODE2 and NODE3 the network-config file is quite simple, as follows:

# keep the "version: 2" line that is already present in the stock network-config file
version: 2
ethernets:
  eth0:
    addresses:
      - 10.100.100.2/24
    gateway4: 10.100.100.1
    nameservers:
      addresses: [8.8.8.8, 4.4.4.4]
    optional: true

Each NODE has a different IP address (please refer to the Project image), while all of them use the MASTER node (10.100.100.1) as their gateway. While the worker nodes have similar network configurations, the MASTER node has a different network layer, since it needs to act as a bridge between the two interfaces. The following section shows my configuration as an example. Please remember to change the access-point SSID and the network password to fit your setup.

# keep the "version: 2" line that is already present in the stock network-config file
version: 2
ethernets:
  eth0:
    addresses:
      - 10.100.100.1/24
    nameservers:
      addresses: [8.8.8.8, 4.4.4.4]
    optional: true

wifis:
  wlan0:
    dhcp4: true
    optional: true
    access-points:
      "home network":
        password: "123456789"

On each Raspberry Pi you have to add the following configuration to the cmdline.txt file before powering it on: cgroup_enable=memory cgroup_memory=1. The whole line will look like the following one:

cgroup_enable=memory cgroup_memory=1 net.ifnames=0 dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait
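Once a node has booted with the new cmdline.txt, you can quickly double-check that the memory cgroup is really active (a small sketch; both files are standard Linux kernel interfaces):

# Verify the running kernel command line contains the cgroup flag
grep -o "cgroup_enable=memory" /proc/cmdline

# Verify the memory cgroup controller is enabled (the "enabled" column should be 1)
grep memory /proc/cgroups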

On the MASTER node you need to enable IPv4 forwarding. In order to do that, boot the Raspberry Pi from the freshly created micro-SD card. Once it has booted (it takes a minute or so the first time), ssh into it and run:

sudo sysctl -w net.ipv4.ip_forward=1

NB: If you do not know the MASTER node’s external IP address (the address obtained on the WiFi interface, where a DHCP client is configured), I suggest using

arp -na | grep -i "b8:27:eb" or arp -na | grep -i "dc:a6:32"

and later ssh ubuntu@<ip address> (ubuntu is the default user of the Ubuntu server image).
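To make forwarding survive reboots, and to actually let the 10.100.100.0/24 subnet reach the internet through the WiFi interface, you will typically also want something like the following (a hedged sketch: the post only shows the sysctl command, so the persistence step and the NAT rule are my assumptions about what a working setup usually needs):

# Persist IPv4 forwarding across reboots
echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl --system

# NAT the worker subnet out through the WiFi interface (wlan0)
sudo iptables -t nat -A POSTROUTING -o wlan0 -s 10.100.100.0/24 -j MASQUERADE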

Kubernetes Installation and Clustering

Now it’s time to boot every Raspberry Pi with the OS you burned onto the micro-SD cards. You can connect your four Pi 4 boards directly to the PoE switch in order to start smoothly and safely.

Now you should connect to your MASTER node through ssh on its external interface. This interface gets its address via DHCP, so you need to figure out its IP address. Once you are logged into the master node you can easily reach and connect to the other NODEs by ssh over their private addresses in the 10.100.100.0/24 subnet. I recommend using tmux with multiple panes, each holding a connection to a specific NODE, so that you can quickly run commands on multiple devices. The following image shows the tmux session I used to run commands on the cluster’s internal nodes.

tmux with multiple connections to the Raspberry Pi 4 cluster

Before installing Kubernetes on your nodes, I suggest changing the hostname of every machine involved. The NODE’s name in the cluster will be equal to its hostname, and since you have just burned a vanilla Ubuntu image, every hostname is set to the same string (probably ubuntu). Having multiple nodes with the same name will mess up the cluster. So change the hostname by typing sudo vi /etc/hostname and writing your favorite string there. In my case I used NODE1, NODE2, NODE3 and MASTER.
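If you prefer a one-liner over editing the file by hand, something along these lines should work as well (a sketch; hostnamectl ships with Ubuntu 20.04, and the /etc/hosts tweak just avoids warnings about an unresolvable hostname):

# On each board, set the node name (NODE1, NODE2, NODE3 or MASTER)
sudo hostnamectl set-hostname NODE1

# Optional: map the new name to localhost to avoid "unable to resolve host" warnings
echo "127.0.1.1 NODE1" | sudo tee -a /etc/hosts

# Reboot (or log out and back in) so the new hostname is picked up everywhere
sudo reboot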

Now it’s time to install Kubernetes. The best choice for small devices is a super minimalist installation (you can always increase the footprint by installing packages and plugins whenever you want ;). So let’s start with the smallest footprint possible, to keep things clean and well managed. For reaching this goal, MicroK8s does the job!

MicroK8s is a small, lightweight, and fully conformant Kubernetes distribution from Canonical. It’s a minimalistic distribution focused on simplicity and performance. Given its footprint, MicroK8s can easily be deployed on IoT and edge devices.

So let’s type on each node (MASTER included):

sudo snap install microk8s --classic
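Before joining the nodes it is worth waiting for MicroK8s to be ready and, optionally, letting your user run it without sudo (a small sketch; the group trick is standard MicroK8s practice rather than something this post requires):

# Wait until the local MicroK8s services are up and running
sudo microk8s status --wait-ready

# Optional: allow the current user to run microk8s commands without sudo
sudo usermod -a -G microk8s $USER
newgrp microk8s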

Now it’s time to add the NODEs to the MASTER. On the MASTER node type:

sudo microk8s.add-node once for each node you wish to add; each run prints the join command (address and token) to execute on that node.

On each NODE (after ssh-ing into it) type

microk8s.join <IP>:<PORT>/<TOKEN> (just an example: microk8s.join 10.55.60.14:25000/JHpbBYMIevZSAMnmjMHmFwanrOYCWZLu).

Finally, on the MASTER node you can type sudo microk8s.kubectl get node and you will see the nodes connected to your MASTER. NB: the node names are the hostnames of the remote machines.
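If you also want to see the internal addresses and versions of each node, kubectl’s wide output is handy (a small sketch using the same MicroK8s-wrapped kubectl as above):

# Show nodes together with their internal IPs, OS image and kubelet version
sudo microk8s.kubectl get node -o wide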

Now it would be a nice idea to install the Kubernetes dashboard in order to better control deployments, resources and nodes. Before adding the dashboard, I suggest enabling the internal DNS service. To do that, type sudo microk8s enable dns on your MASTER; after that, typing sudo microk8s dashboard-proxy makes MicroK8s run the internal dashboard service and expose port 10443 on all the IP addresses of the master node. From that point you can connect to the dashboard on your external IP address. The command sudo microk8s dashboard-proxy also outputs an access token (a quite long string): copy it and use it to log in to the dashboard.
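Put together, the dashboard setup on the MASTER boils down to the following (the commands come from the steps above; the URL is simply the master’s external address plus the proxied port):

# Enable cluster DNS first
sudo microk8s enable dns

# Start the dashboard proxy: it exposes port 10443 on all master addresses
# and prints the access token to use at login
sudo microk8s dashboard-proxy

# Then browse to https://<master external IP>:10443 and paste the token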

If you want persistent storage, now is the time to configure it. In Kubernetes you can define a StorageClass, which describes the storage quality (i.e. fast, slow, high capacity, etc.), and then the volume, which sits on a specific StorageClass. So the first thing you need to do is define your storageclass.yaml. The following code shows a simple StorageClass named fast-ssd, provisioned by the default MicroK8s storage provider (activated by sudo microk8s enable storage):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: microk8s.io/hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate

By typing sudo microk8s.kubectl apply -f storageclass.yaml you create your first StorageClass. Once you have it, it’s time to create your volume claim. The following file (persistent-volume.yaml) creates a simple PersistentVolumeClaim to be mounted by your pods. In this example I am defining a claim named master-local-volume, sized 100GB, using the StorageClass named fast-ssd.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name:  master-local-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 100G

By typing sudo microk8s.kubectl apply -f persistent-volume.yaml you create your first volume claim. With the dashboard enabled through sudo microk8s dashboard-proxy, you should see the volume in the corresponding section of the dashboard. The following image shows what I mean.

Storage and Volume set-up
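To actually use the claim, a pod just has to reference it in its volumes section. The following is a minimal sketch (the pod name, image and mount path are my own illustrative choices; only the claim name master-local-volume comes from the files above), written as a shell heredoc so it can be pasted straight on the MASTER:

sudo microk8s.kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: storage-test            # hypothetical pod name
spec:
  containers:
    - name: web
      image: nginx:alpine       # any small image will do
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: master-local-volume   # the PVC defined above
EOF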

Adding modules and applications

If everything went fine, you have just set up your first minimalist Kubernetes Raspberry Pi 4 cluster. Now you might decide to do a lot of “cut and paste” of .yaml files to bring the applications you need to life, OR you might decide to use a common and great repository: Helm makes your day! If you don’t like to re-invent the wheel and you want to quickly deploy and undeploy applications, I definitely suggest going with it. Fortunately, MicroK8s integrates perfectly with Helm, so you can easily enable it through pre-configured commands. To deploy Helm on your cluster simply type the following commands:

microk8s enable storage
microk8s enable dns
microk8s enable ingress
microk8s enable registry
microk8s enable helm

From now on you are free to access the Helm command line interface through sudo microk8s.helm and quickly deploy applications, using Helm as your application manager.
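As an example of what this enables, installing an application from a public chart repository looks roughly like this (a sketch under my own assumptions: the Bitnami repository and the nginx chart are just illustrative choices, not something the post prescribes):

# Register a public chart repository and refresh the local index
sudo microk8s.helm repo add bitnami https://charts.bitnami.com/bitnami
sudo microk8s.helm repo update

# Deploy (and later remove) an application on the cluster
sudo microk8s.helm install my-nginx bitnami/nginx
sudo microk8s.helm uninstall my-nginx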

Bonus Section: Temperature

Running a Kubernetes cluster on the Raspberry Pi 4 (ARM processors) is quite demanding for these little boards. The temperature rises very quickly, especially on the MASTER node, which acts as the bridge as well as the Kubernetes master. Once the temperature gets high, performance gets low. So I recommend adding some cooling to your boards. The following image shows what I used as a cooling system: three heat sinks on the main chips (the most important ones are on the CPU and the Ethernet chip).

Cooling Systems on Raspberry Pi 4
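To keep an eye on heat and throttling, you can read the SoC temperature directly from sysfs (a small sketch; the thermal zone path is the standard one on the Ubuntu ARM images and is reported in millidegrees Celsius):

# Print the SoC temperature in degrees Celsius (the file exposes millidegrees)
awk '{ printf "%.1f C\n", $1/1000 }' /sys/class/thermal/thermal_zone0/temp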

Conclusions

Kubernetes clusters are widely used in large organizations such as banks, insurance institutions and even government environments. It’s clear, at least to me, that they are the next (or even the current) “big thing to protect”. How can you protect something whose details and processes you don’t know? This post aims to take the first steps in that direction, giving you the possibility to build your first, very cheap (and even slow…) Kubernetes cluster for testing purposes.

Have fun!


