So as part of my continued journey with K8s, I thought I would try building my own cluster on the Raspberry Pi Model 4.
This is very similar to my example with Hyper-V in many ways, but there are a few "gotchas" that are worth knowing.
Firstly here's a breakdown of the kit I used for this setup:
I pre-built all of the kit so that the units were ready to accept the cards and be powered up.
As you can see below, the setup is straightforward enough, with one master and two worker nodes. I'm using my home subnet range for this as it's connected directly to my LAN on 192.168.1.0/24. Each node is running Ubuntu 18.04.4 LTS (64-bit) with kernel version 5.3.0-1017-raspi2.
For information on updating the Raspberry Pi firmware take a look at the official docs https://www.raspberrypi.org/documentation/raspbian/updating.md .
For the pod network I'm using the 10.244.0.0/16 default and running Calico for the CNI. Docker will be version 19.03.6 (latest at the time of writing) and Kubernetes 1.18.3.
Ok, let's get down to the fun part. First of all we need to configure the Micro SD cards with the OS. For this example I am using the Raspberry Pi Imager software, which you can download from here https://www.raspberrypi.org/blog/raspberry-pi-imager-imaging-utility/ . You can obviously use other software like balenaEtcher etc., but this was fine for my needs.
Now, one thing to note is that the latest version of Raspberry Pi Imager includes Ubuntu 20.04 LTS but not 18.04.4 LTS, so I downloaded this manually and chose the "Use Custom" option. You can get the 18.04.4 LTS image from here https://ubuntu.com/download/raspberry-pi .
Use the Raspberry Pi Imager software to write the OS to the Micro SD cards and install them into the units. Connect everything up, including the patch cables, and connect a monitor and keyboard to the first device.
Configuring The OS
The default logon for the Ubuntu image is username ubuntu with password ubuntu (you will be prompted to change the password on first login).
We're going to configure a few components on the OS, so I've listed them below:
To change the hostname we run the following.
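On Ubuntu this is done with hostnamectl; using the node names from this post, the master would be set like so:

```shell
# Set this node's hostname (use pi-k8s-node1 / pi-k8s-node2 on the workers)
sudo hostnamectl set-hostname pi-k8s-master
```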
We can now check to see if this has been applied by running.
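Running hostnamectl with no arguments prints the current settings:

```shell
# Display the hostname and related machine details
hostnamectl
```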
We now need to fix the IP address. With Ubuntu 18.04.4 we need to edit the netplan configuration, but first we need to retrieve the ethernet interface details using the below.
You will see a list of the network interfaces available. In my case the ethernet was eth0.
Now we need to apply this to our configuration. If you browse to "/etc/netplan/" and run "ls" you will see a default yaml file, 50-cloud-init.yaml, which you can edit directly.
Edit the yaml file to align to your setup, for me this is shown below.
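As an illustration, a static configuration for eth0 on the 192.168.1.0/24 subnet might look like the below. The specific addresses here are assumptions — substitute the IP, gateway and DNS servers for your own network:

```yaml
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: no
            addresses: [192.168.1.100/24]
            gateway4: 192.168.1.1
            nameservers:
                addresses: [192.168.1.1, 8.8.8.8]
```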
We then need to apply the changes and check they have been applied.
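Netplan applies the change, and ip confirms it took effect:

```shell
# Apply the new netplan configuration
sudo netplan apply
# Confirm the static address is now assigned
ip a
```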
You should now see the IP address assigned to your selected network adaptor.
To enable the cgroups that Kubernetes needs on the Pi, we need to append cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory to the kernel command line in nobtcmd.txt.
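On Ubuntu 18.04 for the Pi the kernel command line lives in /boot/firmware/nobtcmd.txt. One way to append the flags (taking a backup first) is a sed one-liner — the flags must go on the same single line as the existing options:

```shell
# Back up the existing kernel command line
sudo cp /boot/firmware/nobtcmd.txt /boot/firmware/nobtcmd.txt.bak
# Append the cgroup flags to the end of the existing line
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/nobtcmd.txt
```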
If we don't make this change, the kubeadm init pre-flight checks will fail with an error complaining that the required memory cgroup is missing.
We want to install OpenSSH Server so we can ditch the monitor and keyboard. At this point we are also going to update the package management.
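Assuming the standard Ubuntu package names, that looks like:

```shell
# Refresh package lists and bring the system up to date
sudo apt update && sudo apt upgrade -y
# Install the OpenSSH server so we can connect remotely
sudo apt install -y openssh-server
```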
Disable the Swap File
Lastly we need to disable the swap file.
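swapoff handles the running system, and commenting out any swap entry in /etc/fstab makes the change persistent (the Pi Ubuntu image may not have one, in which case the sed is a no-op):

```shell
# Turn off swap immediately; the kubelet will not run with swap enabled
sudo swapoff -a
# Comment out any swap entries so the change survives a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```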
We can now reboot the device.
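A straight reboot picks up the hostname, network, cgroup and swap changes:

```shell
sudo reboot
```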
Make sure you run the same commands for the two other nodes and then move on to Installing Docker and Kubernetes.
Installing Docker and Kubernetes
Now we can go ahead and install the base components of our cluster; this will include:
Let's install Docker and check the version installed using the following.
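Using the docker.io package from the Ubuntu repositories:

```shell
# Install Docker from the Ubuntu archive and confirm the version
sudo apt install -y docker.io
docker --version
```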
Next we set Docker to run at boot and verify it's running.
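With systemd that's:

```shell
# Start Docker automatically at boot and check its current state
sudo systemctl enable docker
sudo systemctl status docker
```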
If it's not running, the following command will start it.
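```shell
# Start the Docker service now
sudo systemctl start docker
```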
Now we can add the Kubernetes signing key.
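At the time of writing (Kubernetes 1.18), the key was added with apt-key:

```shell
# Download and register the Kubernetes apt signing key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
```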
Add the software repository for Kubernetes as it is not included in the default list.
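The kubernetes-xenial repository also serves later Ubuntu releases, including 18.04:

```shell
# Add the Kubernetes apt repository and refresh the package lists
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
```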
Lastly we install kubeadm, kubelet and kubectl, hold them back from automatic upgrades, and check the version.
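Pinning to the 1.18.3 release used in this post — the exact package revision (-00) is an assumption; `apt-cache madison kubeadm` will list what's actually available:

```shell
# Install the pinned 1.18.3 packages
sudo apt install -y kubeadm=1.18.3-00 kubelet=1.18.3-00 kubectl=1.18.3-00
# Prevent unattended upgrades from moving the cluster version under us
sudo apt-mark hold kubeadm kubelet kubectl
kubeadm version
```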
Repeat the above processes for each node.
Master Node (pi-k8s-master)
Initialise Kubernetes on the master node (in my case pi-k8s-master).
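With the 10.244.0.0/16 pod network from earlier, the init looks something like this — the advertise address is an assumed master IP on my 192.168.1.0/24 subnet, so use your own:

```shell
# Initialise the control plane, advertising the master's LAN address
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.100
```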
If successful you will receive a message similar to the below.
Ensure you run the three commands listed to configure the kube config.
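For reference, these are the commands kubeadm prints for a regular user:

```shell
# Copy the admin kubeconfig into place so kubectl can talk to the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```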
We also need to keep a record of the sudo kubeadm join command shown in the initialisation output. This provides us with the token and certificate hash required for the other nodes.
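The join command has this general shape — the token and hash are placeholders, and the address assumes the master IP used above; take the real values from your own init output:

```shell
sudo kubeadm join 192.168.1.100:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```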
Next we need to deploy our pod network to allow communication between the nodes in the cluster. As mentioned, we are using Calico. For more information on Calico see https://docs.projectcalico.org/introduction/.
Download the Calico manifest using the following.
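curl works fine for this; the manifest URL below was current at the time of writing:

```shell
# Fetch the Calico manifest so we can edit it before applying
curl -O https://docs.projectcalico.org/manifests/calico.yaml
```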
By default Calico uses a CIDR of 192.168.0.0/16, which in most cases will clash with a home setup, so we will edit the yaml file to set our required range.
Scroll through the yaml, find the section relating to the default IPv4 pool, and amend the CIDR block.
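The relevant part of the manifest is the CALICO_IPV4POOL_CIDR environment variable on the calico-node container; uncomment it if needed and set it to our pod range:

```yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
```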
Save the file and then apply the calico.yaml manifest using the following command.
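```shell
# Deploy Calico into the cluster
kubectl apply -f calico.yaml
```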
Worker Nodes (pi-k8s-node1 & pi-k8s-node2)
Finally we can join our worker nodes to the cluster. SSH onto each worker node and run the kubeadm join command we noted earlier.
Once completed, a successful message will look like this.
Master Node (pi-k8s-master)
Finally return to the master node and run the following command to list the nodes and ensure they are in a ready state.
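The wide output format includes the kernel, OS and container runtime columns:

```shell
kubectl get nodes -o wide
```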
The result should show all the nodes in ready state along with the Kernel, OS, Kubernetes and Runtime versions.
The last command shows us all the active pods on the blank cluster.
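That command, for reference:

```shell
# List every pod in every namespace on the new cluster
kubectl get pods --all-namespaces
```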
The result should look similar to this.
That's it! We should now have a fully functional three-node Kubernetes cluster running on Raspberry Pi 4 hardware.
Go have some fun!
So, a topic I get asked about numerous times when discussing Azure infrastructure and security is basic traffic management and which security groups to use: NSGs or ASGs.
So a year or so ago I created a two part recording of how we use Network Security Groups (NSGs) for North<->South control, and Application Security Groups (ASGs) for East<->West control.
If you're interested, here are both parts....
I know what you're saying: “This blog is called Azure for All, so why a Kubernetes article?” Well, stay with me on this – all will become clear!
I've recently been conducting more and more technical reviews of the Azure Kubernetes Service (AKS), and one thing that has become clear is that due to AKS being a managed service, some knowledge of how Kubernetes hangs together is not always truly understood. So, I decided to write an article on how to create a K8s (Yep I’m not writing Kubernetes anymore) cluster on Hyper-V for testing and exam prep reasons.
Now, I know you can run Minikube or even Docker with K8s enabled, but let's be honest here, it's not a true production-like environment, is it? So with this post I am going to guide you through creating your own production-like cluster. It may not have all the bells and whistles (we are only going to have one master node at this point), but it's going to give you a better insight into how a cluster should look.
I am also going to give you a few useful links below that I have used before, and still use as reference for AKS and native K8S.
For those of you who have looked at AKS and done your homework, you will know its nodes actually run Ubuntu 16.04 LTS, not the 18.04 LTS we are going to build here, and we are going to install K8s version 1.17.1 rather than the default version for AKS, 1.13.12. However, remember we are not trying to replicate AKS here, just get a similar environment so we can understand the components.