Kubernetes - Install Kubernetes on the master node#
In the previous post, we configured our router to use BGP for Kubernetes.
Now we will install Kubernetes, along with its prerequisites, on all 3 nodes.
There are multiple Kubernetes distributions to choose from, among them k8s, kind, and k3s.
I managed to get k8s running on my cluster before I decided to try k3s. Eventually, I decided to continue with k3s because it consumes fewer resources, and provides a lot of Kubernetes functionality with just a single executable (instead of multiple pods that consume cluster resources).
Here are the steps I used for installing k3s on my 3-node cluster.
We will perform the remaining steps as “root” (or prefix each command with “sudo”):
sudo su -
1. On the Raspberry Pi, append the following at the end of the existing options in /boot/cmdline.txt#
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
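A hedged sketch of doing that append safely — `append_cgroup_flags` is a hypothetical helper, not part of any tooling, and it assumes cmdline.txt must stay a single line and the flags should only be added once:

```shell
# Hypothetical helper: append the cgroup flags to a cmdline file only if
# they are not already present (cmdline.txt must remain one line).
append_cgroup_flags() {
  local file="$1"
  local flags="cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1"
  if ! grep -q "cgroup_memory=1" "$file"; then
    # append the flags to the end of the first (and only) line
    sed -i "1s/\$/ ${flags}/" "$file"
  fi
}

# Usage on the Pi (as root):
# append_cgroup_flags /boot/cmdline.txt
```

Running it a second time is a no-op, so it is safe to re-run during provisioning.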
2. Install prerequisite packages#
A few packages need to be installed on all nodes:
dnf install git tar
3. Update all packages to the latest versions#
dnf update
4. Reboot#
Reboot all nodes
Important
From this point forward, avoid rebooting (the master node especially): a reboot at the wrong time can break the cluster and require cleanup plus repeating several of these steps to get everything running again.
5. Create K3S config file on master node#
Create a config file (config.yaml) in your home directory with the following contents:
disable:
- local-storage
- servicelb
- traefik
- metrics-server
flannel-backend: none
disable-network-policy: true
disable-kube-proxy: true
etcd-expose-metrics: true
kube-controller-manager-arg:
- bind-address=0.0.0.0
- terminated-pod-gc-threshold=10
kube-scheduler-arg:
- bind-address=0.0.0.0
kubelet-arg:
- config=/etc/rancher/k3s/kubelet.config
node-taint:
- node-role.kubernetes.io/control-plane=true:NoSchedule
- node-role.kubernetes.io/master=true:NoSchedule
tls-san:
- salt
- salt.example.com
# This needs to be a range that's not already used elsewhere
cluster-cidr: 10.42.0.0/16
service-cidr: 10.43.0.0/16
write-kubeconfig-mode: 0644
cluster-init: true
A number of features/components are disabled because we either don’t need them, or will use something else to provide the functionality.
In particular, Flannel and kube-proxy are disabled because we will use Cilium to provide that functionality.
We’re also tainting the master node to prevent regular workloads (pods) from being scheduled on it. We will see how to remove that taint later, if we want to schedule pods on the master node too.
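As a preview of removing those taints later, here is a sketch that only prints the matching `kubectl taint` commands (a trailing `-` on a taint deletes it); it assumes the master node is named “salt”, as in this cluster:

```shell
# Print the "kubectl taint" commands that would remove the two taints
# set in config.yaml (the trailing "-" means delete the taint).
for taint in \
    "node-role.kubernetes.io/control-plane=true:NoSchedule" \
    "node-role.kubernetes.io/master=true:NoSchedule"; do
  echo "kubectl taint nodes salt ${taint}-"
done
```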
6. Create the Kubelet Config file on the master node#
Create the kubelet.config file in your home directory, with the following contents:
# /etc/rancher/k3s/kubelet.config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
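These two settings enable the kubelet’s graceful node shutdown: `shutdownGracePeriod` is the total window the kubelet reserves via a systemd inhibitor lock, and critical pods get the final `shutdownGracePeriodCriticalPods` of it, so regular pods get the difference:

```shell
# With shutdownGracePeriod=30s and shutdownGracePeriodCriticalPods=10s,
# regular pods get the remaining time to terminate first:
echo $(( 30 - 10 ))   # -> 20 (seconds for regular pods)

# Once k3s is running, the kubelet's shutdown inhibitor lock can be
# confirmed with:
#   systemd-inhibit --list | grep -i kubelet
```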
7. Copy the configuration files to the k3s directory#
mkdir -p /etc/rancher/k3s
cp $HOME/kubelet.config /etc/rancher/k3s
cp $HOME/config.yaml /etc/rancher/k3s
8. Install k3s#
export INSTALL_K3S_CHANNEL=stable
curl -sfL https://get.k3s.io | sh -s - server
Wait for the installation to finish.
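A couple of quick sanity checks once the installer returns: `systemctl is-active k3s` should print “active”, and `k3s --version` prints a line like “k3s version v1.30.2+k3s2 (…)”. A small helper (hypothetical, just for convenience) that pulls the version field out of that output:

```shell
# Extract the version field from "k3s --version" style output.
k3s_version() { awk '/^k3s version/ { print $3 }'; }

# Example with a sample line (the commit hash is made up):
echo "k3s version v1.30.2+k3s2 (581112e5)" | k3s_version   # -> v1.30.2+k3s2
```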
Tip
Create an alias for the kubectl command to reduce the typing effort.
alias k='/usr/local/bin/kubectl'
8.1. Check the status of the node#
k get nodes
You should see output similar to the following:
NAME STATUS ROLES AGE VERSION
salt NotReady control-plane,etcd,master 3m2s v1.30.2+k3s2
The “NotReady” status is expected here: with Flannel disabled and no CNI installed yet, the node cannot become Ready. So far, everything is working as planned.
8.2. Copy the Configuration to your home directory#
/usr/bin/mkdir -p $HOME/.kube
/usr/local/bin/kubectl config view --raw > $HOME/.kube/config
chmod 0400 $HOME/.kube/config
9. Install helm3#
Helm charts are often used to install various software (e.g. nginx, mysql) on Kubernetes. So, we’re installing Helm now to have it available for later.
curl -O https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod +x get-helm-3
./get-helm-3
10. Extract and note down the server token#
When k3s is installed on the master node, it creates a file with the server token. This server token is required for the worker nodes to join the cluster.
So, check the /var/lib/rancher/k3s/server/token file and note it down. You can use this command to extract the token from the file.
awk -F ':' '{ print $4 }' /var/lib/rancher/k3s/server/token
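The token file typically holds a single line of the form `K10<ca-cert-hash>::server:<secret>`, so splitting on “:” leaves the secret in field 4. A worked example with made-up values:

```shell
# Hypothetical token value, in the usual K10<hash>::server:<secret> shape:
sample='K10deadbeefcafe::server:supersecret'

# Splitting on ":" gives fields: K10deadbeefcafe, "", server, supersecret
echo "$sample" | awk -F ':' '{ print $4 }'   # -> supersecret
```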
The next steps will be done on the worker nodes to install k3s on them.