Kubernetes - Install Kubernetes on the worker nodes#
In the previous post, we installed k3s on the master node. Now, we will install k3s on the worker nodes and configure them.
As a reminder, our cluster consists of three nodes: salt, the master (control-plane) node, and kube001 and kube002, the worker nodes.
Prerequisite configurations#
We will perform the remaining steps as “root” (or prefix each command with “sudo”):
sudo su -
1. Create a YAML config file#
Create the /etc/rancher/k3s/config.yaml file on each worker node.
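On a fresh worker, the /etc/rancher/k3s directory typically does not exist yet, so create it first:
mkdir -p /etc/rancher/k3s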
You will need the server token that you extracted.
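If you no longer have it handy, k3s stores the token on the master node at a well-known path, and you can read it again from there:
cat /var/lib/rancher/k3s/server/node-token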
Enter it in the config file as shown below.
# /etc/rancher/k3s/config.yaml
token: <ENTER_THE_SERVER_TOKEN_HERE>
kubelet-arg:
- config=/etc/rancher/k3s/kubelet.config
node-label:
- "worker"
2. Create the kubelet.config file#
Create the /etc/rancher/k3s/kubelet.config file on each worker node with the following contents.
# /etc/rancher/k3s/kubelet.config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
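These two settings enable the kubelet’s graceful node shutdown feature: on shutdown, pods get up to 30 seconds to terminate, with the final 10 seconds of that window reserved for critical pods.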
3. Install k3s on each worker node#
export INSTALL_K3S_CHANNEL=stable
/usr/bin/curl -sfL https://get.k3s.io | K3S_URL='https://salt:6443' K3S_TOKEN=<ENTER_THE_SERVER_TOKEN_HERE> sh -
Wait for the Kubernetes installation to finish on each worker node.
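On a worker, the install script registers the agent as the k3s-agent systemd service, so you can check on it (or follow its logs) with:
systemctl status k3s-agent
journalctl -u k3s-agent -f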
Now, switch back to the master node and check the nodes in the cluster (k is shorthand for kubectl):
k get nodes
You should see output similar to the following:
NAME      STATUS     ROLES                       AGE     VERSION
kube001   NotReady   <none>                      62s     v1.30.2+k3s2
kube002   NotReady   <none>                      53s     v1.30.2+k3s2
salt      NotReady   control-plane,etcd,master   5d18h   v1.30.2+k3s2
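To confirm that the worker label from config.yaml was applied, you can also list the nodes with their labels:
k get nodes --show-labels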
Note
This is expected: the nodes report a “NotReady” status because we have not yet configured networking in our k3s cluster.
Next, we will configure Cilium as our Container Network Interface (CNI) and set up networking for our k3s cluster.
We will do this from the master node.