Greetings and welcome to this tutorial on how to set up a Kubernetes cluster on Linux with k0s. Kubernetes, commonly known as K8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and is currently maintained by the Cloud Native Computing Foundation (CNCF).
It works by distributing workloads across a pool of servers, and it continuously reconciles the actual state of the containers with the desired state, allocating storage, persistent volumes, etc. A Kubernetes cluster consists of two main types of nodes. These are:
- Master nodes: They control and manage the worker nodes and are responsible for making cluster-wide decisions. This includes scheduling workloads, monitoring the health of the cluster, and scaling applications based on demand. The master node consists of several components, including:
- etcd: stores the cluster data
- kube-apiserver: exposes the Kubernetes API
- kube-scheduler: watches for newly created Pods with no assigned node and selects a node for them to run on
- Worker nodes: These are the nodes that run the containerized workloads. They are responsible for hosting the pods, which are the basic building blocks of applications. A pod is the smallest deployable unit in Kubernetes.
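To make the pod concept concrete, here is what a minimal single-container Pod manifest looks like. This is illustrative only (the name "hello-pod" and the nginx image are examples); later in this guide we deploy a full Deployment instead:

```shell
# Illustrative only: a minimal manifest for a single-container Pod,
# the smallest deployable unit described above.
cat <<'EOF' > pod-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello
    image: nginx:latest
    ports:
    - containerPort: 80
EOF
cat pod-example.yaml
```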
K0s is an open-source, lightweight, easy-to-install, and highly available Kubernetes distribution designed to run in any environment, from IoT devices to cloud platforms. It was created by the team at Mirantis (a leading provider of Kubernetes and cloud-native solutions).
It takes a minimalistic approach to Kubernetes while still providing all the features required to run a cluster. It ships as a single binary that contains all the necessary components to run a Kubernetes cluster.
The features associated with k0s are:
- It has multiple installation methods such as single-node, multi-node, airgap and Docker.
- Flexible deployment options with control plane isolation as default
- It is certified and 100% upstream Kubernetes
- Highly available and fault-tolerant, with built-in mechanisms for handling node failures and network partitions.
- It Includes Konnectivity service, CoreDNS and Metrics Server
- Minimal resource requirements (1 vCPU, 1 GB RAM) and support for x86-64, ARM64 and ARMv7
- Supports a variety of datastore backends: etcd is the default for multi-node clusters and SQLite for single-node clusters; MySQL and PostgreSQL can be used as well.
- Security-focused and includes features such as TLS encryption, RBAC, and network policies.
Now let’s explore it!
Environment Setup
For this guide, I will use 3 servers configured as below:

| ROLE | HOSTNAME | IP ADDRESS |
|------|----------|------------|
| Control plane | master.tutornix.com | 192.168.205.11 |
| Worker Node 1 | worker1.tutornix.com | 192.168.205.22 |
| Worker Node 2 | worker2.tutornix.com | 192.168.205.2 |
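Optionally, so the nodes can resolve each other by hostname without DNS, you can add entries like these to /etc/hosts on each server. The IPs and hostnames match the table above; the helper function name is just for illustration:

```shell
# Print /etc/hosts entries for the lab nodes shown in the table above;
# append them on each server with: hosts_entries | sudo tee -a /etc/hosts
hosts_entries() {
  printf '%s\n' \
    '192.168.205.11 master.tutornix.com' \
    '192.168.205.22 worker1.tutornix.com' \
    '192.168.205.2  worker2.tutornix.com'
}
hosts_entries
```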
Now on the hosts, make sure that cURL is installed:
##Debian/Ubuntu
sudo apt update && sudo apt install curl -y
##RHEL/Rocky/Alma/CentOS
sudo yum install curl -y
Allow the required ports through the firewall:
##For UFW
sudo ufw allow 6443,2380,10250,9443,8132,8133/tcp
sudo ufw allow 4789/udp
##For Firewalld
sudo firewall-cmd --add-port={6443,2380,10250,9443,8132,8133}/tcp --permanent
sudo firewall-cmd --add-port=4789/udp --permanent
sudo firewall-cmd --reload
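If a node later fails to join the cluster, a quick way to confirm a port is actually reachable is bash's built-in /dev/tcp pseudo-device, which needs no extra tools. This is a hedged sketch; substitute your master's IP and the port in question:

```shell
# Probe a TCP port using bash's /dev/tcp pseudo-device (no nc required).
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}
check_port 192.168.205.11 6443   # e.g. the kube-apiserver port
```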
With curl in place, download and install the k0s binary:
curl -sSLf https://get.k0s.sh | sudo sh
Make sure /usr/local/bin (where the k0s binary is placed) is in your PATH:
echo "export PATH=\$PATH:/usr/local/bin" | sudo tee -a /etc/profile
source /etc/profile
#1. Configure k0s Master Node
Now on the selected master node, we need to make several configurations. First, switch to the root user:
sudo su -
Now install the k0s controller and enable the worker:
k0s install controller --enable-worker
Once the installation is complete, start and enable the controller with the command:
systemctl start k0scontroller
systemctl enable k0scontroller
Verify that the service is running:
# systemctl status k0scontroller
● k0scontroller.service - k0s - Zero Friction Kubernetes
Loaded: loaded (/etc/systemd/system/k0scontroller.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2023-04-16 15:05:34 EAT; 7s ago
Docs: https://docs.k0sproject.io
Main PID: 3506 (k0s)
Tasks: 7
Memory: 69.1M
CGroup: /system.slice/k0scontroller.service
└─3506 /usr/local/bin/k0s controller --enable-worker=true
Apr 16 15:05:40 master.tutornix.com k0s[3506]: time="2023-04-16 15:05:40" level=info msg="initializing APIServer"
Apr 16 15:05:40 master.tutornix.com k0s[3506]: time="2023-04-16 15:05:40" level=info msg="initializing K0sControllersLeaseCounter"
....
At this point, your cluster should be up as a single-node cluster. Wait for all the pods to start and verify with the command:
# k0s kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7bf57bcbd8-27cq5 1/1 Running 0 115s
kube-system konnectivity-agent-hst7c 1/1 Running 0 111s
kube-system kube-proxy-fvwvv 1/1 Running 0 110s
kube-system kube-router-s94ds 1/1 Running 0 111s
kube-system metrics-server-7446cc488c-hzlhh 1/1 Running 0 115s
Check the available nodes with the command:
# k0s kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master.tutornix.com Ready control-plane 2m2s v1.26.3+k0s 192.168.205.11 <none> Ubuntu 20.04.4 LTS 5.13.0-30-generic containerd://1.6.18
#2. Add Nodes to the k0s Cluster
Now we need to add more nodes to the k0s cluster. Before that, we need to create join tokens on the master node. This can be done using the commands:
##For Worker Node
k0s token create --role=worker
##For Control Node
k0s token create --role=controller
Once the token has been created, you can join the cluster using commands with the below syntax:
##On Worker Node
sudo -i
k0s worker "<worker-token>" &
##On Control Node
sudo -i
k0s controller "<controller-token>"
For example, I will add my worker nodes to the cluster by running the below command:
sudo -i
k0s worker "H4sIAAAAAAAC/2xV3W6r.....eGBg2DYwSwXyGyJYq2WmA3UREvgZWMiTvOdKH3KWA4JdL15+r+Yy+2mNcZ6L5xb+Y7D9LTNP/wkAAP//Z1FpQwUHAAA=" &
After running the command on both nodes, they should be able to join the cluster now.
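Note that the `&` above leaves the worker running in the background of that shell session. k0s can also register the worker as a systemd service, analogous to the k0scontroller service on the master. The commands below are a sketch based on k0s's `install` subcommand; the token file path is illustrative:

```shell
# On the master, save a worker join token to a file (illustrative path):
#   k0s token create --role=worker > /root/worker-token
# Copy the file to the worker, then on the worker run:
#   sudo k0s install worker --token-file /root/worker-token
#   sudo systemctl enable --now k0sworker
# k0s names the systemd unit after the role, so the worker service is:
role=worker
echo "k0s${role}"
```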
#3. Install kubectl on Linux
To manage the cluster easily, I will install kubectl on the desired workstation (this can be the master node or any other remote node).
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
Once installed, you need to get the kubeconfig file on the master node:
mkdir -p ~/.kube
sudo cp /var/lib/k0s/pki/admin.conf ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
If you are running kubectl on a remote node, copy the config from the master with the commands:
mkdir -p ~/.kube
scp <username>@<SERVER_IP>:~/.kube/config ~/.kube/config
After that, get the nodes:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master.tutornix.com Ready control-plane 7m52s v1.26.3+k0s 192.168.205.11 <none> Ubuntu 20.04.4 LTS 5.13.0-30-generic containerd://1.6.18
worker1.tutornix.com Ready <none> 3m32s v1.26.3+k0s 192.168.205.22 <none> Ubuntu 22.04 LTS 5.15.0-41-generic containerd://1.6.18
worker2.tutornix.com Ready <none> 71s v1.26.3+k0s 192.168.205.2 <none> Rocky Linux 8.6 (Green Obsidian) 4.18.0-372.9.1.el8.x86_64 containerd://1.6.18
Alternatively, on the master node you can point kubectl at the config temporarily:
export KUBECONFIG=/var/lib/k0s/pki/admin.conf
#4. Deploy an Application on k0s
Now after the cluster is up, we can test it by deploying a simple application.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
EOF
Verify if the pods are running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-6b7f675859-kw976 1/1 Running 0 28s
nginx-deployment-6b7f675859-zj2xc 1/1 Running 0 28s
You can then expose the service. For this guide, we will use NodePort:
kubectl expose deployment nginx-deployment --type=NodePort --port=80
Check the services:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10m
nginx-deployment NodePort 10.98.233.246 <none> 80:30752/TCP 8s
Now try to access the service from your web browser. In this demo it is exposed on port 30752 (yours will differ). To access the service, use the URL http://<node_IP>:30752
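Instead of reading the NodePort off the `kubectl get svc` output by eye, you can also fetch it with a standard kubectl jsonpath query and build the URL from it. The IP and port below are the values from this demo run:

```shell
# Read the assigned NodePort programmatically (requires cluster access):
#   NODE_PORT=$(kubectl get svc nginx-deployment -o jsonpath='{.spec.ports[0].nodePort}')
# Build the URL from any node's IP plus that port (demo values shown):
NODE_IP=192.168.205.11
NODE_PORT=30752
echo "http://${NODE_IP}:${NODE_PORT}"
# then fetch it with: curl "http://${NODE_IP}:${NODE_PORT}"
```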
#5. Uninstalling k0s Kubernetes Cluster Locally
If you want to tear down the k0s Kubernetes cluster, first stop the k0s service:
sudo k0s stop
Then remove k0s with the command:
sudo k0s reset
A node reboot is recommended after the reset to ensure the cleanup is complete.
The end!