
Deploy MinIO S3 Storage on Kubernetes Cluster

Greetings and welcome to this guide on how to deploy MinIO S3 Storage on a Kubernetes cluster. But before we begin, we need to cover a few concepts.

MinIO is an object storage server that serves as a drop-in replacement for Amazon Web Services (AWS) S3 object storage. It provides an S3-compatible API and supports all the core S3 features. A cool feature of MinIO is that it is built to be deployed anywhere, be it on a private or public cloud, bare metal, orchestrated environments, or even at the edge.

The other cool benefits of MinIO are:

  • Monitoring: It provides a detailed performance analysis capability with lots of metrics and per-operation logging.
  • Continuous Replication: MinIO is designed to scale across data centre deployments, eliminating the challenge of traditional replication approaches that do not scale effectively beyond a few hundred TiB.
  • High performance: MinIO is among the fastest object stores available, with GET/PUT throughput of 325 GiB/s and 165 GiB/s respectively on just 32 nodes of NVMe drives.
  • Identity Management: It supports the most advanced standards in identity management, with the ability to integrate with OpenID Connect compatible providers as well as key external IdP vendors.
  • Data life cycle management and Tiering: This helps protect the data within and across public and private clouds.

The steps below will enable you to deploy MinIO S3 Storage on a Kubernetes cluster.

Getting Ready

Before we begin, you need to have a Kubernetes cluster set up and running. We have several guides on this site that walk you through spinning up a K8s cluster.

Once the cluster has been set up, confirm you can connect to it using kubectl client:

$ kubectl get nodes -o wide
NAME                   STATUS   ROLES           AGE     VERSION       INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                           KERNEL-VERSION              CONTAINER-RUNTIME
master.tutornix.com    Ready    control-plane   7m52s   v1.26.3+k0s   192.168.205.11   <none>        Ubuntu 20.04.4 LTS                 5.13.0-30-generic           containerd://1.6.18
worker1.tutornix.com   Ready    <none>          3m32s   v1.26.3+k0s   192.168.205.22   <none>        Ubuntu 22.04 LTS                   5.15.0-41-generic           containerd://1.6.18
worker2.tutornix.com   Ready    <none>          71s     v1.26.3+k0s   192.168.205.2    <none>        Rocky Linux 8.6 (Green Obsidian)   4.18.0-372.9.1.el8.x86_64   containerd://1.6.18

1. Create Persistent Storage for MinIO

MinIO S3 Storage requires a persistent volume to store its data. For this guide, we will manually provision a local (hostPath-style) PV on one of the worker nodes, using a storage class with no dynamic provisioner. Creating the PV follows the three steps below.

Create a StorageClass

Begin by creating a storage class:

vim minio-storageClass.yml

In the opened file, paste the below content:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: minio-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Apply the manifest to the cluster:

$ kubectl create -f minio-storageClass.yml
storageclass.storage.k8s.io/minio-storage created
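
As a quick sanity check, you can list the storage class and confirm the binding mode; the output below is roughly what to expect (the AGE value is illustrative):

$ kubectl get storageclass minio-storage
NAME            PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
minio-storage   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  10s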

Create a Persistent Volume (PV)

After creating the storage class, proceed and create a persistent volume:

vim minio-pv.yml

In the opened file, add this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: minio-storage
  local:
    path: /mnt/minio/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker1.tutornix.com

Here, we create a PV that has node affinity to worker1 in our cluster. Before we proceed, we will create the backing directory on that node (worker1):

sudo mkdir -p /mnt/minio/data
sudo chmod 777 /mnt/minio/data

If the node is a RHEL-based system, configure SELinux as shown:

sudo chcon -Rt svirt_sandbox_file_t /mnt/minio/data

Now create the PV with the command:

kubectl create -f minio-pv.yml
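
If you want to confirm the node affinity took effect, describing the PV should show worker1 under the required node selector terms. A rough sketch of the check (output trimmed for brevity):

$ kubectl describe pv minio-local-pv | grep -A3 "Node Affinity"
Node Affinity:
  Required Terms:
    Term 0:  kubernetes.io/hostname in [worker1.tutornix.com]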

Create a Persistent Volume Claim (PVC)

The final thing here is to create a PVC. Create the manifest file:

$ vim minio-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
spec:
  storageClassName: minio-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

Apply the manifest:

kubectl create -f minio-pvc.yml

Verify that the PV has been created:

$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
minio-local-pv   10Gi       RWX            Retain           Available           minio-storage            110s

We now have persistent storage configured and ready for MinIO to use.
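
Note that because the storage class uses volumeBindingMode: WaitForFirstConsumer, the PVC will typically sit in Pending until a pod that uses it is scheduled; that is expected. You can check its status as shown below (the output is illustrative):

$ kubectl get pvc minio-pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
minio-pvc   Pending                                      minio-storage   30s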

2. Deploy MinIO S3 Storage on Kubernetes

Now we need to create a manifest for the MinIO S3 Storage deployment. There is a sample YAML for MinIO deployment on the official MinIO page, but for this guide we will modify it to suit our environment.

vim minio-deployment.yml

In the file, add these lines:

apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio
spec:
  selector:
    matchLabels:
      app: minio # has to match .spec.template.metadata.labels
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # This label is used as a selector in Service definition
        app: minio
    spec:
      volumes:
      - name: data
        # This volume is based on PVC
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pvc
      containers:
      - name: minio
        # Volume mounts for this container
        volumeMounts:
        # Volume 'data' is mounted to path '/data'
        - name: data 
          mountPath: /data
        # Pulls the latest Minio image from Quay
        image: quay.io/minio/minio:latest
        command:
        - /bin/bash
        - -c
        args:
        - minio server /data --console-address :9001
        env:
        # MinIO access key and secret key
        - name: MINIO_ROOT_USER
          value: "minio"
        - name: MINIO_ROOT_PASSWORD
          value: "minio123"
        ports:
        - containerPort: 9000
        # Readiness probe detects situations when MinIO server instance
        # is not ready to accept traffic. Kubernetes doesn't forward
        # traffic to the pod while readiness checks fail.
        readinessProbe:
          httpGet:
            path: /minio/health/ready
            port: 9000
          initialDelaySeconds: 120
          periodSeconds: 20
        # Liveness probe detects situations where MinIO server instance
        # is not working properly and needs restart. Kubernetes automatically
        # restarts the pods if liveness checks fail.
        livenessProbe:
          httpGet:
            path: /minio/health/live
            port: 9000
          initialDelaySeconds: 120
          periodSeconds: 20

Apply the manifest:

kubectl apply -f minio-deployment.yml

Check if the pods are up:

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
minio-9q43749d3-pmf95   1/1     Running   0          30s
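
If the pod shows 0/1 READY at first, keep in mind that the readiness probe above only starts checking after initialDelaySeconds: 120, so give it a couple of minutes. In the meantime, you can confirm MinIO started cleanly by checking the container logs for the API (port 9000) and Console (port 9001) endpoints:

kubectl logs deployment/minio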

At this point, the PV should also be bound:

$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS    REASON   AGE
minio-local-pv   10Gi       RWX            Retain           Bound    default/minio-pvc   minio-storage            3m30s

3. Expose MinIO S3 Storage API and Console Services

We need to expose the two MinIO ports: 9000 for the API and 9001 for the console. To achieve that, we will create two NodePort services as shown:

vim minio-svc.yaml

Add these lines to the file:

apiVersion: v1
kind: Service
metadata:
  name: minio-api-service
spec:
  type: NodePort
  ports:
    - name: http
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
---
apiVersion: v1
kind: Service
metadata:
  name: minio-console-service
spec:
  type: NodePort
  ports:
    - name: http
      port: 9001
      targetPort: 9001
      protocol: TCP
  selector:
    app: minio

Apply the manifest:

kubectl apply -f minio-svc.yaml

Get the NodePorts assigned to the services:

$ kubectl get svc
NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
minio-api-service       NodePort   10.103.155.140   <none>        9000:30862/TCP   52s
minio-console-service   NodePort   10.96.171.91     <none>        9001:32437/TCP   53s

4. Access and Use MinIO S3 Storage Console

You can access the MinIO S3 Storage Console using the NodePort on which port 9001 has been exposed. In my case, that is port 32437, so the URL is http://NodeIP:32437
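
If the NodePort is not reachable from your workstation (for example, because of firewall rules), a port-forward is a handy alternative for reaching the console; this simply forwards the console service to your local machine:

kubectl port-forward svc/minio-console-service 9001:9001

You can then browse to http://localhost:9001 instead.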

Authenticate using the root user credentials set earlier (MINIO_ROOT_USER and MINIO_ROOT_PASSWORD).

Once logged in, you will land on the MinIO Console dashboard. Now feel free to create and manage your buckets and objects as desired.
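
Besides the console, you can exercise the S3 API itself with the MinIO client (mc) against the API NodePort. The sketch below assumes mc is already installed on your workstation and uses the node IP and NodePort from our environment (192.168.205.22 and 30862), which will differ in yours; the alias name "myminio" and the bucket name are just examples:

# Register the MinIO endpoint under an alias (endpoint and credentials from this guide)
mc alias set myminio http://192.168.205.22:30862 minio minio123

# Create a test bucket, upload a file, and list the bucket contents
mc mb myminio/test-bucket
mc cp /etc/hosts myminio/test-bucket/
mc ls myminio/test-bucket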

Verdict

That is the end of this detailed guide on how to deploy MinIO S3 Storage on a Kubernetes cluster. I hope you have benefited from it. Feel free to share your feedback in the comments below.
