It’s been over three years since LINBIT released its Kubernetes CSI plugin for LINSTOR. Much has changed in how LINSTOR is typically deployed into Kubernetes clusters – namely, the LINSTOR Operator for Kubernetes – but the CSI plugin is still the bridge between Kubernetes and LINSTOR’s replicated block storage. While LINBIT encourages its customers and users to use the LINSTOR Operator to deploy containerized LINSTOR clusters into Kubernetes, it’s still possible to run LINSTOR outside of Kubernetes, with only the LINSTOR CSI plugin components running inside it.
While there is plenty of documentation covering deploying LINSTOR with LINBIT’s Helm-based operator, “CSI only” instructions exist only in the CSI plugin’s GitHub repository. This blog post sheds light on the alternative architecture, where an administrator deploys LINSTOR outside of Kubernetes and integrates it with Kubernetes via the CSI plugin.
Prerequisites and Assumptions
To keep this blog as concise as possible, we’ll assume you have already done the following:
- Created and initialized a LINSTOR cluster
- Created a storage pool within LINSTOR
- Created and initialized a Kubernetes cluster
There are many options and paths available to satisfy the prerequisites above. We’ll assume a three-node LINSTOR cluster was created and initialized on the same nodes as the Kubernetes cluster. We’ll also assume the storage pool created was an `LVM_THIN` type named `lvm-thin` in LINSTOR.
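If you still need to stand up the LINSTOR side of these prerequisites, the sketch below shows roughly what that looks like with the `linstor` client. The node names (`kube-0` through `kube-2`) and the backing LVM thin pool (`vg0/thinpool`) are placeholders for your own, and the guard at the top lets you preview the command sequence on a machine without the client installed.

```shell
# Dry-run guard: if the linstor client isn't installed, print each
# command instead of executing it, so the sketch can be previewed.
if ! command -v linstor >/dev/null 2>&1; then
    linstor() { echo "linstor $*"; }
fi

# Register each node and create an LVM_THIN storage pool on it.
# Node names and the vg0/thinpool volume group are placeholders.
for NODE in kube-0 kube-1 kube-2; do
    linstor node create "$NODE"
    linstor storage-pool create lvmthin "$NODE" lvm-thin vg0/thinpool
done

# Confirm the pool is present on all three nodes.
linstor storage-pool list
```

Note that `lvmthin` here is the storage driver, while `lvm-thin` is just the pool name we chose; the two don’t have to match.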
Preparing and Installing the LINSTOR CSI Plugin
Once you’ve initialized your Kubernetes and LINSTOR clusters, you can begin preparing the LINSTOR CSI Plugin configuration for deployment into Kubernetes.
First, you’ll need to download an example configuration from the LINSTOR CSI plugin’s GitHub repository. You should see examples for many recent Kubernetes versions; select the configuration version closest to your Kubernetes version. If your Kubernetes version is newer than any versioned YAML file, the latest available version or the `dev` version is the best configuration to start with.
# curl -O
With the configuration downloaded to your current working directory, you only need to replace the string “$(LINSTOR_IP)” with the endpoint of your LINSTOR Controller’s REST API. By default, the LINSTOR Controller’s endpoint can be accessed over HTTP on port 3370. For a LINSTOR Controller with an IP address of 192.168.222.40, the default endpoint would be “http://192.168.222.40:3370”. Therefore, the following commands can be used to update the placeholder in the example configuration with your LINSTOR Controller’s endpoint.
# export CTRL_IP=192.168.222.40
# sed -i s/\$\(LINSTOR_IP\)/http\:\\/\\/$CTRL_IP\:3370/g linstor-csi-1.19.yaml
TIP: If you’ve configured your LINSTOR Controller for HA using DRBD Reactor, you should use the virtual IP address for your LINSTOR endpoint.
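Before editing the real manifest, you can sanity-check the substitution on a sample line. The sketch below uses `|` as the `sed` delimiter, which avoids having to escape the slashes in the URL; the sample line is illustrative, not copied from the actual manifest.

```shell
CTRL_IP=192.168.222.40

# An illustrative line containing the placeholder to be replaced
sample='endpoint: $(LINSTOR_IP)'

# Using '|' as the sed delimiter avoids escaping the URL's slashes;
# the backslash before $ keeps the shell from treating $(...) as a
# command substitution inside the double quotes.
result=$(printf '%s\n' "$sample" | sed "s|\$(LINSTOR_IP)|http://${CTRL_IP}:3370|g")
echo "$result"   # → endpoint: http://192.168.222.40:3370
```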
Finally, deploy the LINSTOR CSI Plugin into your Kubernetes cluster using kubectl.
# kubectl apply -f linstor-csi-1.19.yaml
Shortly after, you should see your LINSTOR CSI Plugin pods running in the `kube-system` namespace.
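One quick way to confirm the rollout is to filter the pod list for the CSI components with `kubectl -n kube-system get pods | grep linstor-csi`. The snippet below runs that same filter against a canned sample of pod-list output, so the pattern can be checked offline; the pod names shown are illustrative, not what your cluster will report.

```shell
# Canned sample of 'kubectl -n kube-system get pods' output; on a live
# cluster you would pipe the real command into the same grep.
sample_pods='NAME                       READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-abcde   1/1     Running   0          5m
linstor-csi-controller-0   5/5     Running   0          1m
linstor-csi-node-x2k8p     3/3     Running   0          1m'

# Keep only the LINSTOR CSI pods
printf '%s\n' "$sample_pods" | grep linstor-csi
```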
Define Kubernetes Storage Classes for LINSTOR
With your LINSTOR cluster integrated into Kubernetes, you can now define how you’d like LINSTOR’s persistent volumes to be provisioned by Kubernetes users. This is done via Kubernetes storage classes and the LINSTOR parameters specified on each.
LINSTOR storage classes, in their simplest form, contain the LINSTOR storage pool’s name and the number of replicas that should be created in LINSTOR for each persistent volume. The following example storage class definitions will create three separate storage classes defining 1, 2, and 3 replicas from our `lvm-thin` LINSTOR storage-pool, respectively.
# cat << EOF > linstor-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-r1"
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "1"
  storagePool: "lvm-thin"
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-r2"
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"
  storagePool: "lvm-thin"
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-r3"
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "3"
  storagePool: "lvm-thin"
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
Finally, create the LINSTOR storage classes using kubectl.
# kubectl apply -f linstor-sc.yaml
You should now see that you have three new storage classes defined in Kubernetes.
Create a PVC and Pod for Testing your Deployment
To test your deployment, define and create a PVC in Kubernetes that specifies one of the three storage classes created in the section above. I’ll use the three-replica storage class in the following example.
# cat << EOF > linstor-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim-0
spec:
  storageClassName: linstor-csi-lvm-thin-r3
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4G
EOF
# kubectl apply -f linstor-pvc.yaml
You should now see that you have a LINSTOR-provisioned PVC in Kubernetes and a new resource in LINSTOR listed as `Unused`.
Next, define and deploy a pod that will use the LINSTOR-backed PVC.
# cat << EOF > demo-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-0
  namespace: default
spec:
  containers:
  - name: demo-pod-0
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do sleep 1000s; done"]
    volumeMounts:
    - mountPath: "/data"
      name: demo-vol
  volumes:
  - name: demo-vol
    persistentVolumeClaim:
      claimName: demo-vol-claim-0
EOF
# kubectl apply -f demo-pod.yaml
Shortly after deploying, you will see you have a running pod and your LINSTOR resource listed as `InUse`.
Hopefully, you’ve found this information helpful in your quest to deploy LINSTOR outside of Kubernetes. Again, if you’re new to LINSTOR in Kubernetes, LINBIT recommends starting your testing with the LINSTOR Helm Operator for Kubernetes. If you have any questions about either approach, LINBIT is always available to talk with its users and customers – all you have to do is reach out.