Integrating External LINSTOR Clusters With Kubernetes by Using the LINSTOR CSI Driver

Much has changed over the years in how LINSTOR® is typically deployed into Kubernetes clusters. The LINSTOR Operator for Kubernetes is now at a v2 release, but the CSI plugin remains the bridge between Kubernetes and LINSTOR-managed replicated block storage. While LINBIT® encourages its customers and users to deploy containerized LINSTOR clusters into Kubernetes by using the LINSTOR Operator, you can also run LINSTOR outside of Kubernetes, with only the LINSTOR CSI plugin components running in Kubernetes.

There is plenty of documentation around how the LINSTOR Operator can help you easily deploy LINSTOR in Kubernetes. However, “CSI only” instructions aren’t discussed much outside of the CSI plugin’s GitHub repository. This blog post sheds light on this alternative architecture, for when an administrator wants to deploy LINSTOR outside Kubernetes and integrate it with Kubernetes by using the CSI plugin.

Prerequisites and assumptions

To keep this blog as concise as possible, I’ll assume you already have done the following:

  • Created and initialized a Kubernetes cluster.
  • Installed DRBD® 9 and created a LINSTOR cluster.
  • Created a LINSTOR storage pool.

There are many options and paths available to satisfy the prerequisites above. I’ll assume that you created a 3-node LINSTOR cluster on the same nodes that you created and initialized the Kubernetes cluster on. I’ll also assume the LINSTOR storage pool that you made was an LVM_THIN type named lvm-thin, although it is possible to use a storage pool backed by ZFS.
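If you still need to create a storage pool such as the one assumed above, the commands might look like the following sketch. The backing device /dev/vdb is an assumption; substitute whatever block device is available on your nodes. The volume group and thin pool names match the drbdpool/thinpool pool used in this article’s examples.

```shell
# Run on each storage node to create the backing LVM thin pool
# (/dev/vdb is an assumption):
vgcreate drbdpool /dev/vdb
lvcreate --type thin-pool -l 100%FREE --name thinpool drbdpool

# Run from a host where the LINSTOR client can reach the controller,
# once for each satellite node:
linstor storage-pool create lvmthin kube-0 lvm-thin drbdpool/thinpool
linstor storage-pool create lvmthin kube-1 lvm-thin drbdpool/thinpool
linstor storage-pool create lvmthin kube-2 lvm-thin drbdpool/thinpool
```

Note that the `linstor storage-pool create lvmthin` command registers an existing LVM thin pool with LINSTOR; it does not create the thin pool itself.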

IMPORTANT: The instructions in this article depend on LINSTOR running on the same nodes as your Kubernetes cluster. For the CSI plugin to communicate with LINSTOR running in a separate cluster, you have two options. You could install the DRBD® 9 kernel module, install and enable the LINSTOR satellite service on your Kubernetes nodes, and then add the Kubernetes nodes as satellite nodes in the LINSTOR cluster. The Kubernetes nodes could then use the LINSTOR diskless attachment feature to access storage in the LINSTOR cluster, over the network, as if it were local storage. Alternatively, you could use LINSTOR Gateway to create a highly available iSCSI, NVMe-oF, or NFS target from your LINSTOR storage, to expose external LINSTOR-managed storage to the Kubernetes cluster.
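As a rough sketch of the first option, joining a Kubernetes node to an external LINSTOR cluster might look like the following. The node name, IP address, and package names here are assumptions and will vary by distribution and repository:

```shell
# On the Kubernetes node (package names are an assumption based on
# LINBIT's RHEL-family repositories; adjust for your distribution):
dnf install -y drbd-utils kmod-drbd linstor-satellite
systemctl enable --now linstor-satellite

# From a host with the LINSTOR client, register the node in the
# external LINSTOR cluster (node name and address are assumptions):
linstor node create kube-worker-0 192.168.10.50 --node-type satellite
```

After the node is registered and online, the CSI plugin can place diskless DRBD attachments on it automatically when a pod that uses LINSTOR-backed storage is scheduled there.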

The following verification commands and output confirm the prerequisites as used in the examples in this article:

# kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
kube-0   Ready    control-plane   4h41m   v1.35.3
kube-1   Ready    <none>          4h41m   v1.35.3
kube-2   Ready    <none>          4h41m   v1.35.3
# kubectl get nodes -o wide
NAME     STATUS   ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                          KERNEL-VERSION                 CONTAINER-RUNTIME
kube-0   Ready    control-plane   4h42m   v1.35.3   192.168.122.140   <none>        AlmaLinux 9.7 (Moss Jungle Cat)   5.14.0-611.42.1.el9_7.x86_64   containerd://2.2.2
kube-1   Ready    <none>          4h42m   v1.35.3   192.168.122.141   <none>        AlmaLinux 9.7 (Moss Jungle Cat)   5.14.0-611.42.1.el9_7.x86_64   containerd://2.2.2
kube-2   Ready    <none>          4h42m   v1.35.3   192.168.122.142   <none>        AlmaLinux 9.7 (Moss Jungle Cat)   5.14.0-611.42.1.el9_7.x86_64   containerd://2.2.2
# linstor node list
╭────────────────────────────────────────────────────────────╮
┊ Node   ┊ NodeType  ┊ Addresses                    ┊ State  ┊
╞════════════════════════════════════════════════════════════╡
┊ kube-0 ┊ COMBINED  ┊ 192.168.122.140:3366 (PLAIN) ┊ Online ┊
┊ kube-1 ┊ SATELLITE ┊ 192.168.122.141:3366 (PLAIN) ┊ Online ┊
┊ kube-2 ┊ SATELLITE ┊ 192.168.122.142:3366 (PLAIN) ┊ Online ┊
╰────────────────────────────────────────────────────────────╯
# linstor storage-pool list
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node   ┊ Driver   ┊ PoolName          ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName      ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
[...]
┊ lvm-thin    ┊ kube-0 ┊ LVM_THIN ┊ drbdpool/thinpool ┊    30.39 GiB ┊     30.39 GiB ┊ True         ┊ Ok    ┊ kube-0;lvm-thin ┊
┊ lvm-thin    ┊ kube-1 ┊ LVM_THIN ┊ drbdpool/thinpool ┊    30.39 GiB ┊     30.39 GiB ┊ True         ┊ Ok    ┊ kube-1;lvm-thin ┊
┊ lvm-thin    ┊ kube-2 ┊ LVM_THIN ┊ drbdpool/thinpool ┊    30.39 GiB ┊     30.39 GiB ┊ True         ┊ Ok    ┊ kube-2;lvm-thin ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Preparing and installing the LINSTOR CSI plugin

After you’ve initialized your Kubernetes and LINSTOR clusters, you can deploy the LINSTOR CSI plugin configuration into Kubernetes.

The commands that follow download an example configuration from the LINSTOR CSI plugin’s GitHub repository, replace every instance of the placeholder LINSTOR_CONTROLLER_URL string in the example, and apply the configuration to your Kubernetes cluster. Set the LINSTOR_CONTROLLER_URL string to the URL, including the DNS name or IP address and TCP port number, of your LINSTOR controller node’s REST API. By default, the LINSTOR controller’s REST API is reachable over HTTP on port 3370. For a LINSTOR controller node with a DNS name of linstor-controller.example.com, the REST API endpoint would be http://linstor-controller.example.com:3370. You could therefore use the following commands to update the appropriate string in the example configuration and deploy the LINSTOR CSI driver into Kubernetes:

LINSTOR_CONTROLLER_URL=http://kube-0:3370
kubectl kustomize https://github.com/piraeusdatastore/linstor-csi//examples/k8s/deploy \
  | sed "s#LINSTOR_CONTROLLER_URL#$LINSTOR_CONTROLLER_URL#" \
  | kubectl apply --server-side -f -

💡 TIP: If you’ve configured your LINSTOR controller service for high availability by using DRBD Reactor, you should use the virtual IP address for your LINSTOR endpoint.
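If you want to preview what the sed substitution does before applying anything to your cluster, you can run it against a sample line. The sample manifest line here is illustrative, not an exact excerpt from the repository:

```shell
LINSTOR_CONTROLLER_URL=http://kube-0:3370
# Illustrative stand-in for a line from the example manifests:
sample='value: "LINSTOR_CONTROLLER_URL"'
echo "$sample" | sed "s#LINSTOR_CONTROLLER_URL#$LINSTOR_CONTROLLER_URL#"
# prints: value: "http://kube-0:3370"
```

Using `#` as the sed delimiter avoids having to escape the slashes in the URL.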

To verify the CSI driver deployment, enter the following command:

kubectl get pods --all-namespaces

Command output should show your LINSTOR CSI plugin pods running in the piraeus-datastore namespace:

NAMESPACE           NAME                                       READY   STATUS    RESTARTS   AGE
[...]
piraeus-datastore   linstor-csi-controller-5494bdd6f9-5t947    7/7     Running   0          3m1s
piraeus-datastore   linstor-csi-node-2ng89                     3/3     Running   0          3m1s
piraeus-datastore   linstor-csi-node-6vjkc                     3/3     Running   0          3m1s
piraeus-datastore   linstor-csi-node-qvlzg                     3/3     Running   0          3m1s

Defining Kubernetes storage classes for LINSTOR

With your LINSTOR cluster integrated into Kubernetes, you can now define how you want LINSTOR persistent volumes to be provisioned by Kubernetes users. You can do this by using Kubernetes storage classes and the LINSTOR parameters specified on each.

LINSTOR storage classes, in their simplest form, contain the LINSTOR storage pool name and the number of replicas that LINSTOR should create for each persistent volume. The following example storage class definitions will create three separate storage classes, specifying 1, 2, and 3 replicas from the lvm-thin LINSTOR storage pool.

cat << EOF > linstor-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-r1"
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/autoPlace: "1"
  linstor.csi.linbit.com/storagePool: "lvm-thin"
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-r2"
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/autoPlace: "2"
  linstor.csi.linbit.com/storagePool: "lvm-thin"
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-r3"
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/autoPlace: "3"
  linstor.csi.linbit.com/storagePool: "lvm-thin"
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF

Finally, create the LINSTOR storage classes by applying the configuration:

kubectl apply -f linstor-sc.yaml

Enter the following command to verify:

kubectl get storageclasses.storage.k8s.io # or `kubectl get sc` if you prefer

Command output should show that you have three new storage classes defined in Kubernetes:

NAME                      PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
linstor-csi-lvm-thin-r1   linstor.csi.linbit.com   Delete          Immediate           true                   2m42s
linstor-csi-lvm-thin-r2   linstor.csi.linbit.com   Delete          Immediate           true                   2m42s
linstor-csi-lvm-thin-r3   linstor.csi.linbit.com   Delete          Immediate           true                   2m42s

Create a PVC and pod for testing your deployment

To test your deployment, define and create a PVC in Kubernetes that specifies one of the three storage classes created in the previous section. The following PVC manifest uses the 3-replica storage class:

cat << EOF > linstor-pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim-0
spec:
  storageClassName: linstor-csi-lvm-thin-r3
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
EOF

Apply the configuration to your Kubernetes cluster:

kubectl apply -f linstor-pvc.yaml

Enter the following command to verify that you have a LINSTOR-provisioned PVC in Kubernetes:

kubectl get persistentvolumeclaims

Output should show the new PVC:

NAME               STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS              VOLUMEATTRIBUTESCLASS   AGE
demo-vol-claim-0   Bound    pvc-[...]   4Gi        RWO            linstor-csi-lvm-thin-r3   <unset>                 63s

Verify that you also have a LINSTOR-created resource backing the PVC by listing LINSTOR resources:

linstor resource list

Output should show a new LINSTOR resource whose name (pvc- followed by a UUID) matches the volume name of the PVC in Kubernetes:

╭──────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node   ┊ Layers       ┊ Usage  ┊ Conns                     ┊    State ┊ CreatedOn ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pvc-[...]    ┊ kube-0 ┊ DRBD,STORAGE ┊ Unused ┊ StandAlone(kube-2)        ┊ UpToDate ┊ [...]     ┊
┊ pvc-[...]    ┊ kube-1 ┊ DRBD,STORAGE ┊ Unused ┊ StandAlone(kube-2)        ┊ UpToDate ┊ [...]     ┊
┊ pvc-[...]    ┊ kube-2 ┊ DRBD,STORAGE ┊ Unused ┊ StandAlone(kube-1,kube-0) ┊ UpToDate ┊ [...]     ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯

After applying the configuration, you will have a LINSTOR-provisioned PVC in Kubernetes and a new, unused resource in LINSTOR.
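To correlate the two programmatically, you can read the PVC’s volume name with kubectl and pass it to the LINSTOR client as a filter. This is a sketch that assumes the kubectl and linstor commands are both available on the same host:

```shell
# Get the PV name (pvc-<UUID>) that backs the claim:
PV_NAME="$(kubectl get persistentvolumeclaims demo-vol-claim-0 \
  -o jsonpath='{.spec.volumeName}')"

# List only the LINSTOR resource with that name:
linstor resource list --resources "$PV_NAME"
```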

Next, define and deploy a pod that will use the LINSTOR-backed PVC:

cat << EOF > demo-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-0
  namespace: default
spec:
  containers:
  - name: demo-pod-0
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do sleep 1000s; done"]
    volumeMounts:
      - mountPath: "/data"
        name: demo-vol
  volumes:
    - name: demo-vol
      persistentVolumeClaim:
        claimName: demo-vol-claim-0
EOF

Apply the configuration to your Kubernetes cluster:

kubectl apply -f demo-pod.yaml

Verify that the pod is up and running:

kubectl get pods -o wide

Output should show a running pod:

NAME         READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
demo-pod-0   1/1     Running   0          58s   172.16.79.130   kube-2   <none>           <none>

Verify that the LINSTOR resource backing the PVC is now in use:

linstor resource list

Output should show that the LINSTOR resource backing the pod’s PVC is InUse on the node shown in the kubectl get pods output:

╭──────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node   ┊ Layers       ┊ Usage  ┊ Conns                     ┊    State ┊ CreatedOn ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pvc-[...]    ┊ kube-0 ┊ DRBD,STORAGE ┊ Unused ┊ StandAlone(kube-2)        ┊ UpToDate ┊ [...]     ┊
┊ pvc-[...]    ┊ kube-1 ┊ DRBD,STORAGE ┊ Unused ┊ StandAlone(kube-2)        ┊ UpToDate ┊ [...]     ┊
┊ pvc-[...]    ┊ kube-2 ┊ DRBD,STORAGE ┊ InUse  ┊ StandAlone(kube-1,kube-0) ┊ UpToDate ┊ [...]     ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯

Conclusion

Hopefully, you’ve found this information helpful in your quest to deploy LINSTOR outside of Kubernetes. Again, if you’re new to LINSTOR in Kubernetes, LINBIT recommends starting your testing by using the LINSTOR Operator for Kubernetes. If you have any questions about either approach, the LINBIT team is always available to talk with users and customers. All you have to do is reach out.

Changelog

2026-04-10:

  • Technical review and update.
  • Added mentions of alternative disaggregated deployment options for integrating LINSTOR outside of Kubernetes.
  • Replaced screen grabs with commands and command output in code blocks.
  • 4G PVC storage size changed to 4Gi to align with binary storage size units used by DRBD.
  • Updated and verified links and URLs.
  • Made language touch-ups.

 

2024-08-07:

  • Technical review and update.

 

2022-06-02:

  • Originally published article.

Matt Kereczman

Matt Kereczman is a Solutions Architect at LINBIT with a long history of Linux system administration and Linux systems engineering. Matt is a cornerstone of LINBIT’s technical team, and plays an important role in making LINBIT’s and LINBIT’s customers’ solutions great. Matt was President of the GNU/Linux Club at Northampton Area Community College prior to graduating with honors from Pennsylvania College of Technology with a BS in Information Security. Open source software and hardware are at the core of most of Matt’s hobbies.
