With the release of the LINSTORⓇ Operator 2.10.0 for Kubernetes, users can create ReadWriteMany (RWX) Persistent Volume Claims (PVCs) directly using the LINSTOR CSI driver. This feature enables multiple pods to mount and write to the same LINSTOR-backed persistent volume simultaneously. RWX support is a hard requirement for applications that need shared storage, and this feature gives LINSTOR users a “native” solution, rather than requiring them to manage NFS pods or similar workarounds.
Creating a PVC with ReadWriteMany access
From a Kubernetes user’s perspective, enabling RWX support is straightforward. Simply create a PVC that requests the ReadWriteMany access mode from a LINSTOR-backed Kubernetes storage class.
spec:
  accessModes:
    - ReadWriteMany
Here’s an example YAML that creates an RWX PVC from a LINSTOR storage class named linstor-csi-lvm-thin-r3 and attaches it to two pods:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-rwx-pvc-0
spec:
  storageClassName: linstor-csi-lvm-thin-r3
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4G
---
kind: Pod
apiVersion: v1
metadata:
  name: demo-rwx-pod-0
  namespace: default
spec:
  containers:
    - name: demo-rwx-pod-0
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "while true; do sleep 1000s; done"]
      volumeMounts:
        - mountPath: "/data"
          name: demo-vol
  volumes:
    - name: demo-vol
      persistentVolumeClaim:
        claimName: demo-rwx-pvc-0
---
kind: Pod
apiVersion: v1
metadata:
  name: demo-rwx-pod-1
  namespace: default
spec:
  containers:
    - name: demo-rwx-pod-1
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "while true; do sleep 1000s; done"]
      volumeMounts:
        - mountPath: "/data"
          name: demo-vol
  volumes:
    - name: demo-vol
      persistentVolumeClaim:
        claimName: demo-rwx-pvc-0
📝 NOTE: The storageClassName specified in this YAML file should be one that specifies a parameters.autoPlace value greater than or equal to 2, in a LINSTOR cluster with three or more nodes, to support high availability (HA) of the RWX volume.
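For reference, a StorageClass that meets this requirement might look similar to the following sketch. The provisioner name is the LINSTOR CSI driver; the storagePool value lvm-thin is an assumption and should be replaced with the name of a storage pool that exists in your LINSTOR cluster:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: linstor-csi-lvm-thin-r3
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "3" # three replicas across three or more nodes for HA
  storagePool: lvm-thin # assumed storage pool name, adjust for your cluster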
Both pods defined in this YAML configuration can read and write to the same /data mount point simultaneously. A basic test of concurrent access is to use one pod to update the date in a file named /data/date.txt while the other pod repeatedly reads the same file:
kubectl exec demo-rwx-pod-0 -- sh -c "sleep 3s; date > /data/date.txt" & \
for i in {0..5}; do kubectl exec -it demo-rwx-pod-1 -- sh -c "cat /data/date.txt"; sleep 1s; done
This would produce output similar to the following, assuming the file has previously been created and populated with a date:
[1] 111996
Thu Oct 16 16:58:33 UTC 2025
Thu Oct 16 16:58:33 UTC 2025
Thu Oct 16 16:58:33 UTC 2025
[1]+ Done kubectl exec demo-rwx-pod-0 -- sh -c "sleep 3s; date > /data/date.txt"
Thu Oct 16 16:58:50 UTC 2025
Thu Oct 16 16:58:50 UTC 2025
Thu Oct 16 16:58:50 UTC 2025
In this example output, notice the content of date.txt is updated by demo-rwx-pod-0 while being read by demo-rwx-pod-1.
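You can also verify that the bound volume was provisioned with the requested access mode by querying the PVC directly:
kubectl get pvc demo-rwx-pvc-0 -o jsonpath='{.status.accessModes}'
Once the volume is bound, this should print ["ReadWriteMany"].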
How LINSTOR RWX support works internally
Under the hood, the RWX feature is implemented by using a new DaemonSet named linstor-csi-nfs-server. This component runs DRBD Reactor internally to make the NFS exports highly available.
❗ IMPORTANT: DRBD Reactor by default relies on DRBDⓇ quorum for making the services it manages HA. RWX volumes should be provisioned from a LINSTOR StorageClass with a parameters.autoPlace value greater than or equal to 2, in a LINSTOR cluster with three or more nodes, to support HA of RWX volumes.
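You can confirm that the DaemonSet pods are running by using the same label that the operator applies to them. This assumes LINSTOR is deployed in the linbit-sds namespace, as in the examples that follow:
kubectl get pods -n linbit-sds -l app.kubernetes.io/component=linstor-csi-nfs-server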
What follows is a somewhat simplified list of what the LINSTOR Operator does when a new RWX PVC is requested:
- The CSI driver generates a DRBD Reactor configuration to mount the DRBD-backed volume on the NFS server node.
- A client recovery directory is set up as a second volume within the DRBD resource, providing client state tracking for each NFS export.
- A dedicated NFS-Ganesha (the user space NFS implementation) instance is launched for that volume.
- A Kubernetes Service is created to expose the NFS export.
- When the volume is attached to a pod, the linstor-csi driver mounts it using NFS (for example, mount -t nfs ...) instead of mounting the DRBD block device directly.
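Because the volume is attached over NFS rather than as a block device, you can observe this from inside one of the demo pods created earlier by checking which file system backs the /data mount:
kubectl exec demo-rwx-pod-0 -- sh -c "mount | grep /data"
The mount entry should report an NFS file system type rather than a local DRBD block device.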
You can inspect what DRBD Reactor is doing by running drbd-reactorctl status in one of the linstor-csi-nfs-server pods. Enter the following command to loop through all linstor-csi-nfs-server pods and run drbd-reactorctl status in each:
for i in \
$(kubectl get pods -n linbit-sds -l "app.kubernetes.io/component"="linstor-csi-nfs-server" -o name | sed 's/^pod\///'); \
do kubectl exec -n linbit-sds -it $i -- sh -c "drbd-reactorctl status"; \
done
The output will look similar to the following example (truncated to show a single active pod’s output):
[..snip..]
/etc/drbd-reactor.d/pvc-853df71b-a838-4e69-87c6-19d03e129f70.toml:
Promoter: Currently active on this node
● drbd-services@pvc\x2d853df71b\x2da838\x2d4e69\x2d87c6\x2d19d03e129f70.target
● ├─ drbd-promote@pvc\x2d853df71b\x2da838\x2d4e69\x2d87c6\x2d19d03e129f70.service
● ├─ prepare-device-links@pvc\x2d853df71b\x2da838\x2d4e69\x2d87c6\x2d19d03e129f70.service
● ├─ clean-nfs-endpoint@pvc\x2d853df71b\x2da838\x2d4e69\x2d87c6\x2d19d03e129f70.service
● ├─ mount-recovery@pvc\x2d853df71b\x2da838\x2d4e69\x2d87c6\x2d19d03e129f70.service
● ├─ mount-export@pvc\x2d853df71b\x2da838\x2d4e69\x2d87c6\x2d19d03e129f70.service
● ├─ growfs@srv-exports-pvc\x2d853df71b\x2da838\x2d4e69\x2d87c6\x2d19d03e129f70.timer
● ├─ chmod@srv-exports-pvc\x2d853df71b\x2da838\x2d4e69\x2d87c6\x2d19d03e129f70.service
● ├─ nfs-ganesha@pvc\x2d853df71b\x2da838\x2d4e69\x2d87c6\x2d19d03e129f70.service
● └─ advertise-nfs-endpoint@pvc\x2d853df71b\x2da838\x2d4e69\x2d87c6\x2d19d03e129f70.service
[..snip..]
📝 NOTE: Almost all other existing LINSTOR concepts such as storage pools, replica counts, and placement policies continue to work as expected with RWX volumes. Any notable exceptions to this can be found in the release announcement on the LINBIT community forums.
Potential use cases for LINSTOR RWX volumes
RWX volumes open the door to a wide range of Kubernetes workloads that rely on shared storage. The following subsections describe some common examples.
Web applications with shared data (uploads, images, and so on)
Applications such as WordPress, Drupal, or Nextcloud often require multiple replicas to access a shared uploads directory. RWX volumes allow all instances to read and write simultaneously without special application-level synchronization.
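As a hypothetical sketch, such an application could be deployed with multiple replicas that all mount the same RWX PVC. The image and claim names here are placeholders:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: demo-webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-webapp
  template:
    metadata:
      labels:
        app: demo-webapp
    spec:
      containers:
        - name: demo-webapp
          image: wordpress # placeholder image
          volumeMounts:
            - mountPath: "/var/www/html/wp-content/uploads" # shared uploads directory
              name: uploads
      volumes:
        - name: uploads
          persistentVolumeClaim:
            claimName: demo-webapp-uploads # an RWX PVC, created as shown earlier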
Shared scratch space for data processing or machine learning (ML) jobs
Batch processing frameworks, ML training jobs, or pipelines that run in parallel on multiple nodes can use a shared RWX volume as a common workspace for temporary data, model checkpoints, or intermediate outputs.
Centralized configuration or cache directories
Multiple services can mount a single RWX volume for shared configuration files or cached assets, simplifying application deployment.
Legacy applications requiring shared POSIX file semantics
For applications that are not cloud-native but expect a shared file system, RWX volumes provide an easy way to lift-and-shift them into a Kubernetes environment without architectural changes.
What else is new in LINSTOR Operator v2.10.0
Version 2.10.0 of the LINSTOR Operator also introduces group snapshot support for LINSTOR PVCs, enabling consistent point-in-time snapshots across multiple related volumes. Stay tuned for an upcoming blog post covering that feature in more detail.
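In the meantime, as a brief and hypothetical preview: group snapshots in Kubernetes are requested through the VolumeGroupSnapshot API from the Kubernetes external-snapshotter project. Assuming a matching group snapshot class exists, and that the API version below matches what is installed in your cluster, a request might look similar to this sketch:
kind: VolumeGroupSnapshot
apiVersion: groupsnapshot.storage.k8s.io/v1beta1 # verify the API version installed in your cluster
metadata:
  name: demo-group-snapshot
spec:
  volumeGroupSnapshotClassName: linstor-group-snapshot-class # assumed class name
  source:
    selector:
      matchLabels:
        app: demo # snapshots all PVCs labeled app=demo together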
Final thoughts
The addition of RWX support marks a significant milestone for LINSTOR in Kubernetes environments. Until version 2.10.0, LINSTOR volumes were limited to ReadWriteOnce (RWO) access (without workarounds), which restricted simultaneous multi-pod access to a single volume. With native RWX support, stateful applications that rely on shared storage, such as web frontends behind a load balancer, CI/CD systems, or shared data processing pipelines, can now be used alongside highly performant RWO volumes without requiring workarounds or a supplemental persistent storage solution.
If you are new to LINSTOR in Kubernetes and would like to test it out by deploying it with the convenience of container images from the official LINBIT container registry, drbd.io, reach out to the LINBIT team to request evaluation access. Alternatively, you can deploy LINSTOR in Kubernetes by using the freely available upstream project, the Piraeus Operator.