While NFS (Network File System) isn’t a new or exciting technology, it is one of the more common storage solutions for Kubernetes clusters that require persistent storage. In particular, applications or pods that require shared, network-based, “read write many” (RWX) access to storage in Kubernetes can benefit from this familiar storage solution.
LINBIT® supports clients who build highly available NFS clusters by using commercial off-the-shelf (COTS) hardware and open source software. This means organizations can avoid purchasing expensive NAS or SAN hardware, which is usually bundled with proprietary software. High availability (HA) NFS clusters can also be shared between many platforms on the same network. This gives you greater flexibility, for example, when compared to a hyperconverged storage solution that might be confined to providing storage to the systems or platform it’s installed on. This makes an HA NFS cluster a cost-effective, flexible, and relatively simple storage solution for Kubernetes deployments.
How LINBIT supports HA NFS clustering for Kubernetes
LINBIT software supports HA NFS clustering in many ways, with DRBD® existing at the core of almost all of the projects and products LINBIT supports and develops. DRBD is the block storage driver that enables synchronous replication of data between cluster nodes. DRBD ensures that you have multiple identical copies of your data on more than one cluster node, enabling HA at the storage level.
Kubernetes-native HA NFS with LINSTOR
Since LINSTOR® Operator v2.10.0 for Kubernetes, users can create ReadWriteMany (RWX) Persistent Volume Claims (PVCs) directly by using the LINSTOR CSI driver. This feature enables multiple pods to mount and write to the same LINSTOR-backed persistent volume simultaneously. Behind the scenes, the LINSTOR Operator uses NFS-Ganesha and DRBD Reactor to create RWX PVCs.
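The sketch that follows is illustrative rather than copied from the LINSTOR documentation: it assumes a LINSTOR Operator v2.10.0+ deployment and an existing LINSTOR storage pool (the pool name lvm-thin is a placeholder), and the StorageClass parameter names should be verified against the LINSTOR User Guide for your version:
cat << EOF > linstor-rwx.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-rwx
provisioner: linstor.csi.linbit.com
parameters:
  # Placeholder pool name and replica count - adjust for your cluster
  linstor.csi.linbit.com/storagePool: lvm-thin
  linstor.csi.linbit.com/placementCount: "2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-data
spec:
  storageClassName: linstor-rwx
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
EOF
kubectl apply -f linstor-rwx.yaml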
If you want to learn more about using LINSTOR to manage HA NFS natively in Kubernetes, including step-by-step getting started instructions, you can read “Creating ReadWriteMany Persistent Volumes in Kubernetes by using LINSTOR” on the LINBIT blog.
Disaggregated HA NFS in Kubernetes
In most cases, using the LINSTOR Operator to manage HA NFS exports in Kubernetes is recommended. However, there are cases where an administrator might need other options: for example, when legacy, non-Kubernetes workloads also need to access the HA NFS share, when the NFS server needs to be reachable from outside the Kubernetes cluster, or when an administrator already manages HA with LINBIT VSAN, DRBD Reactor, or Pacemaker and wants to build on that.
LINBIT VSAN combines DRBD®, LINSTOR®, and a web front end into a Linux distribution that can be used to create HA NFS exports (and more). The LINBIT VSAN appliance can be installed on hardware or inside of virtual machines, and gives system admins and engineers an “easy mode” for creating HA storage clusters which can be used as Kubernetes persistent storage.
DRBD Reactor, used together with its promoter plugin, is a LINBIT-developed cluster resource manager (CRM) that can make almost any service that uses a DRBD device for its storage highly available. DRBD Reactor watches the quorum state of the DRBD devices it manages, and its promoter plugin can promote a DRBD device and start services that depend on that DRBD device when necessary to keep that service running in the cluster. HA NFS is one of the common services that you can manage with DRBD Reactor.
Finally, LINBIT also supports Pacemaker, the community developed CRM for building HA clusters. In a Pacemaker cluster, DRBD replicates the data, while Pacemaker manages where services are running in the cluster. Pacemaker is highly configurable and because of this has a higher learning curve for configuration and management. DRBD Reactor is a simpler solution, but sometimes the need for complex ordering and collocation makes Pacemaker a better fit for building HA NFS clusters for Kubernetes.
With any of the disaggregated HA NFS solutions listed above, you’ll end up with HA NFS exports accessible over a virtual IP (VIP) address. This address is able to “float” around the cluster, and will be assigned wherever your NFS export is active. In the Kubernetes YAML manifests below, the VIP configured from our HA NFS cluster will be 192.168.222.25/24, and the exported file system root export will be /drbd/exports/, which is where the DRBD device is mounted on the NFS server. You can either create additional exports, or subdirectories within an already exported directory, for each pod that needs NFS storage. Because NFS is capable of RWX access modes, both of the example applications below will use the same /drbd/exports/nfs-app subdirectory for their persistent volumes.
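For example, before deploying the example applications below, you might create the shared subdirectory on whichever node currently holds the active NFS export. This is a minimal sketch; the permissions shown are an assumption and should be adjusted to match your pods’ user and group IDs:
# Run on the NFS server node where the DRBD-backed file system is mounted
mkdir -p /drbd/exports/nfs-app
chmod 0777 /drbd/exports/nfs-app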
Comparing HA NFS in Kubernetes approaches
The following table is an at-a-glance comparison of the two different approaches to using HA NFS for Kubernetes with LINBIT software:
| | Kubernetes-native RWX with LINSTOR (Operator v2.10.0+) | Disaggregated HA NFS cluster |
|---|---|---|
| Setup complexity | Low (PVC with ReadWriteMany) | Higher (NFS server + VIP address) |
| Kubernetes-native | Yes | Partial |
| External NFS access | No | Yes |
| DRBD Reactor configuration | Automatic | Manual |
| Legacy or multi-cluster support | No | Yes |
For legacy, multi-cluster, or niche use cases, the rest of this article describes how you can use a disaggregated HA NFS export in Kubernetes. Again, refer to the “Creating ReadWriteMany Persistent Volumes in Kubernetes by using LINSTOR” article for a Kubernetes-native approach.
Using NFS Storage in Kubernetes
There are two different ways for pods in Kubernetes to use NFS exported file systems. You can either specify the NFS share within the pod manifest, or define the NFS share as a PersistentVolume (PV) object that can be managed within Kubernetes.
It is a better practice to configure NFS shares as PVs within Kubernetes. This allows greater flexibility. Decoupling the storage from the pod means that you can more easily reuse or share the storage with another pod. However, for simple single-use test pods, defining the storage in the pod manifest is perfectly acceptable. This blog post will describe both ways of configuring NFS shares within Kubernetes.
For both “in-pod” and “in-PV” NFS export use to work, you’ll need to have NFS client utilities installed on your Kubernetes worker nodes, so that the kubelet on each node can mount the NFS exports:
For DEB-based Linux distributions:
apt install -y nfs-common
For RPM-based Linux distributions that use dnf:
dnf install -y nfs-utils
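As an optional sanity check (not a required step from the original setup), you can verify from a worker node that the NFS exports are visible over the cluster’s VIP address. Note that showmount relies on the NFSv3 MOUNT protocol, so it might not return results for NFSv4-only servers:
# List the exports offered by the HA NFS cluster's VIP address
showmount -e 192.168.222.25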
Defining NFS exports in pod manifest
Defining NFS exports directly in the pod manifest is simple. The following command creates a pod manifest that mounts the exported file system for its persistent storage:
cat << EOF > nfs-in-pod.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pod
spec:
  containers:
    - name: nfs-app
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /data
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo \$(hostname; date) >> /data/test-file.txt; sleep 30s; done"]
  volumes:
    - name: nfs-volume
      nfs:
        server: 192.168.222.25
        path: /drbd/exports/nfs-app
EOF
📝 NOTE: You can’t specify NFS mount options in a pod spec. You can either set mount options server-side or use /etc/nfsmount.conf.
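For example, a minimal sketch of setting client-side defaults through /etc/nfsmount.conf on each worker node follows. The section and option names follow the nfsmount.conf(5) format but are assumptions here; verify them for your distribution and NFS server configuration:
# Append client-side defaults for mounts from the HA NFS cluster's VIP address
cat << EOF >> /etc/nfsmount.conf
[ Server "192.168.222.25" ]
Nfsvers=4.2
Hard=True
EOF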
In the pod manifest created above, you can see the spec.volumes.0.nfs.server is set to the IP of the HA NFS cluster’s VIP address. Also, the spec.volumes.0.nfs.path is set to the path of the exported subdirectory where the pod will store its data.
Creating the pod in Kubernetes from the pod manifest is a single command:
kubectl apply -f nfs-in-pod.yaml
Shortly after creating the pod in Kubernetes, verify that the pod is writing data to the test file stored on the HA NFS export:
kubectl exec nfs-in-a-pod -- cat /data/test-file.txt
Output should be similar to this:
nfs-in-a-pod Tue Mar 25 19:21:35 UTC 2026
nfs-in-a-pod Tue Mar 25 19:22:05 UTC 2026
nfs-in-a-pod Tue Mar 25 19:22:35 UTC 2026
Defining NFS exports in PV manifest
Defining NFS exports as PVs is the preferred approach to managing storage in Kubernetes. There are a few more steps involved, but the granularity allows for greater flexibility and abstraction between the application and its storage.
Define the NFS exported directory in a PV manifest by entering the following command:
cat << EOF > nfs-pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.222.25
    path: /drbd/exports/nfs-app
EOF
⚠️ WARNING: You can mount NFS volumes by using PersistentVolumes while setting NFS mount options. Setting the spec.mountOptions option in your PV manifest will achieve this. However, mount options are not validated by Kubernetes. This could lead to application-level errors if you do not use correct mount options. Verify mount options before deploying an HA NFS solution into production.
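As an illustration only (not part of the original example), mount options could be added to the nfs-pv manifest like this; the specific options shown are assumptions and must be verified against your NFS server before production use:
# Excerpt of the nfs-pv spec with NFS mount options added (illustrative values)
spec:
  mountOptions:
    - hard
    - nfsvers=4.2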
You can see the greater flexibility and control you have in the options supported by PV objects in Kubernetes.
- spec.capacity.storage sets the size of the volume, which is compared when a user creates a persistent volume claim (PVC).
- spec.accessModes is a list of supported access modes for the volume:
  - ReadOnlyMany: can be mounted as read-only by many nodes.
  - ReadWriteOnce: can be mounted as read/write by a single node.
  - ReadWriteMany: can be mounted as read/write by many nodes.
  - ReadWriteOncePod: can be mounted as read/write by a single pod.
- spec.persistentVolumeReclaimPolicy dictates what happens to the data in the volume once a PVC is deleted:
  - Retain: manual reclamation
  - Recycle: basic scrub (rm -rf /path/*)
  - Delete: delete the volume
⚠️ WARNING: The spec.nfs.server and spec.nfs.path keys in the PV manifest are set to the same values as used in the previous nfs-in-a-pod pod example. This is something you should avoid doing unless you intend to share data between the different pods, as we will do in this example.
Create the PV in Kubernetes by entering the following command:
kubectl apply -f nfs-pv.yaml
With the PV created, users can now request a claim on a PV that satisfies the request. The following command creates a PVC that matches the characteristics of the PV created in the previous steps:
cat << EOF > nfs-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-app
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
EOF
Create the PVC in Kubernetes and check its status by entering the commands below:
kubectl apply -f nfs-pvc.yaml
kubectl get pvc
The output shows that the PVC is now bound to the PV:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-app Bound nfs-pv 4Gi RWX 120m
Finally, create the pod manifest specifying the nfs-app PVC as the pod’s persistent storage volume:
cat << EOF > nfs-in-pv.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pv
spec:
  containers:
    - name: nfs-app
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /data
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo \$(hostname; date) >> /data/test-file.txt; sleep 30s; done"]
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: nfs-app
EOF
Create the pod within Kubernetes:
kubectl apply -f nfs-in-pv.yaml
With both pods running, output from a kubectl exec nfs-in-a-pod -- tail /data/test-file.txt command shows that both pods are appending their $(hostname; date) output to the same test file:
nfs-in-a-pv Tue Mar 25 21:00:05 UTC 2026
nfs-in-a-pod Tue Mar 25 21:00:08 UTC 2026
nfs-in-a-pv Tue Mar 25 21:00:35 UTC 2026
nfs-in-a-pod Tue Mar 25 21:00:38 UTC 2026
nfs-in-a-pv Tue Mar 25 21:01:05 UTC 2026
nfs-in-a-pod Tue Mar 25 21:01:08 UTC 2026
nfs-in-a-pv Tue Mar 25 21:01:35 UTC 2026
nfs-in-a-pod Tue Mar 25 21:01:38 UTC 2026
nfs-in-a-pv Tue Mar 25 21:02:05 UTC 2026
nfs-in-a-pod Tue Mar 25 21:02:08 UTC 2026
While the outcome is the same for both methods, the flexibility gained by using a PV and PVC makes them the better choice. Because you created the PV and PVC with ReadWriteMany as an accessMode, you can create any number of pods that reference nfs-app as their persistentVolumeClaim.claimName without having to specify any of the NFS server or export specifics in the pod spec.
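As a sketch of what this enables (the Deployment name and replica count below are arbitrary choices, not part of the original example), several replicas of one workload can write to the same RWX claim concurrently:
cat << EOF > nfs-writers.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-writers
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nfs-writers
  template:
    metadata:
      labels:
        app: nfs-writers
    spec:
      containers:
        - name: nfs-app
          image: alpine
          command: ["/bin/sh"]
          args: ["-c", "while true; do echo \$(hostname; date) >> /data/test-file.txt; sleep 30s; done"]
          volumeMounts:
            - name: nfs-volume
              mountPath: /data
      volumes:
        - name: nfs-volume
          persistentVolumeClaim:
            claimName: nfs-app
EOF
kubectl apply -f nfs-writers.yaml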
Conclusion
LINBIT develops a purpose-built LINSTOR Operator and CSI driver for Kubernetes, but sometimes an application or DevOps team requires “good old” NFS-backed storage. However, NFS on its own does not offer high availability clustering, and for that, LINBIT software and support have you covered.
With an HA NFS cluster built and supported by LINBIT, an NFS node failure is completely transparent to the applications in Kubernetes storing their data on it. Failover of NFS services from one cluster node to another happens in only a few seconds, which to the Kubernetes pods resembles a hiccup on the network. If you have questions about any of the ways that LINBIT supports HA NFS clusters for Kubernetes, reach out and ask. Similar to the clusters we build for our customers, we’re highly available.
Changelog
2026-03-31:
- Updated article with Kubernetes-native RWX support available in the LINSTOR Operator since v2.10.0.
- Made minor language and formatting touch-ups.
2024-06-27:
- Originally published article.