Highly Available NFS Storage using LINBIT HA for Kubernetes

While NFS (Network File System) isn’t a new or exciting technology, it is one of the more common storage solutions for Kubernetes clusters that require persistent storage. In particular, applications or pods that require shared, network-based, “read write many” (RWX) access to storage in Kubernetes can benefit from this familiar storage solution.

LINBIT® builds highly available NFS clusters for clients using commercial off-the-shelf (COTS) hardware and open source software. This means organizations can avoid purchasing expensive NAS or SAN hardware, which is usually bundled with proprietary software. An HA NFS cluster can also be shared between many platforms on the same network. This gives you greater flexibility compared, for example, to a hyperconverged storage solution that might be confined to providing storage to the systems or platform it’s installed on. This makes an HA NFS cluster a cost-effective, flexible, and relatively simple storage solution for Kubernetes deployments.

How LINBIT Supports HA NFS Clustering For Kubernetes

LINBIT’s software supports HA NFS clustering in many ways, with DRBD® existing at the core of almost all of the projects and products LINBIT supports and develops. DRBD is the block storage driver that enables synchronous replication of data between cluster nodes. DRBD ensures that you have multiple identical copies of your data on more than one cluster node, enabling HA at the storage level.

LINBIT VSAN combines DRBD®, LINSTOR®, and a web front end into a Linux distribution that can be used to create HA NFS exports (and more). The LINBIT VSAN appliance can be installed on hardware or inside of virtual machines, and gives system admins and engineers an “easy mode” for creating HA storage clusters which can be used as Kubernetes persistent storage.

DRBD Reactor is a LINBIT-developed cluster resource manager (CRM) that can make almost any service that uses a DRBD device for its storage highly available. DRBD Reactor watches the quorum state of the DRBD devices it manages, and can promote a DRBD device and start the services that depend on it when necessary to keep those services running in the cluster. HA NFS is one of the common services that you can manage with DRBD Reactor.
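As a rough sketch, a DRBD Reactor promoter configuration for an HA NFS service could look something like the following. The DRBD resource name (nfs), the systemd mount unit (drbd-exports.mount), and the file name are assumptions for illustration only; refer to the DRBD Reactor documentation for the exact syntax that applies to your cluster.

# cat << EOF > /etc/drbd-reactor.d/nfs.toml
[[promoter]]
[promoter.resources.nfs]
# Services started, in order, on the node where DRBD Reactor promotes the "nfs" DRBD resource
start = ["drbd-exports.mount", "nfs-server.service"]
EOF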

Finally, LINBIT also supports Pacemaker, the community-developed CRM for building HA clusters. In a Pacemaker cluster, DRBD replicates the data, while Pacemaker manages where services run in the cluster. Pacemaker is highly configurable, and because of this it has a steeper learning curve for configuration and management. LINBIT’s DRBD Reactor is a simpler solution, but sometimes the need for complex ordering and colocation constraints makes Pacemaker a better fit for building HA NFS clusters for Kubernetes.

With any of the HA NFS solutions listed above, you’ll end up with HA NFS exports accessible over a virtual IP (VIP) address. This address can “float” around the cluster, and will be assigned to whichever node your NFS export is active on. In the Kubernetes YAML manifests below, the VIP configured for our HA NFS cluster will be 192.168.222.25/24, and the root of the exported file system will be /drbd/exports/, which is where the DRBD device is mounted on the NFS server. You can either create additional exports, or subdirectories within an already exported directory, for each pod that needs NFS storage. Because NFS is capable of RWX access modes, both of the example applications below will use the same /drbd/exports/nfs-app subdirectory for their persistent volumes.
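For example, to create the subdirectory used by the examples below, run the following on the NFS cluster node where the DRBD device is currently mounted. Depending on your export’s squash and permission settings, you might also need to adjust ownership or permissions so that pods can write to it:

# mkdir -p /drbd/exports/nfs-app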

Using NFS Storage in Kubernetes

There are two ways for pods in Kubernetes to use NFS-exported file systems. You can either specify the NFS share directly within the pod manifest, or define the NFS share as a PersistentVolume (PV) object that can be managed within Kubernetes.

Configuring NFS shares as PVs within Kubernetes is the better practice because it allows greater flexibility: decoupling the storage from the pod means that you can more easily reuse or share the storage with another pod. However, for simple single-use test pods, defining the storage in the pod manifest is perfectly acceptable. This blog post describes both ways of configuring NFS shares within Kubernetes.

For both “in-pod” and “in-PV” use of NFS exports to work, you’ll need the NFS client utilities installed on your Kubernetes worker nodes so that the kubelet can mount the NFS exports:

For DEB-based Linux distributions:

# apt install -y nfs-common

For RPM-based Linux distributions:

# dnf install -y nfs-utils
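With the client utilities installed, you can verify from a worker node that the HA NFS export is reachable before involving Kubernetes. If your NFS server exports over NFSv4 only, showmount might not return anything, in which case the test mount is the more reliable check:

# showmount -e 192.168.222.25
# mount -t nfs 192.168.222.25:/drbd/exports /mnt && umount /mnt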

Defining NFS Exports in Pod Manifest

Defining NFS exports directly in the pod manifest is simple. The following command creates a pod manifest that mounts the exported file system for its persistent storage:

# cat << EOF > nfs-in-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pod
spec:
  containers:
    - name: nfs-app
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /data
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo \$(hostname; date) >> /data/test-file.txt; sleep 30s; done"]
  volumes:
    - name: nfs-volume
      nfs:
        server: 192.168.222.25
        path: /drbd/exports/nfs-app
EOF

đź“ť NOTE: You can’t specify NFS mount options in a pod spec. You can either set mount options server-side or use /etc/nfsmount.conf.

In the pod manifest created above, you can see the spec.volumes.0.nfs.server is set to the IP of the HA NFS cluster’s VIP address. Also, the spec.volumes.0.nfs.path is set to the path of the exported subdirectory where the pod will store its data.

Creating the pod in Kubernetes from the pod manifest is a single command:

# kubectl apply -f nfs-in-pod.yaml

Shortly after creating the pod in Kubernetes, you should see data being written to the test file stored on the HA NFS export:

# kubectl exec nfs-in-a-pod -- cat /data/test-file.txt
nfs-in-a-pod Tue Jun 25 19:21:35 UTC 2024
nfs-in-a-pod Tue Jun 25 19:22:05 UTC 2024
nfs-in-a-pod Tue Jun 25 19:22:35 UTC 2024
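You can also confirm that this data lands on the HA NFS cluster itself by reading the same file on the currently active NFS node:

# cat /drbd/exports/nfs-app/test-file.txt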

Defining NFS Exports in PV Manifest

Defining NFS exports as PVs is the preferred approach to managing storage in Kubernetes. There are a few more steps involved, but the granularity allows for greater flexibility and abstraction between the application and its storage.

Define the NFS exported directory in a PV manifest by entering the following command:

# cat << EOF > nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.222.25
    path: /drbd/exports/nfs-app
EOF

⚠️ WARNING: You can set NFS mount options when mounting NFS volumes through PersistentVolumes by adding a spec.mountOptions list to your PV manifest. However, Kubernetes does not validate mount options, so incorrect options can cause the mount to fail or lead to application-level errors.
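For example, to request NFS version 4.2 and hard mounts for this volume, you could add a mountOptions list under spec in the nfs-pv.yaml manifest above. The options shown here are only an illustration; use whatever mount options suit your environment:

spec:
  mountOptions:
    - nfsvers=4.2
    - hard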

The options supported by PV objects in Kubernetes give you greater flexibility and control:

  • spec.capacity.storage sets the size of the volume, which Kubernetes compares against persistent volume claims (PVCs) when binding.
  • spec.accessModes is a list of supported access modes for the volume.
    • ReadOnlyMany: can be mounted as read-only by many nodes.
    • ReadWriteOnce: can be mounted as read/write by a single node.
    • ReadWriteMany: can be mounted as read/write by many nodes.
    • ReadWriteOncePod: can be mounted as read/write by a single pod.
  • spec.persistentVolumeReclaimPolicy dictates what happens to the data in the volume once its PVC is deleted. You can also change this policy on an existing PV, as shown after this list.
    • Retain: manual reclamation
    • Recycle: basic scrub (rm -rf /path/*); deprecated
    • Delete: delete the volume
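If you need to change the reclaim policy on an existing PV, you can patch it rather than recreate it. For example:

# kubectl patch pv nfs-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'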

⚠️ WARNING: The spec.nfs.server and spec.nfs.path keys in the PV manifest are set to the same values used in the previous nfs-in-a-pod example. You should avoid pointing different volumes at the same export path unless you intend to share data between the different pods, as we will do in this example.

Create the PV in Kubernetes by entering the following command:

# kubectl apply -f nfs-pv.yaml
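You can verify the result by listing the PV. Until a matching claim binds it, the PV’s status should show as Available:

# kubectl get pv nfs-pv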

With the PV created, users can now create a claim for a PV that satisfies their request; Kubernetes binds the claim to a PV whose capacity and access modes satisfy it. The following command creates a PVC manifest that matches the characteristics of the PV created in the previous steps:

# cat << EOF > nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-app
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
EOF

Create the PVC in Kubernetes and check its status by entering the commands below:

# kubectl apply -f nfs-pvc.yaml
# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-app   Bound    nfs-pv   4Gi        RWX                           120m

The output shows that the PVC is now bound to the PV. Finally, create the pod manifest specifying the nfs-app PVC as the pod’s persistent storage volume:

# cat << EOF > nfs-in-pv.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pv
spec:
  containers:
    - name: nfs-app
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /data
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo \$(hostname; date) >> /data/test-file.txt; sleep 30s; done"]
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: nfs-app
EOF

Create the pod within Kubernetes:

# kubectl apply -f nfs-in-pv.yaml

With both pods running, you can verify that both of them are appending their $(hostname; date) output to the same test file:

# kubectl exec nfs-in-a-pod -- tail /data/test-file.txt
nfs-in-a-pv Tue Jun 25 21:00:05 UTC 2024
nfs-in-a-pod Tue Jun 25 21:00:08 UTC 2024
nfs-in-a-pv Tue Jun 25 21:00:35 UTC 2024
nfs-in-a-pod Tue Jun 25 21:00:38 UTC 2024
nfs-in-a-pv Tue Jun 25 21:01:05 UTC 2024
nfs-in-a-pod Tue Jun 25 21:01:08 UTC 2024
nfs-in-a-pv Tue Jun 25 21:01:35 UTC 2024
nfs-in-a-pod Tue Jun 25 21:01:38 UTC 2024
nfs-in-a-pv Tue Jun 25 21:02:05 UTC 2024
nfs-in-a-pod Tue Jun 25 21:02:08 UTC 2024

While the outcome is the same for both methods, the flexibility gained by using a PV and PVC makes it the better choice. Because you created the PV and PVC with ReadWriteMany as an access mode, you can create any number of pods that reference nfs-app as their persistentVolumeClaim.claimName, without having to specify any NFS server or export details in the pod spec.
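For example, here is a minimal sketch of a Deployment, using a hypothetical name nfs-shared-writers, that runs three replicas which all write to the same NFS-backed volume through the existing nfs-app claim:

# cat << EOF > nfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-shared-writers
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nfs-shared-writers
  template:
    metadata:
      labels:
        app: nfs-shared-writers
    spec:
      containers:
        - name: nfs-app
          image: alpine
          volumeMounts:
            - name: nfs-volume
              mountPath: /data
          command: ["/bin/sh"]
          args: ["-c", "while true; do echo \$(hostname; date) >> /data/test-file.txt; sleep 30s; done"]
      volumes:
        - name: nfs-volume
          persistentVolumeClaim:
            claimName: nfs-app
EOF
# kubectl apply -f nfs-deployment.yaml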

Final Thoughts

LINBIT develops a purpose-built LINBIT SDS operator and CSI driver for Kubernetes, but sometimes an application or DevOps team requires “good old” NFS-backed storage. NFS on its own does not offer high availability clustering, and for that LINBIT’s software and support have you covered.

With an HA NFS cluster built and supported by LINBIT, an NFS node failure is completely transparent to the applications in Kubernetes storing their data on it. Failover of NFS services from one cluster node to another happens in only a few seconds, which to the Kubernetes pods resembles a hiccup on the network. If you have questions about any of the ways that LINBIT supports HA NFS clusters for Kubernetes, reach out and ask. Similar to the clusters we build for our customers, we’re highly available.

Matt Kereczman

Matt Kereczman is a Solutions Architect at LINBIT with a long history of Linux system administration and Linux systems engineering. Matt is a cornerstone of LINBIT's technical team, and plays an important role in making LINBIT's and LINBIT's customers' solutions great. Matt was President of the GNU/Linux Club at Northampton Area Community College prior to graduating with honors from Pennsylvania College of Technology with a BS in Information Security. Open source software and hardware are at the core of most of Matt's hobbies.
