We are currently working on merging DRBD-9.3 into upstream Linux. But how did we get here? DRBD-8.4 works with only a single peer; it is software for two-node failover clusters. The DRBD that ships with upstream Linux is equivalent to drbd-8.4, and its most recent release, 8.4.11, is eight years old. Development of drbd-9.x, however, started back in October 2011. It was a rewrite, in place: over the course of that journey, we touched every line of code to introduce the concept that a cluster can have multiple peers.
As we introduced these radical changes, we broke existing APIs, intending to address that later.
Within the last year, we worked through that backlog, upgrading the in-tree upstream kernel DRBD from 8.4 to 9 while keeping it compatible with 20-year-old user-space programs and their assumptions, for example about the content of the virtual file /proc/drbd.
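To illustrate the kind of assumption involved: many old monitoring scripts read /proc/drbd and pick apart the per-device status lines of the classic drbd-8.4 format. The sketch below shows that style of parsing; the sample content and field values are made up for illustration, but the cs:/ro:/ds: line layout is what such tools typically expect.

```python
import re

# Illustrative /proc/drbd content in the classic drbd-8.4 style.
# The values here are invented; only the field layout matters.
SAMPLE = """\
version: 8.4.11 (api:1/proto:86-101)

 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:1048576 nr:0 dw:1048576 dr:4096 al:12 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
"""

def parse_proc_drbd(text: str) -> dict:
    """Extract per-device state the way many legacy scripts do."""
    devices = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\d+): cs:(\S+) ro:(\S+) ds:(\S+)", line)
        if m:
            devices[int(m.group(1))] = {
                "connection": m.group(2),  # cs: connection state
                "roles": m.group(3),       # ro: local/peer role
                "disks": m.group(4),       # ds: local/peer disk state
            }
    return devices

status = parse_proc_drbd(SAMPLE)
print(status[0]["connection"])  # Connected
```

Because scripts like this hard-code the line format, the upstreamed drbd-9 code has to keep emitting something they can still parse.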
The project has reached the stage where we are ready to step out of our cozy office in Vienna, Austria, and begin discussing the planned changes with Linux upstream maintainers. The editor of Phoronix noted this development with the headline ‘DRBD Driver Working To Land ~15 Years Worth Of Changes Into The Linux Kernel’. We are already seeing the first benefits: our code has started moving through the Linux upstream development machinery. Along the way, it gets exposed via the ‘next’ tree, where others review it and run their static analyzers, exotic-architecture build tests, exotic kernel-config tests, and other automated tooling I had never heard of before. Improvements are being rolled back into our code tree.
There is still a ton of work to do. In the most optimistic case, Linux 7.2, which is planned for release in Sept/Oct 2026, could be the first to include drbd-9.
Moving on to the latest LINBIT content, we’ve updated ‘The Benefits of High Availability (HA)’ on the LINBIT blog, with general additions and a particular focus on LINSTOR in Kubernetes.
Still on that topic, we’ve published a new blog post, ‘Highly Available NFS Storage using LINBIT HA for Kubernetes’. With an HA NFS cluster built and supported by LINBIT, an NFS node failure is completely transparent to the Kubernetes applications storing their data on it. Failover of NFS services from one cluster node to another takes only a few seconds, which, to Kubernetes pods, looks like little more than a brief network hiccup.
The 45Drives video ‘DRBD Performance Test: Storage Performance on Stornado F16 All-Flash Servers with LINBIT’ is now live. In this deep-dive technical collaboration, we test the performance of DRBD (Distributed Replicated Block Device) on the Stornado F16 all-flash NVMe servers.
Since I last wrote, we’ve released LINSTOR GUI v2.4.0 and linstor-server 1.33.2.