DRBD 9.2.16 and 9.3 are released. While 9.2.16 is a boring bugfix release, DRBD 9.3 brings exciting new features.
The newest DRBD release supports the out-of-sync bitmap at different granularities. For the last 25 years, the granularity was fixed: one bit in that bitmap represented 4096 bytes of storage. That bitmap exists in the metadata area on disk and in RAM. We have two motivations for allowing a larger bitmap block size, e.g., 8KiB or 16KiB. First, the reduced disk space consumption lets you fit bitmaps for more peers in the same space. This can get you unstuck if you use internal metadata on a fixed-size backing device with a filesystem you cannot shrink (e.g., XFS) and want to add more nodes.
Second, for users who run DRBD on large storage devices across large fleets, the reduced memory consumption might be interesting, as it frees RAM across the fleet.
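To get a feel for the savings, the bitmap size follows directly from the one-bit-per-block layout: a back-of-the-envelope sketch (my own illustration, not DRBD code; the exact on-disk layout includes additional metadata and alignment not modeled here):

```python
def bitmap_size_bytes(device_size_bytes: int, block_size_bytes: int = 4096) -> int:
    """Approximate per-peer bitmap size: one bit per block of backing storage."""
    blocks = -(-device_size_bytes // block_size_bytes)  # ceiling division
    return -(-blocks // 8)  # bits rounded up to whole bytes

TiB = 1 << 40
KiB = 1 << 10
for bs in (4 * KiB, 8 * KiB, 16 * KiB):
    size = bitmap_size_bytes(1 * TiB, bs)
    print(f"{bs // KiB:>2} KiB blocks -> {size // (1 << 20)} MiB bitmap per peer")
```

For a 1 TiB device, this works out to roughly 32 MiB of bitmap per peer at 4 KiB granularity, 16 MiB at 8 KiB, and 8 MiB at 16 KiB, both on disk and in RAM.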
To our pleasant surprise, we had reserved a field for the bitmap block size in the metadata format many years ago. However, it took until today to finally implement the functionality. In addition, DRBD 9.3 supports connecting nodes that use different bitmap block sizes.
The other new feature is resync without replication, which is now the default resync mode in DRBD 9.3. For most workloads, it shortens resync time and reduces I/O latency during a resync.
Both new releases are compatible with the recently released Linux 6.17. Proxmox VE has already adopted that kernel, putting us under a bit of pressure to release a compatible DRBD.
For readers who would like to see the impact of our software, we published a new case study on our website, showing how LINBIT SDS helped triple operations for RTM in the financial sector.
In previous newsletters, I have discussed the significant update for Kubernetes users with LINSTOR Operator v2.10.0, namely the ability for them to create ReadWriteMany (RWX) Persistent Volume Claims (PVCs) directly using the LINSTOR CSI driver. Since that update, the team has published a blog post offering an overview, titled ‘Creating ReadWriteMany Persistent Volumes in Kubernetes by using LINSTOR.’
Other recent additions to the LINBIT blog include ‘Create a Highly Available iSCSI Target Using a LINSTOR Gateway Cluster’ which demonstrates how to set up a highly available iSCSI target by using LINSTOR Gateway, ‘Controlling LINSTOR Replica Placement During Node Evacuation,’ and ‘Securing the LINSTOR Encryption Passphrase By Using systemd-creds and TPM 2.0.’
In addition to the blog, we have also published two new articles on the LINBIT Knowledge Base. ‘The Tradeoff Between Striping Disks and Increasing the Failure Domain When Using LINSTOR’ focuses on getting the maximum performance from the available hardware in your deployments, and ‘Understanding and Mitigating DRBD Two-phase Commit Timeout Messages’ for our DRBD users.
Regarding software updates, we recently released python-linstor/linstor-client 1.27.0, LINSTOR Operator 2.10.1, a small bugfix release for our Kubernetes Operator, and linstor-proxmox v8.1.4.