Upgrading DRBD Has Never Been Easier

As DRBD® development continues to move forward with the new and exciting 9.2.0 release, it has never been easier to upgrade to the latest version of DRBD. DRBD’s wire-protocol compatibility has recently expanded, enabling replication between DRBD 8.4 and the latest versions of DRBD 9.

Upgrading From DRBD 9

Upgrading from DRBD 9 to a newer version of DRBD 9 has always been a fairly easy process. All DRBD 9 versions use the same v09 metadata format, so there are no additional metadata steps to perform during upgrades.
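
In practice, a rolling upgrade within DRBD 9 is a package upgrade plus a kernel module reload, performed one node at a time. The sketch below is only an illustration: it assumes a hypothetical node named node-a, a single resource named r0, and Debian/Ubuntu-style package names (adjust for your distribution and repositories):

root@node-a:~# drbdadm down r0                    # take DRBD down on this node only; the peer keeps serving data
root@node-a:~# apt install drbd-utils drbd-dkms   # package names vary by distribution
root@node-a:~# rmmod drbd_transport_tcp drbd      # unload the old modules: loaded transports first, then the core module
root@node-a:~# modprobe drbd                      # load the newly installed module
root@node-a:~# drbdadm up r0                      # reconnect; blocks written by the peer in the meantime are resynchronized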

For full upgrade instructions and considerations, see our user’s guide section on upgrading here.

Upgrading From DRBD 8.4

There are still many deployments using DRBD 8.4 even though DRBD 9 was initially released back in 2015. As time passes, and especially once DRBD 9 is upstreamed into the Linux kernel, more and more clusters will move away from DRBD 8.4.

For a seamless migration with minimal downtime, upgrading from DRBD 8.4 to DRBD 9 previously required upgrading specifically to the 9.0 branch, because replicating data between version 8.4 and versions 9.1 or later was not possible. That restriction was removed with the release of DRBD 9.1.8, which introduced expanded wire-protocol compatibility between differing versions. Upgrading the entire cluster to a newer branch was always possible, but replication between the nodes could not resume until every node had been upgraded.

Upgrade Scenario – DRBD 8.4.11 to DRBD 9.2.0

As a preliminary step, observe the output from /proc/drbd as well as drbdadm status. The output shows both nodes Connected and UpToDate on DRBD 8.4.11:

root@nfs-0:~# cat /proc/drbd
version: 8.4.11 (api:1/proto:86-101)
srcversion: 47FA85BFAC6D4BE43DE6601
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:8388316 nr:0 dw:0 dr:8388548 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
root@nfs-0:~# drbdadm status r0
r0 role:Primary
  disk:UpToDate
  peer role:Secondary
    replication:Established peer-disk:UpToDate

Begin upgrading one node at a time, starting with the current Secondary. This involves bringing down the DRBD resource r0, unloading the old 8.4.11 kernel module, and installing the new kernel module.
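
The exact commands depend on your distribution and on how DRBD is installed. As a rough sketch, assuming a package-based installation with Debian/Ubuntu-style package names, the steps on the Secondary might look like this:

root@nfs-1:~# drbdadm down r0                     # stop the resource on this node; nfs-0 remains Primary
root@nfs-1:~# rmmod drbd                          # unload the old 8.4.11 kernel module
root@nfs-1:~# apt install drbd-utils drbd-dkms    # install the DRBD 9 kernel module and matching utilities
root@nfs-1:~# modprobe drbd                       # load the new DRBD 9 kernel module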

Once you’ve upgraded the DRBD kernel module on the Secondary, convert the on-disk metadata from the v08 to the v09 format by running drbdadm create-md; the existing metadata is detected and converted in place:

root@nfs-1:~# drbdadm create-md r0
You want me to create a v09 style flexible-size internal meta data block.
There appears to be a v08 flexible-size internal meta data block
already in place on /dev/vdb at byte offset 8589930496

Valid v08 meta-data found, convert to v09?
[need to type 'yes' to confirm] yes

md_offset 8589930496
al_offset 8589897728
bm_offset 8589635584

Found xfs filesystem
     8388060 kB data area apparently used
     8388316 kB left usable by current configuration

Even though it looks like this would place the new meta data into
unused space, you still need to confirm, as this is only a guess.

Do you want to proceed?
[need to type 'yes' to confirm] yes

Writing meta data...
New drbd meta data block successfully created.
success

After confirming the prompts above and converting the metadata on the upgraded node, bring the DRBD resource up so that it connects to the peer node. Running drbdadm status shows that the two nodes are now connected despite the large disparity in DRBD versions between them:

root@nfs-1:~# drbdadm up r0
root@nfs-1:~# drbdadm status
r0 role:Secondary
  disk:UpToDate
  nfs-0 role:Primary
    peer-disk:UpToDate

Checking /proc/drbd output verifies the currently loaded DRBD module version (9.2.0) on the upgraded node:

root@nfs-1:~# cat /proc/drbd
version: 9.2.0 (api:2/proto:86-121)
GIT-hash: 71e60591f3d7ea05034bccef8ae362c17e6aa4d1 build by root@nfs-1, 2022-10-11 18:13:34
Transports (api:18): tcp (9.2.0)

NOTE: In DRBD 9, /proc/drbd no longer contains DRBD resource status information; use drbdadm status instead.

Inspecting the DRBD events in the system logs on the upgraded node, the snippet below confirms that the DRBD resource r0 moves into a Connected state after agreeing on an older protocol version compatible with DRBD 8.4 (protocol version 101, the top of the proto:86-101 range shown in the 8.4.11 /proc/drbd output above):

root@nfs-1:~# journalctl | grep drbd
[...]
Oct 11 18:18:08 nfs-1 kernel: drbd r0 nfs-0: conn( Unconnected -> Connecting )
Oct 11 18:18:08 nfs-1 kernel: drbd r0 nfs-0: Handshake to peer 0 successful: Agreed network protocol version 101
Oct 11 18:18:08 nfs-1 kernel: drbd r0 nfs-0: Feature flags enabled on protocol level: 0xf TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES
Oct 11 18:18:08 nfs-1 kernel: drbd r0 nfs-0: conn( Connecting -> Connected )

At this point in the upgrade process, the newly upgraded node can be promoted to Primary and the cluster resources migrated over to it. The peer node can then be upgraded in the same manner.
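
How the resources move depends on whether a cluster resource manager such as Pacemaker is in use; with Pacemaker, placing the old Primary node into standby accomplishes the switchover. In a manually managed setup, it might look roughly like the sketch below, where the NFS service name and mount point are hypothetical examples:

root@nfs-0:~# systemctl stop nfs-server          # stop services that use the DRBD device (example service name)
root@nfs-0:~# umount /mnt/drbd                   # hypothetical mount point
root@nfs-0:~# drbdadm secondary r0               # demote the old Primary
root@nfs-1:~# drbdadm primary r0                 # promote the newly upgraded node
root@nfs-1:~# mount /dev/drbd0 /mnt/drbd         # remount on the new Primary and restart services
root@nfs-1:~# systemctl start nfs-server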

Moving Forward

Previously, DRBD only allowed connections between two adjacent minor versions, such as 8.4.x and 9.0.x. With DRBD’s newly expanded wire-protocol compatibility, you can upgrade your DRBD 8.4 clusters effortlessly. Additionally, DRBD upgrades that skip over a minor version, such as 9.0.x to 9.2.x, also benefit from the expanded compatibility.

If you’re considering upgrading to DRBD 9, please see our user’s guide for noteworthy changes compared to DRBD 8.4.

Ryan Ronnander

Ryan Ronnander is a Solutions Architect at LINBIT with over 15 years of Linux experience. While studying computer science at Oregon State University he developed a passion for open source software that continues to burn just as brightly today. Outside of tech, he's also an avid guitar player and musician. You'll find him often immersed in various hobbies including, but not limited to, occasional music projects, audio engineering, wrenching on classic cars, and finding balance in the great outdoors.
