DRBD Client (Diskless Mode) Performance

This blog post introduces a new feature and examines the performance of DRBD® Diskless mode. DRBD 9 brings an interesting new capability: DRBD client nodes. With this feature, you can have a DRBD node that holds no local replica of the data. Essentially, the client node simply reads from and writes to the storage of its peer systems over the DRBD connections.
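For illustration, a three-node DRBD 9 resource with two storage nodes and one diskless client could be configured along these lines; the host names, addresses, and device paths here are hypothetical, not taken from our lab setup:

resource r0 {
    device /dev/drbd100;

    on storage-a {
        node-id   0;
        address   192.168.10.1:7789;
        disk      /dev/ram0;
        meta-disk internal;
    }
    on storage-b {
        node-id   1;
        address   192.168.10.2:7789;
        disk      /dev/ram0;
        meta-disk internal;
    }
    # The client keeps no local replica; "disk none" makes it diskless
    on client-c {
        node-id   2;
        address   192.168.10.3:7789;
        disk      none;
    }

    connection-mesh {
        hosts storage-a storage-b client-c;
    }
}

All reads and writes on client-c travel over the DRBD connections to storage-a and storage-b.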

As DRBD’s communication is designed to operate with minimal overhead, we thought it might be interesting to see how it compares to another popular solution for network-attached block storage: iSCSI. For both tests, the backend storage is a highly available DRBD cluster with two synchronous replicas of the data.
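The iSCSI side of the comparison isn't shown in detail here; a minimal sketch using the LIO target via targetcli, exporting the cluster's DRBD device from the current Primary, might look like the following, with placeholder IQNs and addresses:

# On the DRBD Primary: back an iSCSI LUN with the DRBD device
targetcli /backstores/block create name=drbd100 dev=/dev/drbd100
targetcli /iscsi create iqn.2016-06.com.example:drbd100
targetcli /iscsi/iqn.2016-06.com.example:drbd100/tpg1/luns \
    create /backstores/block/drbd100
targetcli /iscsi/iqn.2016-06.com.example:drbd100/tpg1/acls \
    create iqn.2016-06.com.example:initiator

# On the test client: discover and log in; the LUN shows up as a new
# SCSI disk (in our case, /dev/sdb)
iscsiadm -m discovery -t sendtargets -p 192.168.10.1
iscsiadm -m node -T iqn.2016-06.com.example:drbd100 -p 192.168.10.1 --login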

For this test, we utilized KVM virtual machines, with a 1GiB RAM disk as the backing storage for the DRBD cluster. Replication and client connections were all made over the same Emulex OneConnect 10Gb interface. We measured the DRBD (diskless) storage at a throughput of 98.2MiB/s while running at 25,160 IOPS. Not too bad for a purely random workload.
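We haven't shown how the RAM disk was created; one simple way, assuming the brd kernel module, looks like this (rd_size is given in KiB):

# One 1GiB RAM disk; /dev/ram0 becomes the DRBD backing device
modprobe brd rd_nr=1 rd_size=1048576

The random-workload figures above were measured with the fio invocation below, run against the DRBD device: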

fio --readwrite=randread --runtime=30s --blocksize=4k --ioengine=libaio \
--numjobs=16 --iodepth=16 --direct=1 --filename=/dev/drbd100 \
--name=test0 --group_reporting

Please note that no additional tuning was done on the network or storage stack, nor on iSCSI or DRBD; everything remained at the defaults for CentOS Linux 7.2.1511. The DRBD version used was 9.0.3-1, with version 0.96.1 of DRBDmanage.
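If you want to double-check the versions on a system of your own, the usual tools will report them, for example:

cat /proc/drbd      # version of the loaded DRBD kernel module
drbdadm --version   # userland and kernel module versions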

We performed three short benchmarks, the first of which was a simple throughput test via dd:

dd if=/dev/zero of=/dev/drbd100 bs=1M count=1024 oflag=direct
[Chart: sequential write throughput, DRBD Diskless vs. iSCSI]

When writing 1MiB blocks sequentially, we see quite an improvement with DRBD: it is pretty much on par with what the back-end storage cluster itself can handle.

Random read I/O was the next test. For these tests, we leveraged the Flexible I/O tester, fio:

fio --readwrite=randread --runtime=30s --blocksize=4k --ioengine=libaio \
--numjobs=16 --iodepth=16 --direct=1 --filename=/dev/sdb --name=test0 \
--group_reporting
[Chart: random read IOPS, DRBD Diskless vs. iSCSI]

DRBD Diskless Shows Substantial Performance Improvements

Next, we performed the same test, but with a random write workload:

fio --readwrite=randwrite --runtime=30s --blocksize=4k --ioengine=libaio \
--numjobs=16 --iodepth=16 --direct=1 --filename=/dev/sdb --name=test0 \
--group_reporting
[Chart: random write IOPS, DRBD Diskless vs. iSCSI]

Again, the reduced overhead of DRBD's native communication is very much apparent.
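As an aside, it is easy to confirm from the client itself that it is running diskless. The output below is illustrative, reusing the hypothetical resource and host names from the configuration sketch above:

# On the client node: the local disk reports Diskless, while the peers'
# replicas are UpToDate and service all of the client's I/O
drbdadm status r0

r0 role:Primary
  disk:Diskless
  storage-a role:Secondary
    peer-disk:UpToDate
  storage-b role:Secondary
    peer-disk:UpToDate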

While the DRBD transport seems to perform better in these tests, there are still some use cases where one might want to choose iSCSI. Even then, you might want to at least consider using a diskless DRBD client instead.

Devin Vance

First introduced to Linux back in 1996, and using Linux almost exclusively by 2005, Devin has years of Linux administration and systems engineering under his belt. He has been deploying and improving clusters with LINBIT since 2011. When not at the keyboard, you can usually find Devin wrenching on an American motorcycle or down at one of the local bowling alleys.
