DRBD Client (diskless mode) Performance

DRBD 9 introduces an interesting new feature: DRBD® client nodes. With this feature you can have a DRBD node that has no local replica of its own. Essentially, the client node simply reads from and writes to the storage of its peer systems over the DRBD connections.
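To make the idea concrete, here is a minimal sketch of what a three-node DRBD 9 resource with one diskless client might look like. The host names, addresses, and backing devices are made up for the example; the relevant part is that the client declares disk none instead of a local backing device.

    resource r0 {
        device    /dev/drbd100;
        disk      /dev/ram0;          # backing device on the storage nodes
        meta-disk internal;

        on storage-a {
            node-id 0;
            address 10.0.0.1:7000;
        }
        on storage-b {
            node-id 1;
            address 10.0.0.2:7000;
        }
        on client-node {
            node-id 2;
            address 10.0.0.3:7000;
            disk    none;             # no local replica: a diskless DRBD client
        }

        connection-mesh {
            hosts storage-a storage-b client-node;
        }
    }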

As DRBD’s communication is designed to operate with minimal overhead, I thought it might be interesting to see how it compares to another popular solution for network-attached block storage: iSCSI. For both tests the backend storage is a highly available DRBD cluster with two synchronous replicas of the data.

For this test we utilized KVM virtual machines. A 1GiB RAM disk was used as the backing storage for the DRBD cluster. Replication and client connections all ran over the same Emulex OneConnect 10Gb interface. The DRBD storage itself measured 98.2MiB/s of throughput at 25,160 IOPS. Not too bad for a purely random workload.[1. fio --readwrite=randread --runtime=30s --blocksize=4k --ioengine=libaio --numjobs=16 --iodepth=16 --direct=1 --filename=/dev/drbd100 --name=test0 --group_reporting]

Please note that no additional tuning was done to the network or storage stack, nor to iSCSI or DRBD themselves; everything was left at the CentOS Linux 7.2.1511 defaults. The DRBD version used was 9.0.3-1, managed with DRBDmanage 0.96.1.
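For the iSCSI side of the comparison, the article does not show the attach step, but on a stock CentOS 7 system it would presumably look roughly like the following on the client (initiator) side; the portal address and target IQN here are placeholders, and after login the exported LUN appears as a local block device such as /dev/sdb.

    # Discover targets offered by the storage cluster's portal (address is an example)
    iscsiadm -m discovery -t sendtargets -p 10.0.0.1

    # Log in; the exported LUN then shows up as a local disk, e.g. /dev/sdb
    iscsiadm -m node -T iqn.2016-06.com.example:drbd100 -p 10.0.0.1 --login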

Only three short benchmarks were performed. The first was a simple sequential throughput test via dd.[1. dd if=/dev/zero of=/dev/drbd100 bs=1M count=1024 oflag=direct]

When writing 1MiB blocks sequentially we see quite an improvement with DRBD, pretty much on par with what the backend storage cluster can handle.

The next test was random read IO. For this we leveraged the Flexible IO tester (fio).[1. fio --readwrite=randread --runtime=30s --blocksize=4k --ioengine=libaio --numjobs=16 --iodepth=16 --direct=1 --filename=/dev/sdb --name=test0 --group_reporting]

Again, we see substantial performance improvements when using a DRBD diskless client.

Next we performed the same test, but with a random write workload.[1. fio --readwrite=randwrite --runtime=30s --blocksize=4k --ioengine=libaio --numjobs=16 --iodepth=16 --direct=1 --filename=/dev/sdb --name=test0 --group_reporting]

Again, the reduced overhead of DRBD's native transport is very much apparent.

While the DRBD transport seems to perform better in these tests, there are still some use cases where one might want to choose iSCSI. Even then, you might want to at least consider using a diskless DRBD client instead.
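If you are already running a DRBDmanage-controlled cluster like the one used here, turning a machine into a diskless client is only a couple of commands. This is a rough sketch with example node and resource names, assuming the assign-resource subcommand and its --client option available in DRBDmanage of this vintage.

    # Register the client machine with the drbdmanage cluster
    drbdmanage add-node client-node 10.0.0.3

    # Assign the resource to it without a local replica (diskless client)
    drbdmanage assign-resource r0 client-node --client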


Devin Vance

First introduced to Linux back in 1996, and using Linux almost exclusively by 2005, Devin has years of Linux administration and systems engineering under his belt. He has been deploying and improving clusters with LINBIT since 2011. When not at the keyboard, you can usually find Devin wrenching on an American motorcycle or down at one of the local bowling alleys.
